id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2302.13053 | Scalable Neural Network Training over Distributed Graphs | Graph neural networks (GNNs) fuel diverse machine learning tasks involving
graph-structured data, ranging from predicting protein structures to serving
personalized recommendations. Real-world graph data must often be stored
distributed across many machines not just because of capacity constraints, but
because of compliance with data residency or privacy laws. In such setups,
network communication is costly and becomes the main bottleneck to train GNNs.
Optimizations for distributed GNN training have targeted data-level
improvements so far -- via caching, network-aware partitioning, and
sub-sampling -- that work for data center-like setups where graph data is
accessible to a single entity and data transfer costs are ignored.
We present RETEXO, the first framework which eliminates the severe
communication bottleneck in distributed GNN training while respecting any given
data partitioning configuration. The key is a new training procedure, lazy
message passing, that reorders the sequence of training GNN elements. RETEXO
achieves 1-2 orders of magnitude reduction in network data costs compared to
standard GNN training, while retaining accuracy. RETEXO scales gracefully with
increasing decentralization and decreasing bandwidth. It is the first framework
that can be used to train GNNs at all network decentralization levels --
including centralized data-center networks, wide area networks, proximity
networks, and edge networks. | Aashish Kolluri, Sarthak Choudhary, Bryan Hooi, Prateek Saxena | 2023-02-25T10:42:34Z | http://arxiv.org/abs/2302.13053v3 | # Retexo: Scalable Neural Network Training over Distributed Graphs
###### Abstract
Graph neural networks offer a promising approach to supervised learning over graph data. Graph data, especially when it is privacy-sensitive or too large to train on centrally, is often stored partitioned across disparate processing units (clients) that want to minimize the communication costs during collaborative training. The fully-distributed setup takes such partitioning to its extreme, wherein the features of only a single node and its adjacent edges are kept locally with one client processor. Existing GNNs are _not_ architected for training in such setups and incur prohibitive costs therein. We propose Retexo, a novel transformation of existing GNNs that improves the communication efficiency during training in the fully-distributed setup. We experimentally confirm that Retexo offers up to \(6\) _orders of magnitude better communication efficiency_ even when training shallow GNNs, with a minimal trade-off in accuracy for supervised node classification tasks.
## 1 Introduction
Graph neural networks (GNN) are designed for learning representations over graph data, which are useful for classification, regression, and other applications. GNNs have been deployed for serving recommendations, financial fraud detection, and language modeling, and are reported to offer state-of-the-art performance (Wu et al., 2020; Zhou et al., 2020). However, their design thus far has centered mainly on the centralized setup, where the graph data is stored and processed on centralized servers. The centralized setup is not ideal due to the threat it poses to the security and privacy of users' data (Cadwalladr & Graham-Harrison, 2018; Illmer, 2021). Services built on distributed graphs are becoming popular as alternatives to centralized ones. For instance, social networks such as Signal and Mastodon do not store the social graph at centralized servers (Marlinspike, 2017; Mastodon, 2022). Graphs capturing location traces, health data, and financial transactions can also be distributed across mutually untrusting parties (Suzumura et al., 2019; Hulsen, 2020). Thus, designing machine learning protocols for distributed graphs enables the aforementioned GNN-based applications with better privacy and data sovereignty.
In this paper, we introduce the problem of learning neural networks for supervised node classification on fully-distributed graphs. In such graphs, each node stores its features and edges locally on its trusted device (client) such as a mobile or an IoT device. The clients, along with a server (of a service provider), aim to learn a machine learning model, in a supervised fashion, that can be used to classify the nodes based on their features and the graph structure. This setup applies to health monitoring over Bluetooth-proximity networks, serving recommendations in distributed social networks, and social fitness tracking applications (Dong et al., 2021; Gui et al., 2017; Madan et al., 2010).
Existing GNNs have not been designed for training in a fully-distributed setup and retrofitting them directly to this setup leads to high communication costs. For example, a standard graph convolutional network (GCN) with \(K\) trainable layers would require message-passing, i.e., propagating the layers and other intermediate node representations between clients during each round of training. The clients with ground-truth labels (_train-clients_) would engage their \(K\)-hop neighborhood in multiple client-to-client interactions in every round of training. Typically, there are hundreds of training rounds. This overhead is prohibitively high, especially for bandwidth-constrained clients. Further, clients may not be available for such communication to ensure successful completion of every training round. Therefore, it is important to devise GNNs for which training has much lower client-to-client communication costs.
We propose Retexo, the first framework for training neural networks, that are transformations of existing message-passing GNN architectures, in the fully-distributed setup. For instance, a graph convolutional network GCN can be transformed to its corresponding RetexoGCN. Unlike
GNNs, training RetexoGNNs needs only \(K\) rounds of message-passing between clients. The key idea in RetexoGNNs is to disentangle the aggregation of intermediate representations over graph edges from the end-to-end training of the model. Instead, RetexoGNNs are trained layer-by-layer and only after fully training every layer, the embeddings (logits) output by the trained layer are propagated over the graph edges. The propagated embeddings are aggregated in the way existing GNNs do and the aggregated embeddings are used as features to train the next layer.
We emphasize that Retexo offers generic flexibility in retrofitting the several base GNN architectures that exist today. Retexo assumes nothing about the structural properties of the graph, the correlations between features and attributes, or the aggregation mechanisms of the base GNN. Therefore, GNNs designed for all kinds of graph datasets, such as homophilous, heterophilous, and inductive settings, can be transformed into their distributed counterparts.
To show its wide applicability, we evaluate RetexoGNNs on common real-world graph datasets in the transductive, heterophilous, and inductive setups. We find that the classification accuracy of RetexoGNNs is within \(1.2\%\) of their respective GNNs on average in our evaluated configurations. We also measure how much data would be transferred per client to train the models if GNNs and RetexoGNNs were implemented in the fully-distributed setup. The total data transferred on client-to-client communication channels per client in RetexoGNNs is **up to 6 orders of magnitude (\(\mathbf{10^{6}\times}\)) lower** in comparison to base GNNs. The total data transferred per client across both client-to-client and client-to-server channels is up to \(20\times\) lower for training a RetexoGNN than for the base GNN.
Broader Impact. We only highlight Retexo's applicability for the fully-distributed setup; however, Retexo can be used in any distributed setup. For instance, efficiently training GNNs in the federated setup, where the graph is split into a few large subgraphs with one subgraph stored per client, is being studied extensively (Zhang et al., 2021; Wang et al., 2022; Md et al., 2021; He et al., 2021; Gandhi and Iyer, 2021). RetexoGNNs can be cheaper to train than their corresponding GNNs in the federated setup as well, due to much fewer rounds of client-to-client communication.
## 2 Background & Problem
Consider a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with nodes \(\mathcal{V}\) and edges \(\mathcal{E}\). Each node has an attribute or feature vector of \(I\) dimensions, given by the node-feature map \(\mathbf{F}:\mathcal{V}\rightarrow\mathbb{R}^{I}\), and the neighbors of a node \(v\) are denoted by the set \(\mathcal{N}(v)\).
Message-passing Graph Neural Networks. A GNN model \(\mathbf{M}:(v,\mathcal{G},\mathbf{F})\rightarrow\mathbb{R}^{O}\) takes a node, the features of all nodes, and the graph as input, and outputs the embedding for the given node \(v\). We focus on _message-passing_ GNNs, which are widely used in practice: in each round \(m\in[1,\dots,K]\), every node aggregates representations from its neighbors and combines them with its own representation, so that after \(m\) rounds a node incorporates information from its \(m\)-hop neighborhood. \(K\) is a parameter, typically between \(2\) and \(5\), specified by the architecture. Therefore, the representation of the node \(v\) after \(m\) rounds of message passing is:
\[\mathbf{Q}_{v}^{m}=\mathbf{f}_{\theta}^{m-1}(\mathsf{COMBINE}(\mathbf{Q}_{v}^{ m-1},\mathsf{AGGR}_{u\in\mathcal{N}(v)}^{m-1}(\mathbf{Q}_{u}^{m-1})))\]
Here, \(\mathbf{f}_{\theta}^{m-1}\) is a non-linear function with learnable parameters, \(\mathsf{COMBINE}\) is usually the concatenation operation, and \(\mathsf{AGGR}\) represents aggregation functions that may have learnable parameters. \(\mathsf{AGGR}\) functions vary across GNNs; for example, \(\mathsf{AGGR}\) is the mean operator in GCNs (Kipf and Welling, 2016) and a max-pooling layer in GraphSage (Hamilton et al., 2017). In essence, the \(m^{th}\) representation of a node is a non-linear transformation of an aggregate of the \((m{-}1)^{th}\) representations of its neighbors.
The goal of training a GNN is to find the parameters of the \(\mathbf{f}_{\theta}^{m-1}\) and \(\mathsf{AGGR}^{m-1}\) functions \(\forall m\in\{1\cdots K\}\). These parameters are optimized for accuracy on downstream tasks such as supervised node classification, which is our focus as well.
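To make the update rule concrete, the following is a minimal sketch of one message-passing round in Python/NumPy, assuming mean aggregation, concatenation for COMBINE, and a linear-plus-ReLU \(\mathbf{f}_{\theta}\); the function name and these specific choices are illustrative, not a particular paper's implementation.

```python
import numpy as np

def message_passing_round(Q, adj_list, W):
    """One round of message passing: COMBINE = concatenation, AGGR = mean.

    Q        -- (N, d) current node representations
    adj_list -- adj_list[v] is the list of neighbor indices N(v)
    W        -- (2d, d_out) weights of f_theta (linear map + ReLU here)
    Assumes every node has at least one neighbor.
    """
    N, d = Q.shape
    out = np.empty((N, W.shape[1]))
    for v in range(N):
        agg = Q[adj_list[v]].mean(axis=0)        # AGGR over N(v)
        combined = np.concatenate([Q[v], agg])   # COMBINE with own state
        out[v] = np.maximum(combined @ W, 0.0)   # f_theta: linear + ReLU
    return out
```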
Prior Setups. Prior work has considered two different setups for supervised node classification (see Figure 1, (a)). In the centralized setup, which is most widely used, the entire graph including node attributes and the structure is stored
Figure 1: Training GNNs for supervised node classification across three different setups—centralized, federated and fully-distributed.
and processed at one central server. The server is trusted and it bears the costs of learning the model. In contrast, the more recently proposed federated setup (Figure 1, (b)) for graph learning has a central server that does _not_ have the full graph. The graph is partitioned into sub-graphs and stored on a few clients, typically \(5\)-\(20\), and each such client is trusted with storing a large sub-graph. The central server and the clients collaborate with each other to learn a GNN, and the clients do not reveal the raw sub-graphs to the server in this process. This setup is useful when the clients may belong to different organizations (e.g. banks, drug discovery companies, or hospitals) or when the clients are servers of an organization that are available across geopolitical boundaries (Wang et al., 2022). Raw data sharing across clients is disallowed by privacy laws and regulations in such setups.
Training in the federated setup has higher costs than in the centralized setup. In every training round, the clients communicate with each other to pass intermediate representations and gradients across the cross-client edges and with the server to share intermediate models. However, the clients are still trusted "centralized" servers with large sub-graphs and the communication costs, especially client-to-client, can be optimized using their sub-graphs (see Section 6). Further, the federated graph learning libraries may ignore the cross-client edges too (Wang et al., 2022; He et al., 2021).
### Node Classification on Fully-Distributed Graphs
Unlike prior setups, we propose the _fully-distributed_ setup, which takes the federated setup to an extreme: each client holds only one node's data. Specifically, a client stores a node with its attributes and the node's edges. There is a server that knows the identities of the clients in the graph. We study the supervised node classification problem, where the server knows the labels for a small fraction of the nodes and the goal is to learn a model that can classify the remaining nodes into \(O\) known label categories. Classification can be for other nodes in the same graph (_transductive setting_) or on an unseen graph (_inductive setting_). The latter is important for evolving graphs.
We do not consider the labels as private in this work, however, our proposed solution can be adapted to the setup where the clients do not reveal their labels to the server with minor changes. Clients share only the intermediate gradients and embeddings, not raw features, as is also the case in federated learning protocols. All involved parties are assumed to be honest, i.e., the server and the clients would follow the protocol. Figure 1(c) illustrates this setup.
Concrete Application. Mobile-sensing-based proximity networks and distributed social networks are the most popular examples of this setup. Consider the proximity networks formed by users' mobile devices using sensors such as Bluetooth. Proximity networks model users' short-term and recurring physical interactions, which implies that they can be used to model social influence propagation, location traces, and infectious diseases (Pan et al., 2011; Madan et al., 2010; Hernandez-Orallo et al., 2020). Consequently, supervised node classification on proximity networks can enable important applications such as health monitoring, serving recommendations, and sentiment analysis. For instance, GNNs achieve state-of-the-art performance in predicting symptoms caused by influenza using supervised node classification on proximity networks (Dong et al., 2021). However, users might not want to reveal their sensitive attributes and edges in the physical proximity network to any external centralized services. Therefore, enabling GNNs on fully-distributed proximity networks helps such applications.
Evidently, the clients in a fully-distributed network may be bandwidth-constrained and the client-to-client communication networks may be asynchronous and unreliable (due to mobility). How do we train a GNN in the fully-distributed setup for supervised node classification efficiently?
## 3 GNNs for Fully-Distributed Setup
Existing GNNs do not naturally extend to the proposed setup since message-passing is built into the end-to-end training process. We illustrate this using a \(2\)-layer GCN.
Graph convolutional network (GCN). GCN is a canonical example of message-passing GNNs with the AGGR being a mean function, i.e., a node's representation is an average of its neighbors' representations. For a GCN:
\[\mathbf{Q}_{v}^{m}=\theta^{m-1}\cdot\mathsf{Concat}(\mathbf{Q}_{v}^{m-1}, \mathsf{Mean}_{u\in\mathcal{N}(v)}(\mathbf{Q}_{u}^{m-1}))\]
\(\mathbf{Q}^{0}\) is the input node features with dimension \(I\), and \(\theta^{m-1}\) are the weight parameters with hidden size \(H\). 1
Footnote 1: In practice, there is a non-linear activation (ReLU) on every \(\theta\).
Therefore, a GCN can be parameterized by its set of weight layers \(\{\theta^{0},\cdots,\theta^{m-1}\}\). A two layer GCN can be simply represented as \(\{\theta^{0}_{IH},\theta^{1}_{HO}\}\). \(\theta^{0}_{IH}\) represents the first layer which transforms the input node features with dimension \(I\) to the hidden dimension of size \(H\). \(\theta^{1}_{HO}\) represents the second layer which transforms the hidden features with dimension \(H\) to output embedding with dimension \(O\).
### Baseline: Fully Distributed GCNs
When a GCN is trained as-is in a fully distributed setup, the communication costs will be high. The GCN is trained across \(R\) training rounds. Each training round begins with the server sending an aggregated model to the train-clients. The train-clients then perform two message-passing rounds, one each for the forward and the backward pass, wherein
they mobilize the clients in their \(2\)-hop neighborhood. Figure 2 shows the message-passing for the forward pass.
In the forward pass, a train-client will send the first layer of the GCN to its neighbors and ask them for their initial representations, i.e., raw features (see (1) in Figure 2). 2 The \(1\)-hop neighbors will then ask their neighbors for their initial representations. The \(2\)-hop and \(1\)-hop neighbors will send their initial representations to their neighbors (2). The \(1\)-hop neighbors will use their neighbors' initial representations and the first layer to compute the hidden representations, which they send to the train-clients (3). The train-clients use the initial and hidden representations of their neighbors to compute the output embedding.
Footnote 2: Sending raw features can be avoided by using an additional non-invertible weight layer on the features before aggregation.
The backward pass would also require another round of message-passing since certain weights are shared by multiple nodes. Concretely, the train-client would have to ask its neighbors for certain factors to compute the gradient and the neighbors might need some factors from theirs too.
Appendix A gives more details of the training process. One noteworthy point here is that clients have to send layer parameters to their neighbors in the above process. This is because the server does not know the neighbors of any client.
### Problems with Distributed GCN
Challenge 1.In each training round, GCN requires many rounds of communication between the clients in \(K\)-hop neighborhoods. This leads to _high latency and data transfer volumes_. For instance, the clients in proximity networks could be mobile phones with low-bandwidth networks.
Challenge 2.Implementing the backward pass is akin to re-designing the automatic gradient computation used in the popular machine learning libraries for the fully-distributed setup. This is a challenge in itself and further demands _high availability_ from the clients. For instance, in proximity networks, it can be difficult to ensure the availability of the neighbors of a node at the same time for both forward and backward passes. The problem is exacerbated when \(2\)-hop (and higher order) neighbors are necessary. Even if everyone was available for one round, doing this for all training rounds is not practical.
We explain next why the client-to-client communication overheads for training even a 2-layer neural network in practice can be in the order of gigabytes for a client.
_Forward pass, sharing model:_ In a \(2\)-layer GCN, when the degree of a train-client is \(d\), the train-client has to first send the parameters of the first layer to its \(d\) neighbors. The size of the first layer is \(2IH\), so the number of parameters sent by the train-client is \(2IH\cdot d\). The clients in the \(1\)-hop neighborhood receive the layer; the total number of parameters received by each such client is \(2IH\). Across \(R\) rounds of training, the data transferred by a train-client just for sharing the models with neighbors is \(2IHdR\times 32\) bits. 3 Further, the data transferred by each neighbor of the train-clients is \(2IHR\times 32\) bits. The hidden representations and factors required for gradients would be \(O((I+H)\cdot d)\) and \(O(I\cdot d)\) bytes respectively.
Footnote 3: Assuming the parameters are represented using \(32\)-bit floating point precision.
Cost Illustration. Consider a \(2\)-layer GCN where \(I=1000\) and \(H=256\), and a train-client with \(d=10\) neighbors. The total data transferred by it during the forward pass of one training round would be \(2\times 1000\times 256\times 10\times 32\approx 160\) Mb. If the model is trained for \(R=400\) rounds, the total data transferred just for message-passing during the forward pass for that train-client is \(64\) Gb. Further, the neighbors of this train-client would transfer at least \(6.4\) Gb. For each train-client and their neighbors, these costs increase linearly with the degree, the input size, the size of additional pooling layers, and the number of layers \(K\).
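The arithmetic can be checked directly; the short script below reproduces the numbers in the illustration (note the text rounds \(163.84\) Mb down to \(160\) Mb before multiplying by the number of rounds).

```python
I, H, d, R = 1000, 256, 10, 400   # input dim, hidden dim, degree, training rounds
bits = 2 * I * H * d * 32         # first-layer parameters sent to d neighbors, 32-bit floats
print(bits / 1e6)                 # 163.84 Mb per training round (~160 Mb)
print(160 * R / 1e3)              # 64.0 Gb across R = 400 rounds
```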
## 4 The Retexo GNN Architecture
We present a novel graph learning architecture called Retexo that requires much fewer message-passing rounds for training. Retexo can be seen as a generic transformation of a traditional GNN into its Retexo counterpart. Retexo architectures retain the AGGR and COMBINE functions in the original GNNs. This is important for preserving accuracy in homophilous, heterophilous (Zheng et al., 2022), and inductive settings (Hamilton et al., 2017).
The graph edge structure is directly encoded as a sandwiched layer in a traditional GNN architecture. Therefore, all layers' parameters are optimized simultaneously in each
Figure 2: A single train-client \(v_{i}\) performing forward pass with its \(1\)-hop neighbor \(v_{j}\) and \(2\)-hop neighbor \(v_{k}\) in one training round.
training round. In a fully-distributed setup, this is expensive as it requires syncing parameter state between neighbors in a training round. Our framework Retexo remedies the architecture by _decoupling_ the edge structure from training of layers and delaying the message-passing to only _after_ the training of one layer completes. For instance, if a GCN has \(2\) layers then the Retexo framework will deconstruct the layers and train them sequentially one after another. After training each layer, its output logits are sent between neighbors in one message-passing round. Once the nodes receive their neighbors' logits, they will compute the AGGR and COMBINE functions and use the aggregated logits as features to train the next layer. We illustrate this in Figure 3.
The number of message-passing rounds in a RetexoGNN equals the number of layers (\(K\)) in the original GNN but, crucially, is independent of the number of training rounds (\(R\)). This drastically reduces the total number of message-passing rounds from \(O(R)\) to \(K\). Computing gradients while training the layers does not require any message-passing between clients. Furthermore, Retexo naturally extends to the federated setup, wherein a large sub-graph is stored at every client instead of a single node. Therefore, Retexo remains compatible with training improvements developed for the federated learning setup as well. Examples of such improvements include federated optimization (Reddi et al., 2020), handling asynchrony in real-world deployments (Bonawitz et al., 2019; Nguyen et al., 2022), defenses against data poisoning, and training for better robustness (Liu and Yu, 2022).
### Architecture of RetexoGNNs
For a \(K\)-layer base GNN, a RetexoGNN counterpart is a collection of \(K+1\) independent trainable models.
In RetexoGNN (Figure 3, right side), the first model is an MLP which is used to learn embeddings from the node features. 4 There is no AGGR before the first MLP since that requires propagating raw features. There are \(K\) more MLPs after the first one, each having the AGGR and COMBINE layers of the base GNN. RetexoGNN's intermediate embeddings (logits after every model) are given by:
Footnote 4: In fact, this model could be any classifier, such as a CNN.
\[\begin{aligned} \mathbf{Q}_{v}^{1} &= \mathbf{f}_{\theta}^{0}(\mathbf{Q}_{v}^{0})\\ \mathbf{Q}_{v}^{m} &= \mathbf{f}_{\theta}^{m-1}(\mathsf{COMBINE}(\mathbf{Q}_{v}^{m-1},\mathsf{AGGR}_{u\in\mathcal{N}(v)}(\mathbf{Q}_{u}^{m-1})))\\ \mathbf{f}_{\theta}^{0} &= \text{MLP}_{0}=(\theta_{IH}^{0},\theta_{HO}^{0})\\ \mathbf{f}_{\theta}^{m-1} &= \text{MLP}_{m-1}=(\theta_{OH}^{m-1},\theta_{HO}^{m-1}),\quad m\in\{2,\ldots,K+1\} \end{aligned}\]
In \((\theta_{OH}^{m-1},\theta_{HO}^{m-1})\), \(\theta_{OH}^{m-1}\) converts the aggregated embeddings to a representation of size \(H\), and \(\theta_{HO}^{m-1}\) converts that into the output of size \(O\). In our running example, the \(2\)-layer RetexoGCN can be represented by \(\{\text{MLP}_{0}:(\theta_{IH}^{0},\theta_{HO}^{0}),\text{ MLP}_{1}:(\theta_{OH}^{1},\theta_{HO}^{1}),\text{ MLP}_{2}:(\theta_{OH}^{2},\theta_{HO}^{2})\}\).
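As a sketch, the \(K+1\) models of a RetexoGNN could be instantiated as follows in PyTorch. The two-linear-layer shape of each MLP follows the parameterization above, while the \(2O\) input width of the later MLPs reflects our assumption that COMBINE is concatenation (a node's own logits joined with the aggregated neighbor logits); the helper name and example dimensions are hypothetical.

```python
import torch.nn as nn

def build_retexo_mlps(I, H, O, K):
    """K+1 independent MLPs for a RetexoGNN over a K-layer base GNN (a sketch)."""
    # MLP_0: raw features -> logits; no AGGR before it.
    mlps = [nn.Sequential(nn.Linear(I, H), nn.ReLU(), nn.Linear(H, O))]
    # MLP_1 .. MLP_K: COMBINE(own logits, aggregated neighbor logits) -> logits.
    for _ in range(K):
        mlps.append(nn.Sequential(nn.Linear(2 * O, H), nn.ReLU(), nn.Linear(H, O)))
    return mlps

mlps = build_retexo_mlps(I=1000, H=256, O=7, K=2)  # e.g., for a 2-layer base GCN
```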
### Training a RetexoGNN
To train every MLP, we use traditional federated learning approaches. Before training each MLP except the first one, we perform a message-passing round to collect the embeddings from each node's neighbors. We outline the training protocol here (see Algorithm 1) and provide full details of the federated learning and message-passing protocols in Appendix B.
Federated Learning. The server and the clients together train the \(K+1\) models sequentially. For each model, the training proceeds in rounds. The server samples a subset of the train-clients for each round and delegates training/validation tasks to them. In each training task, the server shares the model parameters with the selected clients. The clients then perform a forward and backward pass on the model using their local data and return the updated model to the server. In each validation task, the server simply asks the clients to report their accuracy. Notice that for the AGGR layer, a client uses the embeddings collected from its neighbors in the previous message-passing round. Each client has a single input (a feature vector and a label); hence, it carries out only a single forward and backward pass per training round. Therefore, this is equivalent to the standard FedSGD training protocol. For the federated setup with multiple nodes per client, one can instead use the FedAvg protocol, wherein multiple forward and backward passes are carried out locally on the client (McMahan et al., 2017).
Message-Passing. Before training the \(m^{th}\) MLP, the server initializes a message-passing round by sending all the clients the parameters of the \((m{-}1)^{th}\) trained MLP, i.e., it syncs the latest model with the clients. The clients then compute embeddings from it and share them with their neighbors. For example, consider a RetexoGNN with three MLPs, and say the server is training the third MLP. At this point, the server sends the second model to all clients. Now all clients have the first MLP (\(\text{MLP}_{0}\)), their own first embeddings output by it (\(\mathbf{Q}_{v}^{1}\)), the first embeddings obtained from their neighbors (\(\mathbf{Q}_{u}^{1}\), \(u\in\mathcal{N}(v)\)) in the previous message-passing round, and the second MLP (\(\text{MLP}_{1}\)). The clients locally compute \(\mathbf{Q}_{v}^{2}\) using this
Figure 3: Base GNN to its RetexoGNN.
information. The clients then share \(\mathbf{Q}_{v}^{2}\) with their neighbors to complete the current message-passing round.
Retexo allows the communication network to be asynchronous. The server starts a message-passing round and lets the computation take as long as needed. In proximity networks, clients can communicate information with neighbors when in close proximity using Nearby Share or Airdrop protocols (Google; Apple). In distributed social networks, the clients do not need to be in proximity or have public IP addresses. Instead, WebRTC can be used (WebRTC).
In highly asynchronous networks, one successful message-passing round, including the sync, can take anywhere from a few seconds to a few hours before the training procedure can start. This underscores the importance of having only a few message-passing rounds. We emphasize that, unlike in a GNN, training a RetexoGNN requires clients to share only the embeddings and only once per message-passing round, which is practical even in weak mobile network environments. Further, having only \(K\) rounds of message-passing makes it feasible for the server to sync the model with all the clients after training it.
Not all neighbors of a train-client need to be available for a message-passing round; the client can aggregate the embeddings received from the available neighbors. Retexo is robust to such unavailability. Further experiments reported in Appendix D elaborate on this feature of RetexoGNNs.
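The self-contained NumPy simulation below sketches the layer-wise schedule in a centralized setting: train one model to completion, then perform a single message-passing round, then train the next model. For brevity each stage is a single linear classifier rather than a two-layer MLP, and the client/server machinery is abstracted away; only the ordering of training and message passing mirrors the protocol, and all names are our own.

```python
import numpy as np

def train_retexo_layerwise(X, adj_list, y, train_idx, n_classes, K=2, rounds=200, lr=0.5):
    """Lazy message passing, simulated centrally (a sketch, not the full protocol).

    Assumes every node has at least one neighbor.
    """
    rng = np.random.default_rng(0)
    feats, models = X, []
    for stage in range(K + 1):
        W = 0.01 * rng.standard_normal((feats.shape[1], n_classes))
        for _ in range(rounds):                    # train this stage only
            z = feats[train_idx] @ W
            p = np.exp(z - z.max(1, keepdims=True))
            p /= p.sum(1, keepdims=True)
            p[np.arange(len(train_idx)), y[train_idx]] -= 1.0   # softmax CE gradient
            W -= lr * feats[train_idx].T @ p / len(train_idx)
        models.append(W)
        if stage < K:                              # one message-passing round
            logits = feats @ W
            agg = np.stack([logits[nbrs].mean(0) for nbrs in adj_list])
            feats = np.concatenate([logits, agg], axis=1)       # COMBINE(own, AGGR)
    return models, feats @ models[-1]              # final predictions (logits)
```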
## 5 Evaluation
We evaluate RetexoGNNs on \(2\) aspects:
**Performance.** What accuracy do RetexoGNNs achieve in the transductive, inductive, and heterophilous settings?
**Communication Efficiency.** How do RetexoGNNs' communication costs compare to those of the baseline GNNs in the fully-distributed setup?
### Experimental Setup
We consider three widely-used traditional GNNs that are known to perform well across most settings: GCN (Kipf and Welling, 2016), GraphSAGE (Hamilton et al., 2017), and GAT (Velickovic et al., 2017). 5 For GraphSAGE, we use the max-pooling layer for aggregation.
Footnote 5: We use simple mean aggregation instead of localized spectral convolution for GCN as described here (Hamilton et al., 2017).
Datasets. For the transductive setting, we evaluate on \(7\) datasets: Cora, Citeseer, PubMed, Facebook, LastFMAsia, Wiki-cooc, and Roman-empire. The first \(5\) datasets are commonly used for the homophilous setting (Wu et al., 2020) and the latter \(2\) have been introduced recently for the heterophilous setting (Anonymous, 2023). We use \(5\%\) to \(10\%\) of the nodes each for training and validation: \(10\%\) for the smaller datasets, so that there are enough nodes for training, and \(5\%\) for the larger ones. For the inductive setting, we construct evolving graphs from the homophilous-transductive datasets, similar to prior work (Zeng et al., 2019). For the homophilous-transductive and inductive settings, we use \(2\)-layer GNNs, and for the heterophilous-transductive setting, we use \(5\)-layer GNNs, as recommended in (Anonymous, 2023). We then construct the RetexoGNNs that correspond to these GNNs. We provide full details on the datasets, splits, and architectures in Appendix C.1.
Implementation & Training. We first implemented the baseline GNNs and their respective RetexoGNNs in the centralized setup. We further implemented RetexoGNNs in the FederatedScope library (Wang et al., 2022) to confirm that our centralized implementations transfer to the fully-distributed setup. Our designs are carefully implemented such that all operations performed in the centralized setup can be extended to the fully-distributed setup as is. We use only the SGD optimizer, with momentum and weight decay, since other optimizers lead to unpredictable results when there is only one node per client. For similar reasons, we do not use dropout or batch normalization.
For measuring communication costs, we do not run the baseline GNNs and RetexoGNNs on a real distributed network; instead, we report the communication efficiency (data transferred between clients) from our simulations in the centralized setup. To emulate practical implementations of federated learning protocols, we diverge from traditional epoch-based training in the centralized setup to
round-based training (Bonawitz et al., 2019). In each round, we randomly sample a batch of train nodes and validation nodes, up to \(1024\) in total, independently of the previous rounds. We fix the total number of rounds to \(400\) in the transductive and inductive settings and \(1000\) in the heterophilous setting. We sample the neighbors considered for aggregation up to a maximum of \(25\) at every hop, as in (Anonymous, 2023).
We report accuracy of all models under their best hyperparameter settings found using grid search (see Appendix C). We also separately report validation accuracies with early-stopping to show how it can further reduce communication costs for a small drop in accuracy.
There is no single good criterion to early-stop across all evaluated configurations, so we manually choose a patience of \(30\), since that is much smaller than \(400\) and gave us the best results for GNNs in most configurations. In Figure 4 (right side) we report the results of these experiments. Consider the example of GraphSAGE vs. RetexoSage. On client-to-client channels, the total data transferred for GraphSAGE is about \(3.3\times 10^{6}\) MB and for RetexoSage is \(1.13\) MB, which is \(10^{6}\times\) lower. On both channels together, the total data transferred for GraphSAGE and RetexoSage is about \(4.3\times 10^{6}\) MB and \(2.4\times 10^{5}\) MB respectively, which is \(20\times\) lower. We report the remaining evaluated configurations in Appendix D.
To summarize, RetexoGNNs by design have minute communication costs, up to \(10^{6}\times\) lower, on client-to-client channels compared to GNNs even with early-stopping. Further, as GNNs become deeper, with higher \(K\) for instance in the heterophilous setting, the difference in communication costs between GNNs and RetexoGNNs will only increase.
## 6 Related Work
Training GNNs in the federated setup for node classification tasks on a single connected graph is popular; see the surveys (He et al., 2021; Liu and Yu, 2022). Some protocols (Zheng et al., 2021; Wang et al., 2020; Chen et al., 2021) ignore cross-client edges, i.e., the clients do not propagate messages across such edges. The remaining protocols (Zhang et al., 2021; Wan et al., 2022; Yao and Joe-Wong, 2022; Chen et al., 2021; Md et al., 2021) aggregate information from cross-client edges, either directly by propagating messages or by generating synthetic graphs to mimic cross-client graph information. However, none of these works considers the fully-distributed setup, and they are also motivated differently; for instance, one motivation is to increase the efficiency of training on large graphs using distributed computation with shared memory. Further, these protocols do not extend to the fully-distributed setup since they either assume or only work when a large number of nodes is stored on one client. (Zhang et al., 2021) relies on each client generating synthetic graphs based on the subgraph it stores. (Wan et al., 2022) requires an initialization step of partitioning the graph across clients to minimize inter-client communication. In contrast, it is straightforward to extend Retexo to the federated setup considered by these protocols since, outside of message-passing, Retexo mimics the typical federated learning setup irrespective of the number of nodes per client. RetexoGNNs would still require only \(K\) rounds of message-passing across the clients.
Alternative federated setups have also been studied for node and graph classification tasks where clients store disjoint graphs; none of the protocols designed for these setups apply to ours. In the federated setup for graph classification, the clients store multiple full graphs, such as molecules, where each graph has a label associated with it. The task is to collaboratively learn a GNN that the clients can use to label their local graphs (Jiang et al., 2020; He et al., 2021; Xie et al., 2021). The vertically federated setup considers federated graph learning where the same nodes are present across different clients but their features are split among the clients (Chen et al., 2020). Protocols have also been designed for user-item bipartite graphs to serve item recommendations (Wu et al., 2021). (Scardapane et al., 2020) propose a serverless distributed protocol to train a GNN, where clients have their own communication channels between them, which may differ from the actual graph edges. This protocol also relies on the clients storing large subgraphs and incurs significant communication costs for distributed consensus in the absence of a server. Extending
| Method | Cora | Citeseer | PubMed | Facebook | LastFMAsia | Wiki-cooc | Roman-empire |
|---|---|---|---|---|---|---|---|
| MLP | 0.644 ± 0.011 | 0.652 ± 0.007 | 0.779 ± 0.006 | 0.710 ± 0.007 | 0.645 ± 0.007 | 0.895 ± 0.005 | 0.653 ± 0.004 |
| GCN | 0.814 ± 0.014 | 0.710 ± 0.007 | 0.825 ± 0.003 | 0.871 ± 0.002 | 0.804 ± 0.007 | 0.918 ± 0.005 | 0.752 ± 0.004 |
| RetexoGCN | 0.763 ± 0.011 | 0.702 ± 0.011 | 0.801 ± 0.008 | 0.816 ± 0.010 | 0.756 ± 0.010 | 0.940 ± 0.004 | 0.804 ± 0.005 |
| GraphSAGE | 0.805 ± 0.010 | 0.699 ± 0.006 | 0.815 ± 0.004 | 0.894 ± 0.005 | 0.815 ± 0.005 | 0.936 ± 0.007 | 0.815 ± 0.005 |
| RetexoSage | 0.785 ± 0.010 | 0.705 ± 0.010 | 0.806 ± 0.010 | 0.843 ± 0.009 | 0.788 ± 0.010 | 0.9616 ± 0.010 | 0.835 ± 0.004 |
| GAT | 0.813 ± 0.014 | 0.707 ± 0.008 | 0.815 ± 0.003 | 0.877 ± 0.006 | 0.808 ± 0.009 | 0.976 ± 0.009 | 0.800 ± 0.009 |
| RetexoGAT | 0.802 ± 0.013 | 0.711 ± 0.006 | 0.804 ± 0.011 | 0.851 ± 0.006 | 0.807 ± 0.006 | 0.972 ± 0.004 | 0.816 ± 0.005 |

Table 1: Micro-F1 scores for node classification in the transductive setting.
| Method | Cora | Citeseer | PubMed | Facebook | LastFMAsia |
|---|---|---|---|---|---|
| MLP | 0.593 ± 0.012 | 0.623 ± 0.020 | 0.769 ± 0.031 | 0.708 ± 0.005 | 0.603 ± 0.010 |
| GCN | 0.747 ± 0.016 | 0.689 ± 0.018 | 0.828 ± 0.009 | 0.850 ± 0.007 | 0.757 ± 0.005 |
| RetexoGCN | 0.670 ± 0.021 | 0.673 ± 0.015 | 0.807 ± 0.012 | 0.810 ± 0.008 | 0.712 ± 0.013 |
| GraphSAGE | 0.709 ± 0.024 | 0.684 ± 0.020 | 0.816 ± 0.007 | 0.876 ± 0.006 | 0.760 ± 0.011 |
| RetexoSage | 0.722 ± 0.025 | 0.664 ± 0.027 | 0.808 ± 0.010 | 0.836 ± 0.007 | 0.741 ± 0.010 |
| GAT | 0.769 ± 0.015 | 0.685 ± 0.017 | 0.810 ± 0.008 | 0.827 ± 0.019 | 0.743 ± 0.014 |
| RetexoGAT | 0.743 ± 0.030 | 0.683 ± 0.018 | 0.808 ± 0.011 | 0.860 ± 0.006 | 0.774 ± 0.012 |

Table 2: Micro-F1 scores for node classification in the inductive setting.
Retexo to these setups is an interesting future work.
## 7 Conclusion
We have presented new neural network architectures, RetexoGNNs, that are practical to train on fully-distributed graphs, as they offer significantly lower communication costs for training compared to existing GNN architectures.
To reproduce our experiments, we release our code here. 6
Footnote 6: [https://github.com/aashishkolluri/retexognn-prototype](https://github.com/aashishkolluri/retexognn-prototype)
|
2302.05631 | A Survey on Spectral Graph Neural Networks | Graph neural networks (GNNs) have attracted considerable attention from the
research community. It is well established that GNNs are usually roughly
divided into spatial and spectral methods. Despite that spectral GNNs play an
important role in both graph signal processing and graph representation
learning, existing studies are biased toward spatial approaches, and there is
no comprehensive review on spectral GNNs so far. In this paper, we summarize
the recent development of spectral GNNs, including model, theory, and
application. Specifically, we first discuss the connection between spatial GNNs
and spectral GNNs, which shows that spectral GNNs can capture global
information and have better expressiveness and interpretability. Next, we
categorize existing spectral GNNs according to the spectrum information they
use, i.e., eigenvalues or eigenvectors. In addition, we review major theoretical
results and applications of spectral GNNs, followed by a quantitative
experiment to benchmark some popular spectral GNNs. Finally, we conclude the
paper with some future directions. | Deyu Bo, Xiao Wang, Yang Liu, Yuan Fang, Yawen Li, Chuan Shi | 2023-02-11T09:16:46Z | http://arxiv.org/abs/2302.05631v1 | # A Survey on Spectral Graph Neural Networks
###### Abstract
Graph neural networks (GNNs) have attracted considerable attention from the research community. It is well established that GNNs are usually roughly divided into spatial and spectral methods. Despite that spectral GNNs play an important role in both graph signal processing and graph representation learning, existing studies are biased toward spatial approaches, and there is no comprehensive review on spectral GNNs so far. In this paper, we summarize the recent development of spectral GNNs, including model, theory, and application. Specifically, we first discuss the connection between spatial GNNs and spectral GNNs, which shows that spectral GNNs can capture global information and have better expressiveness and interpretability. Next, we categorize existing spectral GNNs according to the spectrum information they use, _i.e_., eigenvalues or eigenvectors. In addition, we review major theoretical results and applications of spectral GNNs, followed by a quantitative experiment to benchmark some popular spectral GNNs. Finally, we conclude the paper with some future directions.
## 1 Introduction
Graph neural networks (GNNs) have shown strong performance in graph-related tasks, including node classification [18, 17], link prediction [16], and graph classification [20]. In general, GNNs can be divided into spatial and spectral methods according to different definitions of graph convolution. Spatial methods, also known as message-passing neural networks (MPNNs) [14], perform graph convolution in the vertex domain by aggregating neighborhood information along graph structures. In contrast, spectral methods use spectral graph theory [15] to transform convolutions into products in the frequency domain. Over the past few years, research on GNNs has focused on spatial methods because of their flexibility and scalability [1]. On the contrary, research on spectral methods is somewhat under-explored.
Spectral GNNs have related but different views from spatial methods and play an important role in graph representation learning. First, **spatial and spectral GNNs capture different information**. Spatial GNNs aggregate node features layer by layer. Nodes can only capture information within a fixed distance, emphasizing local information. In contrast, spectral GNNs transform all node features into weighted sums of different eigenvectors via graph Fourier transform, which naturally captures global information. Second, **spatial and spectral GNNs use different design principles**. Spatial GNNs involve node-based updating, where the gradients flow between the connected nodes only and therefore have lower complexity, but their expressive power is bounded from above by the 1-Weisfeiler-Lehman (WL) test [21]. Spectral GNNs involve feature-based updating, where each feature is filtered by the fully-connected eigenspaces, and messages are passed among all nodes simultaneously. Although more complex, feature-based updating breaks the localized property and is thus more powerful than node-based updating [1]. Third, **spatial and spectral GNNs have different interpretability**. Spatial GNNs require a post-hoc interpretation strategy, which aims to find the most important graph structures related to a prediction, _e.g_., nodes, edges, or subgraphs [20]. Spectral GNNs are interpretable models in which the learned graph filters can directly state the most important frequency information associated with the labels, _e.g_., low-, medium-, and high-frequencies.
The above three points show that spectral GNNs are important in designing and explaining GNNs. However, little effort has been made to summarize the development of spectral methods. On the one hand, existing surveys of GNNs [20, 19] focus on spatial methods. On the other hand, spectral GNNs are close to graph signal processing (GSP). Ortega _et al_. [1] and Dong _et al_. [1] are two surveys on traditional GSP that formally define some fundamental concepts, such as filtering and sampling, but they do not explain the connection between spectral filtering and graph representation learning in detail. Besides, while both eigenvalues and eigenvectors play a role in the spectral domain, the latter has been largely ignored in previous literature.
In this paper, we provide a comprehensive and systematic review of spectral GNNs. Specifically, we first analyze the differences between spatial and spectral GNNs and introduce the unique challenges of designing spectral methods. Then, we differentiate existing methods into two categories: eigenvalue-based and eigenvector-based, which correspond to the design of filters and basis functions in signal processing. Within each category, there are three subcategories, and we further analyze their strengths and weaknesses in terms of effectiveness and efficiency. Next, we introduce major theoretical results and important applications of spectral GNNs. We also conduct a quantitative experiment to test the performance and complexity of different graph filters on both homophilic and heterophilic benchmarks. Finally, we discuss a few promising future directions in spectral GNNs.
## 2 Challenges and Concepts
Spectral GNNs utilize neural networks to learn representations from spectral information. The design of spectral GNNs is not trivial, which poses the following three challenges:
* **Diversity of spectral information.** Spectral GNNs aim to learn node or graph representations from spectral information, which is informative but hard to explore. For example, eigenvalues and eigenvectors indicate the global and local structures [14], and the Fourier coefficients reflect the energy of signals on different harmonics. These spectral features with different meanings and tensor sizes pose the first challenge.
* **Complexity of graph data.** There are various types of graph data in real-world applications, such as dynamic graphs, heterogeneous graphs, hyper-graphs, etc. Most of them cannot be represented by a basic adjacency matrix. Therefore, how to extend the idea of spectral convolution to graph data other than undirected graphs is indispensable for spectral GNNs.
* **Scalability of methodology.** The complexity of the eigenvalue decomposition is the cube of the number of nodes. Besides, explicitly constructing the fully-connected eigenspaces also brings a quadratic time and space complexity. Both operations are important but expensive for spectral GNNs. How to make spectral GNNs scalable to large-scale graphs is a great challenge for spectral GNNs.
Section 3 reviews spectral GNNs; each approach addresses at least one of these three challenges. Before that, we first introduce some important concepts to give a general understanding of spectral GNNs.
Assume that \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is a graph, where \(\mathcal{V}\) is the node set with \(|\mathcal{V}|=N\) and \(\mathcal{E}\) is the edge set. Let \(\mathbf{A}\in\{0,1\}^{N\times N}\) be the adjacency matrix of \(\mathcal{G}\) and the normalized graph Laplacian matrix is defined as \(\mathbf{L}=\mathbf{I}_{N}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), where \(\mathbf{D}\) is a diagonal degree matrix with \(D_{i,i}=\sum_{j}A_{i,j}\) and \(\mathbf{I}_{N}\) denotes the identity matrix.
Eigenvalue Decomposition (EVD). The graph Laplacian matrix can be decomposed as \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\), where \(\mathbf{\Lambda}=\text{diag}\{\lambda_{1},\cdots,\lambda_{N}\}\) is a diagonal matrix of eigenvalues and \(\mathbf{U}=[\mathbf{u}_{1},\cdots,\mathbf{u}_{N}]\) contains the corresponding eigenvectors. If \(\mathbf{L}\) is symmetric, the eigenvalues are real and satisfy \(0\leq\lambda_{1}\leq\cdots\leq\lambda_{N}\leq 2\). Otherwise, both \(\mathbf{\Lambda}\) and \(\mathbf{U}\) will be complex. Unless emphasized otherwise, graphs are assumed to be undirected.
Graph Fourier Transform (GFT). The discrete Fourier transform (DFT) decomposes an input signal or function into a weighted sum of orthogonal bases. Through EVD, we have \(\mathbf{L}\mathbf{u}_{i}=\lambda_{i}\mathbf{u}_{i}\) and \(\lambda_{i}=\mathbf{u}_{i}^{\top}\mathbf{L}\mathbf{u}_{i}\). Note that the eigenvalues reflect the frequency/smoothness of the corresponding eigenvectors. Since the eigenvectors are normalized and orthogonal, _i.e._, \(||\mathbf{u}_{i}||=1\) and \(\mathbf{u}_{i}^{\top}\mathbf{u}_{j}=\delta_{ij}\), they can be used as orthogonal bases. Given a graph signal \(\mathbf{x}\in\mathbb{R}^{N\times 1}\), its GFT and inverse GFT are defined as:
\[\hat{\mathbf{x}}=\mathbf{U}^{\top}\mathbf{x},\;\mathbf{x}=\mathbf{U}\hat{ \mathbf{x}}, \tag{1}\]
where \(\hat{\mathbf{x}}\) are the Fourier coefficients of \(\mathbf{x}\).
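These definitions are easy to verify numerically; below is a small worked example on a 4-node path graph (the signal values are arbitrary and only illustrative).

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(A.sum(1) ** -0.5)
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized graph Laplacian

lam, U = np.linalg.eigh(L)                    # EVD: 0 <= lam_1 <= ... <= 2
x = np.array([1.0, -2.0, 0.5, 3.0])           # a graph signal
x_hat = U.T @ x                               # GFT (Eq. 1)
assert np.allclose(U @ x_hat, x)              # inverse GFT recovers x
```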
Based on the eigenvalues and eigenvectors of graph Laplacian, we can define the convolutional in the Fourier domain.
Spectral Graph Convolution. The convolution between a filter \(f\) and a signal \(\mathbf{x}\) in the spatial domain is equal to their product in the spectral domain. Through the GFT, the spectral graph convolution is defined as
\[f*_{G}\mathbf{x}=\mathbf{U}\left(\left(\mathbf{U}^{\top}f\right)\odot\left( \mathbf{U}^{\top}\mathbf{x}\right)\right), \tag{2}\]
where \(f*_{G}\mathbf{x}\) is the convolution and \(\odot\) is the element-wise product. Generally, spectral GNNs parameterize \(\mathbf{U}^{\top}f\) with a learnable diagonal matrix \(\mathbf{G}=\text{diag}\{g_{1},\cdots,g_{N}\}\) as the spectral filter, which can be optimized by gradient descent. We can then rewrite the spectral graph convolution as:
\[f*_{G}\mathbf{x}=\mathbf{U}\mathbf{G}\mathbf{U}^{\top}\mathbf{x}=(g_{1} \mathbf{u}_{1}\mathbf{u}_{1}^{\top}+\cdots+g_{N}\mathbf{u}_{N}\mathbf{u}_{N}^ {\top})\mathbf{x}, \tag{3}\]
where \(\mathbf{u}_{i}\mathbf{u}_{i}^{\top}\in\mathbb{R}^{N\times N}\) is the \(i\)-th fully-connected eigenspace.
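Continuing the small example above, spectral filtering per Eq. (3) amounts to rescaling each Fourier coefficient by a learnable value \(g_{i}\); the low-pass choice of \(\mathbf{G}\) here is illustrative.

```python
g = np.array([1.0, 0.6, 0.2, 0.0])    # diagonal of G: damp high frequencies
x_filtered = U @ (g * (U.T @ x))       # f *_G x = U G U^T x, per Eq. (3)
```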
## 3 Spectral Graph Neural Networks
In this section, we classify existing spectral GNNs into two categories: eigenvalue-based and eigenvector-based, according to the spectral information they used. Within each category, we will introduce three subcategory methods and explain their advantages and disadvantages as well as the connection in detail. Additionally, we present advances in theory and application, and benchmark existing methods on six datasets. The overall framework can be seen in Figure 1.
Figure 1: Framework of this survey.
### Eigenvalue-based Spectral GNNs
Eigenvalue-based spectral GNNs correspond to the design of filters in signal processing. Approaches belonging to this category aim to learn a good filter that captures the most important frequency information of input graph signals. In this section, we introduce three types of graph filters: advanced filters, polynomial filters, and linear filters, from which we can observe how the spectral GNNs evolve to reduce complexity and scale to large graphs, along with the declining of expressive power.
**Advanced Filters**
We refer to the spectral GNNs that explicitly take eigenvalues as the input of neural networks as advanced filters. Since the decomposition of the graph Laplacian is time-consuming, advanced filters usually truncate the graph spectrum, _i.e._, use the smallest-\(k\) eigenvalues and eigenvectors, to reduce complexity. Although computationally intensive, such methods can capture the geometric information hidden in the eigenvalues. For example, the second-smallest eigenvalue indicates the algebraic connectivity of a graph.
SpectralCNN [14] is the first advanced graph filter, which treats the graph filter as the parameters of neural networks and directly optimizes it by gradient descent:
\[\mathbf{h}_{j}^{(l+1)}=\sigma(\mathbf{U}\sum_{i=1}^{d_{in}}\mathbf{G}_{ij}^{(l )}\mathbf{U}^{\top}\mathbf{h}_{i}^{(l)})\quad(j=1\cdots d_{out}), \tag{4}\]
where \(\sigma\) is the activation function and \(\mathbf{h}_{i}^{(l)}\in\mathbb{R}^{N\times 1}\) is the \(i\)-th dimension of the representations in the \(l\)-th layer. SpectralCNN generalizes the design of convolutional neural networks (CNNs) to graphs and assigns each pair of input and output features an independent graph filter, _i.e._, \(\mathbf{G}_{ij}^{(l)}\). Therefore, the total number of parameters in one convolution layer is \(\mathcal{O}(k^{2}d_{in}d_{out})\). More parameters enhance the fitting ability of SpectralCNN but also bring high time and space complexity. Additionally, the non-parametric graph filters make the model brittle: when there are multiple eigenvalues, SpectralCNN cannot handle the ambiguity of eigenvectors [15].
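A sketch of one SpectralCNN layer per Eq. (4), using a truncated spectrum of \(k\) eigenvectors; packing the per-(input, output)-feature diagonal filters into a single tensor is our own choice.

```python
import numpy as np

def spectral_cnn_layer(H_in, U_k, G):
    """H_in: (N, d_in) input features; U_k: (N, k) smallest-k eigenvectors;
    G: (d_in, d_out, k) learnable diagonal filters, one per (i, j) pair."""
    d_in, d_out, k = G.shape
    H_hat = U_k.T @ H_in                     # Fourier coefficients, (k, d_in)
    out = np.zeros((U_k.shape[0], d_out))
    for j in range(d_out):
        # Filter each input channel i with G_ij, sum, and transform back.
        s = sum(G[i, j] * H_hat[:, i] for i in range(d_in))
        out[:, j] = U_k @ s
    return np.maximum(out, 0.0)              # sigma = ReLU
```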
LanczosNet [11] uses spectral convolution to define a multi-scale graph representation learning framework. Spatial GNNs construct multi-scale representations by stacking multiple message-passing layers, _i.e._, calculating \(\mathbf{A},\mathbf{A}^{2},\cdots,\mathbf{A}^{K}\) recurrently, which is infeasible for large graphs. LanczosNet leverages the fact that \(\mathbf{A}^{K}=\mathbf{U}\mathbf{\Lambda}^{K}\mathbf{U}^{\top}\) to reduce complexity and capture long-range information in the spectral domain, which is more efficient. The long-range spectral filter is defined as:
\[L_{i}(\mathcal{I})=\sum_{t=1}^{k}f_{i}(\lambda_{t}^{\mathcal{I}_{1}},\lambda_{t }^{\mathcal{I}_{2}},\cdots,\lambda_{t}^{\mathcal{I}_{|\mathcal{I}|}})\mathbf{ u}_{t}\mathbf{u}_{t}^{\top}, \tag{5}\]
where \(f_{i}\) is an element-wise function, such as multilayer perceptron (MLP) and \(\mathcal{I}=\{10,20,\cdots,50\}\). For short-range information, LanczosNet directly uses spatial convolution. The overall framework is defined as:
\[\mathbf{h}^{(l+1)}=[\mathbf{A}\mathbf{h}^{(l)}\cdots\mathbf{A}^{K}\mathbf{h}^ {(l)},L_{1}(\mathcal{I})\mathbf{h}^{(l)}\cdots L_{T}(\mathcal{I})\mathbf{h}^{( l)}]\mathbf{W}. \tag{6}\]
It is worth noting that LanczosNet takes the eigenvalues as input and outputs the long-range graph filter. By leveraging the correspondence between eigenvalues and eigenvectors, LanczosNet learns representations that are invariant to the permutation of nodes and eigenvectors. The design of parameterizing graph filters as functions of the eigenvalues was first proposed by Defferrard _et al._ [13] and continued in subsequent work.
Specformer [1] is a recent work on advanced graph filters that considers the set relationships between eigenvalues, where rich geometric information is embedded. For example, the algebraic multiplicity of eigenvalue 0 reflects the number of connected components in the graph. But element-wise graph filters are not aware of such statistics. Therefore, Specformer uses Transformer [18] as a set-to-set graph filter to learn the set relationships between eigenvalues:
\[\mathbf{G}=\text{Transformer}(\rho(\mathbf{\Lambda})), \tag{7}\]
where \(\rho(\cdot)\) is a positional encoding function that vectorizes the scalar eigenvalues. Theoretically, the set-to-set graph filter is a generalization of element-wise filters and is therefore more expressive. However, the quadratic complexity of the Transformer further increases the complexity of Specformer and makes it impossible to scale to large graphs.
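A minimal sketch in the spirit of Eq. (7): eigenvalues are vectorized by a sine/cosine encoding \(\rho\), contextualized by a Transformer encoder, and projected to one filter value per eigenvalue. All dimensions and the exact form of \(\rho\) are illustrative assumptions, not Specformer's actual configuration.

```python
import torch
import torch.nn as nn

def rho(lam, d=16):
    """Encode scalar eigenvalues as d-dim vectors (sine/cosine features)."""
    freqs = torch.arange(1, d // 2 + 1, dtype=torch.float32)
    ang = lam.unsqueeze(-1) * freqs
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)   # (N, d)

layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(16, 1)

lam = torch.sort(torch.rand(8) * 2).values   # 8 eigenvalues in [0, 2]
z = encoder(rho(lam).unsqueeze(0))           # set-to-set interaction over eigenvalues
g = head(z).squeeze()                        # one filter value per eigenvalue
```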
**Summary.** The advanced filters are the first kind of spectral GNNs, which has great potential in graph representation learning. However, the explicit matrix factorization is time-consuming and prevents advanced filters from scaling to large graphs. Therefore, it is important to find suitable domains for advanced filters, such as molecular representation learning [1].
**Polynomial Filters**
The polynomial filters are special cases of advanced filters that simplify the neural networks into polynomial functions. The greatest advantage of polynomial filters is that they avoid explicit EVD while preserving arbitrary filtering ability. In addition to reducing the time complexity, a polynomial filter is also localized, _i.e._, only nodes within \(K\) hops are connected. The localization property ensures that graph filters remain sparsely connected in space and reduces the memory cost. Therefore, polynomial filters have attracted considerable attention in recent years.
The basic polynomial filter is the weighted sum of graph Laplacian of different orders, which can be defined as:
\[\mathbf{G} =\theta_{0}\mathbf{I}+\theta_{1}\mathbf{\Lambda}+\cdots+\theta_{K} \mathbf{\Lambda}^{K}, \tag{8}\] \[\mathbf{UGU}^{\top} =\theta_{0}\mathbf{I}+\theta_{1}\mathbf{L}+\cdots+\theta_{K} \mathbf{L}^{K},\]
where \(\{\theta_{0},\theta_{1},\cdots,\theta_{K}\}\) are the coefficients of the polynomial. However, there are some disadvantages. For example, the polynomial terms \(\{\mathbf{L},\mathbf{L}^{2},\cdots,\mathbf{L}^{K}\}\) are non-orthogonal, which may affect the convergence of models. Many works try to improve the design of polynomial filters, most of which are based on approximation theory, e.g., Chebyshev polynomials, Bernstein polynomials, etc.
ChebNet [1] uses Chebyshev expansion to construct a polynomial filter. Due to its orthogonality, ChebNet can be trained to converge quickly with a small number of polynomial terms. The Chebyshev polynomial filter is defined as:
\[\mathbf{G}=\sum_{k=1}^{K}\theta_{k}\mathcal{T}_{k}(\tilde{\mathbf{A}}), \tag{9}\]
where \(\tilde{\mathbf{A}}=\frac{2\mathbf{A}}{\lambda_{max}}-\mathbf{I}_{N}\) rescales the eigenvalues to the range \([-1,1]\), and \(\mathcal{T}_{k}(x)=2x\mathcal{T}_{k-1}(x)-\mathcal{T}_{k-2}(x)\) with \(\mathcal{T}_{0}=1,\mathcal{T}_{1}=x\) constructs the orthogonal space. Although ChebNet has good approximation ability, it may fail under certain boundary conditions, known as the Runge phenomenon (Epperson, 1987). ChebNetII (He _et al._, 2022) was then proposed to use Chebyshev interpolation to enhance the approximation of ChebNet and alleviate the over-fitting problem.
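As a concrete illustration, the filter in Eq. (9) can be applied with sparse matrix-vector products only, never forming the EVD. Below is a minimal NumPy sketch; the function name and the inclusion of the constant \(k=0\) term (a common convention) are our assumptions.

```python
import numpy as np

def chebyshev_filter(A_scaled, x, theta):
    """Apply sum_k theta_k T_k(A_scaled) x via the Chebyshev recurrence.

    A_scaled: (sparse or dense) operator rescaled so its eigenvalues
    lie in [-1, 1]. x: (N, d) node signal. theta: K+1 coefficients.
    No eigendecomposition is needed -- only mat-vec products.
    """
    t_prev, t_curr = x, A_scaled @ x                # T_0 x = x, T_1 x = A x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2 * (A_scaled @ t_curr) - t_prev   # T_k = 2x T_{k-1} - T_{k-2}
        out = out + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return out
```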
There are also other polynomial filters. For example, GPRGNN (Chien _et al._, 2021) considers the PageRank information, BernNet (He _et al._, 2021) uses Bernstein polynomial to keep the graph filter positive semidefinite, JacobiConv (Wang and Zhang, 2022) provides a more general orthogonal base through Jacobi polynomial, and DSGC (Li _et al._, 2021) designs polynomial filters on both node and attribute affinity graphs. Here we do not describe them in detail.
Another type of polynomial filter is the rational graph filter, which provides a good approximation of narrow bands, where there is a sharp change in the frequency response. The aforementioned polynomial filters need to increase their order to fit such narrow bands, which may result in numerical instability. Rational graph filters solve this problem by learning the ratio of two polynomials as the filter, which can be formulated as:
\[\mathbf{G}=\left(\sum_{q=0}^{Q}\theta_{q}\mathbf{\Lambda}^{q}\right)\left(\mathbf{I}+\sum_{p=0}^{P}\theta_{p}\mathbf{\Lambda}^{p}\right)^{-1}, \tag{10}\]
where \(Q\) and \(P\) are the orders of the two polynomials. CayleyNets (Levie _et al._, 2019) and GraphARMA (Bianchi _et al._, 2022) are two traditional rational graph filters in the complex and real domains, respectively. Rational graph filters can fit narrow bands with fewer orders. However, the calculation of the matrix inverse is time-consuming and unstable, which requires carefully designed constraints, _i.e._, \(1+\sum_{p=0}^{P}\theta_{p}\lambda_{i}^{p}\neq 0\) for all \(i=1,\ldots,N\).
**Summary.** Polynomial filters can approximate arbitrary filters, _e.g._, low-, medium-, and high-pass, and easily scale to large graphs, so they are widely used in various applications, such as multivariate time-series forecasting (Liu _et al._, 2022). However, the numerical instability caused by computing the \(K\)-th power of the Laplacian matrix needs to be carefully considered when designing polynomial filters. Spec-GN (Yang _et al._, 2022) provides a spectrum-shrinking method to increase the order of polynomial filters. One can also consider trigonometric polynomials to avoid the potential numerical problem.
**Linear Filters**
Linear filters further simplify the polynomial filters to achieve high scalability and inductive learning ability, but they lose the capability to approximate arbitrary graph filters. In general, linear filters can only scale the graph spectrum, not change its response pattern.
GCN (Kipf and Welling, 2017) is one of the most important works in the field of GNNs; it is a first-order approximation of ChebNet and can be seen as a linear filter. GCN takes the first two terms of ChebNet, _i.e._, \(\theta_{0}\mathbf{I}\) and \(\theta_{1}(\mathbf{L}-\mathbf{I})\), and assumes that \(\theta=\theta_{0}=-\theta_{1}\). This yields a linear filter:
\[\mathbf{UGU}^{\top}=\theta(2\mathbf{I}-\mathbf{L})=\theta(\mathbf{I}+ \mathbf{A})\approx\theta\tilde{\mathbf{A}}, \tag{11}\]
where \(\tilde{\mathbf{A}}\) is the renormalized adjacency matrix. Here we can see that the graph filter \(\mathbf{G}\) is linear in the input graph structure \(\tilde{\mathbf{A}}\). This explains why GCN exhibits the over-smoothing phenomenon (Li _et al._, 2018): the parameters of the neural network can only scale the graph spectrum of \(\tilde{\mathbf{A}}\) but cannot change its inherent low-pass pattern (Wu _et al._, 2019). Under repeated multiplication by the graph filter, only the trivial first eigenspace \(\mathbf{u}_{1}=[\frac{1}{\sqrt{N}},\cdots,\frac{1}{\sqrt{N}}]^{\top}\) preserves an amplitude of 1, while all other components shrink toward 0.
Although the linear property restricts the expressive power of GCN, it still inspires a series of subsequent works. PPNP (Klicpera _et al._, 2019) combines GCN with PageRank and adds a residual connection to the graph convolutional layers:
\[\mathbf{UGU}^{\top}=\theta((1-\alpha)\tilde{\mathbf{A}}+\alpha\mathbf{I}), \tag{12}\]
where \(\alpha\) is a hyperparameter that balances the low-pass filter \(\tilde{\mathbf{A}}\) and the all-pass filter \(\mathbf{I}\). Compared with GCN, PPNP has a hyperparameter to control the response function of the graph filter, but it still cannot approximate arbitrary filters. GNN-LF/HF (Zhu _et al._, 2021) further unifies all linear filters into an optimization framework. It decomposes the design of graph convolution into two objectives: fitting and regularization. By setting different hyperparameters, the fitting term can make the model approximate different filters, _e.g._, low- and high-pass. However, such methods can only approximate predefined filters and cannot learn from data, which requires a lot of expert knowledge.
FAGCN (Bo _et al._, 2021) provides a solution to adaptively learn low- or high-pass filters from data while still preserving the linear property. The key idea is to use neural networks to combine some predefined graph filters:
\[\mathbf{UGU}^{\top}=\theta(\mathbf{I}-\mathbf{M}\odot\mathbf{A}), \tag{13}\]
where \(\mathbf{M}\in\mathbb{R}^{N\times N}\) is a sparse edge weight matrix learned by neural networks. When the elements of \(\mathbf{M}\) are larger than 0, FAGCN acts as a low-pass filter, and otherwise as a high-pass filter. There are also some similar methods, such as AdaGNN (Dong _et al._, 2021), AKGNN (Ju _et al._, 2022), and ADA-UGNN (Ma _et al._, 2021). By parameterizing the hyperparameters with neural networks, linear filters can adaptively approximate different filters, but their expressive power is still not as good as that of the two aforementioned families.
**Summary.** Generally, linear filters are equivalent to basic spatial GNNs. Taking GCN as an example, if we assign each feature an independent filter, then the spectral filter becomes:
\[\mathbf{h}_{j}^{(l+1)} =\sigma(\mathbf{U}\sum_{i=1}^{d_{in}}\theta_{ij}\tilde{\mathbf{\Lambda}}\mathbf{U}^{\top}\mathbf{h}_{i}^{(l)})=\sigma(\sum_{i=1}^{d_{in}}\theta_{ij}\mathbf{U}\tilde{\mathbf{\Lambda}}\mathbf{U}^{\top}\mathbf{h}_{i}^{(l)})\] \[=\sigma(\sum_{i=1}^{d_{in}}\theta_{ij}\tilde{\mathbf{A}}\mathbf{h}_{i}^{(l)})=\sigma(\tilde{\mathbf{A}}\mathbf{h}^{(l)}\mathbf{\theta}_{j}), \tag{14}\]
where \(\mathbf{\theta}_{j}\in\mathbb{R}^{d_{in}\times 1}\) is the parameter vector of the output layer and \(\tilde{\mathbf{\Lambda}}\) is the diagonal eigenvalue matrix of \(\tilde{\mathbf{A}}\). Therefore, linear filters have good scalability, but their fitting ability is heavily restricted. Linear filters cannot directly capture global or high-order information because the predefined filters are usually designed for first-order information. It is a promising direction to combine linear filters with more complicated filters to learn representations both effectively and efficiently.
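The spectral-spatial equivalence in Eq. (14) is easy to verify numerically. The following is a minimal sketch with a toy symmetric matrix of our own choosing.

```python
import numpy as np

# Numerical check of Eq. (14): for the linear filter g(lambda) = lambda,
# spectral filtering U diag(lambda) U^T h equals spatial propagation A h.
rng = np.random.default_rng(0)
N, d = 6, 3
A = rng.random((N, N))
A = (A + A.T) / 2                     # toy symmetric "adjacency"
lam, U = np.linalg.eigh(A)            # EVD: A = U diag(lam) U^T

h = rng.standard_normal((N, d))
spectral = U @ np.diag(lam) @ U.T @ h # filter in the spectral domain
spatial = A @ h                       # one spatial propagation step
assert np.allclose(spectral, spatial)
```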
### Eigenvector-based Spectral GNNs
A graph signal can be decomposed into the weighted sum of bases, where the weights indicate the energy in different frequencies. Therefore, it is important to choose a set of bases that well reflect the distribution of graph signals. Eigenvector-based spectral GNNs aim to design bases to effectively represent the graph signals in the spectral domain. In this section, we introduce two important bases: Graph Wavelet and Hermitian Laplacian. In addition, we also mention the basis encoder, which takes the eigenvectors as input and generates positional encodings for nodes.
**Wavelet Basis**
The Fourier bases, _i.e_., the eigenvectors, are the most commonly used basis functions. However, eigenvectors have two shortcomings. First, they are dense in space: if graph signals only exist in a subgraph, _e.g_., diffusion signals, GFT needs a large number of bases to approximate such signals. Second, they are non-localized and cannot reflect the spatial location of graph signals. To represent graph signals more efficiently, graph wavelets [11] have been proposed, which are sparse and localized in the vertex domain. The graph wavelets \(\mathbf{\psi}_{s}\) are defined as:
\[\mathbf{\psi}_{s}=[\mathbf{\psi}_{s1},\cdots,\mathbf{\psi}_{sN}]=\mathbf{U}\text{diag}(e^ {s\lambda_{1}},\cdots,e^{s\lambda_{N}})\mathbf{U}^{\top}, \tag{15}\]
where \(\mathbf{\psi}_{si}\) is a basis that represents a diffusion signal centered on node \(i\) with the scaling parameter \(s\). Generally, a smaller value of \(s\) indicates a shorter diffusion, _i.e_., covering fewer neighbors. By using different scaling parameters, graph wavelets can construct a multi-scale representation of graph signals. Graph wavelets not only reflect the energy distribution in the spectral domain but also indicate the location distribution in the vertex domain, which makes them a powerful spatial-spectral analysis tool. There are many spectral GNNs based on graph wavelets.
GWNN [20] proposes a non-parametric spectral GNN based on the graph wavelet transform (GWT):
\[\mathbf{h}_{j}^{(l+1)}=\sigma(\mathbf{\psi}_{s}\sum_{i=1}^{d_{in}}\mathbf{G}_{ij}^ {(l)}\mathbf{\psi}_{s}^{-1}\mathbf{h}_{i}^{(l)})\quad(j=1\cdots d_{out}), \tag{16}\]
where \(\mathbf{\psi}_{s}^{-1}=\mathbf{U}\text{diag}(e^{-s\lambda_{1}},\cdots,e^{-s\lambda_{N}})\mathbf{U}^{\top}\). Since the calculation of \(\mathbf{\psi}_{s}\) and \(\mathbf{\psi}_{s}^{-1}\) is time-consuming, Hammond _et al._ [11] use Chebyshev polynomials to efficiently approximate the wavelet bases. Zheng _et al._ [20] design orthogonal graph wavelets to provide a multi-resolution framework for graph signal analysis.
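For reference, a direct (EVD-based) construction of the wavelet basis in Eq. (15) is sketched below. The function name is ours; note that several works use the heat-kernel convention \(e^{-s\lambda}\) for \(\mathbf{\psi}_{s}\) instead, so the sign of \(s\) is a modeling choice.

```python
import numpy as np

def graph_wavelets(L, s):
    """Wavelet basis of Eq. (15) and its inverse from a graph Laplacian L.

    The exact construction costs O(N^3) due to the EVD; Chebyshev
    approximations are used in practice to avoid it.
    """
    lam, U = np.linalg.eigh(L)                       # L = U diag(lam) U^T
    psi = U @ np.diag(np.exp(s * lam)) @ U.T         # psi_s
    psi_inv = U @ np.diag(np.exp(-s * lam)) @ U.T    # psi_s^{-1}
    return psi, psi_inv
```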
**Conjugate Basis**
A basic assumption of spectral GNNs is that the underlying graph structures are symmetric. The EVD of an asymmetric graph generates complex eigenvalues and eigenvectors, but existing graph filters do not support complex operations. Therefore, it is important to find a basis that can encode the direction information of edges while having real eigenvalues for graph filtering. The Hermitian Laplacian is a generalization of the graph Laplacian that contains a real part and an imaginary part. The real part is a symmetric matrix, which represents the graph structure. The imaginary part is skew-symmetric and can be used to encode side information, _e.g_., edge direction. The EVD of the Hermitian Laplacian has real eigenvalues and complex eigenvectors, which we call the conjugate basis. Therefore, it can be directly incorporated into existing graph filters.
MagNet [14] uses the Hermitian Laplacian to design a spectral GNN for directed graphs. The normalized Hermitian Laplacian of a directed graph is defined as:
\[\mathbb{L}=\mathbf{I}-\left(\mathbb{D}^{-\frac{1}{2}}\mathbb{A}\mathbb{D}^{-\frac{1}{2}}\right)\odot\exp\left(\mathrm{i}\mathbf{\Theta}^{(q)}\right),\quad\mathrm{i}^{2}=-1, \tag{17}\]
where \(\mathbb{A}=\frac{1}{2}(\mathbf{A}+\mathbf{A}^{\top})\), \(\mathbb{D}\) is the degree matrix of \(\mathbb{A}\) and \(\mathbf{\Theta}^{(q)}=2\pi q(\mathbf{A}-\mathbf{A}^{\top})\) is the phase matrix, where the incoming and outgoing edges are set to 1 and -1, respectively. Therefore, they will have opposite values in the complex plane. The hyperparameter \(q\) represents the importance of structural information and direction information. A smaller value of \(q\) indicates that the structural information is more important. Based on the Hermitian Laplacian, MagNet defines a linear filter similar to GCN in the directed graph:
\[\mathbf{h}_{j}^{(l+1)}=\theta_{ij}\tilde{\mathbb{A}}\odot\exp\left(\mathrm{i} \mathbf{\Theta}^{(q)}\right)\mathbf{h}_{i}^{(l)}, \tag{18}\]
where \(\tilde{\mathbb{A}}\) is the renormalization of \(\mathbb{A}\). MagNet sheds light on how to combine graph filters with the side information of graphs. It is promising to extend graph filters to other types of graph data by using the Hermitian Laplacian, for example, heterogeneous graphs that use different relations as side information.
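A minimal NumPy sketch of the magnetic Laplacian in Eq. (17) follows; the default value of \(q\) and the helper name are our assumptions.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Normalized Hermitian (magnetic) Laplacian of Eq. (17).

    A: (N, N) directed adjacency matrix. The result is Hermitian, so its
    eigenvalues are real and standard graph filters can be applied.
    """
    A_sym = (A + A.T) / 2                           # symmetrized part
    d = A_sym.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    Theta = 2 * np.pi * q * (A - A.T)               # phase matrix Theta^(q)
    L = np.eye(A.shape[0]) - (D_inv_sqrt @ A_sym @ D_inv_sqrt) * np.exp(1j * Theta)
    assert np.allclose(L, L.conj().T)               # Hermitian check
    return L                                        # np.linalg.eigvalsh(L) is real
```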
**Basis Encoder**
In addition to representing the energy of graph signals, the eigenvectors also reflect the global positions of nodes, which can be used as positional encodings to improve the expressive power of spatial GNNs and break the limitation of 1-WL test [15]. There are many attempts to learn data representation from the eigenvectors. For example, spectral clustering [21] aims to use the top-\(k\) eigenvectors to represent the manifold of data. Besides, graph embedding, which aims to embed nodes into a low-dimension space, can be seen as factorizing different graph matrices [22], where the eigenvectors are used as the node embeddings. However, such methods are two-stage approaches that cannot update representations through back-propagation.
Basis encoders aim to learn position representations of nodes from the eigenvectors in an end-to-end manner. In addition, they consider the sign and basis ambiguity of eigenvectors [13, 14] and design models to learn invariant representations. Although basis encoders are not necessarily GNNs, there is a close relationship between them: spectral GNNs aim to find the important eigenspaces, _i.e_., \(\mathbf{u}\mathbf{u}^{\top}\), while basis encoders are designed to find the important eigenvectors that can well describe the node positions.
SAN [14] proposes to use Transformer to construct a permutation-equivariant basis encoder. Through the self-attention mechanism, SAN can learn the importance of different eigenvectors, which is similar in spirit to spectral GNNs. SignNet [12] finds that the eigenspaces are invariant to the sign flips and basis symmetries of eigenvectors and uses invariant and equivariant GNNs to learn positional representations from the eigenspaces, which can be seen as a generalization of spectral GNNs. PEG [20] uses the Euclidean distance between eigenvectors to reweight the adjacency matrix, thus avoiding the ambiguity of eigenvectors. PEG encodes the positional information in edges and performs well on the link prediction task.
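To make the sign-invariance idea concrete, below is a minimal PyTorch sketch in the spirit of SignNet, computing \(\rho(\phi(\mathbf{v})+\phi(-\mathbf{v}))\) per eigenvector. We use plain MLPs for \(\phi\) and \(\rho\) for brevity (the original work employs GNNs), and the class and argument names are ours.

```python
import torch
import torch.nn as nn

class SignInvariantEncoder(nn.Module):
    """SignNet-style basis encoder: rho(phi(v) + phi(-v)).

    Because phi is applied to both v and -v and the results are summed,
    the positional encoding is invariant to eigenvector sign flips.
    """
    def __init__(self, k, hidden=64, out_dim=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(k * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, eigvecs):                  # eigvecs: (N, k)
        v = eigvecs.unsqueeze(-1)                # (N, k, 1)
        z = self.phi(v) + self.phi(-v)           # sign-invariant features
        return self.rho(z.flatten(1))            # (N, out_dim) encodings
```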
**Summary.** Compared with eigenvalue-based spectral GNNs, eigenvector-based methods are under-explored, possibly due to theoretical and computational limitations. A future direction is to combine the basis information with other tasks. For example, GCN-SVD [15] shows that the low-frequency bases are robust to graph perturbation.
### Theory
In this section, we mainly introduce the theoretical progress on spectral GNNs, including expressive power, robustness, and transferability, which may give a deeper understanding of spectral GNNs.
**Expressive Power.** The expressive power of MPNNs is proven to be restricted by the 1-WL test [23], but the expressive power of spectral GNNs remains under-explored. Balcilar _et al._ [2021] prove that spectral graph convolution with nonlinear filtering can break the limit of the 1-WL test and is as powerful as the 3-WL model. Wang and Zhang [2022] make a further attempt. They show that spectral filters are universal approximators when satisfying two conditions: no multiple eigenvalues and no missing frequency components. Since existing graph filters map multiple eigenvalues to the same scalar, this conclusion gives a future direction for designing spectral GNNs.
**Robustness.** It has been proved that GCNs can produce different predictions under structural perturbation [24], implying that spatial convolutions are vulnerable to attack. GCN-LFR [1] is the first to verify the vulnerability of graph filters through matrix perturbation theory. Their theoretical analysis shows that graph filters are more robust against structural perturbations when the eigenvalues fall into a specific interval. Lei _et al._ [2022] find that even-order graph filters are more robust than full-order methods and propose EvenNet, a graph filter with only even-order terms that performs well on both homophilic and heterophilic datasets.
**Transferability.** Transferability reflects the generalization ability of graph filters across different graphs. Intuitively, if two graphs have similar spectrums, the graph filters should learn similar representations on both graphs. Levie _et al._ [2021] first prove that spectral filters are transferable across different graphs and can learn similar representations when there is a small perturbation between two graphs, which establishes the connection between transferability and robustness. Kenlay _et al._ [2021] further propose a stability bound to measure the stability and transferability of graph filters.
### Application
In this section, we review the latest applications and explain why they are suitable for spectral GNNs, including multivariate time-series forecasting, neuroscience, and computer vision.
**Multivariate Time-series Forecasting.** The multivariate time-series forecasting task aims to predict multiple correlated time-series simultaneously, which requires considering both intra-series and inter-series patterns. Learning a latent graph between different time-series can capture the inter-relations and significantly improve forecasting performance. StemGNN [1] uses the Discrete Fourier Transform (DFT) and GFT to capture the intra- and inter-series patterns in the spectral domain, respectively. TPGNN [13] further proposes a temporal polynomial filter to capture the dynamics between different time-series.
**Neuroscience.** Neuroscience studies the nervous system of the human brain. Brain connectivity can be seen as a special brain graph, where nodes are different brain regions and edges are their connections. Bessadok _et al._ [2021] introduce how to apply GNNs to network neuroscience, and Xu _et al._ [2019] utilize graph wavelets to construct a multi-scale analysis of brain networks. In general, neuroscience is a suitable application for spectral GNNs, as it does not require scalability and has a strong need for interpretability.
**Computer Vision.** Many computer vision tasks can improve performance by introducing graph structures, such as detection, classification, and recognition. Chen _et al._ [2022] provide a comprehensive summary of vision tasks involving GNNs, and we present two of them that make use of spectral GNNs.
Point clouds can be seen as special 3D graphs, where the nodes are points and the edges connect nearest neighbors. It is natural to generalize the idea of graph filters to 3D graphs. Hu _et al._ [2020] introduce the recent development of GSP for geometric data, and Liu _et al._ [2022] review attacks on point clouds from the perspective of the graph spectral domain. Point cloud tasks, such as segmentation, often need to learn the global structures of nodes. Compared to the layer-by-layer aggregation mechanism of spatial methods, spectral GNNs are better at learning global information through GFT.
Skeleton-based motion prediction is another task, aimed at predicting human motions from sensor data, where sensors can be considered as the nodes of a graph and edges are their spatial connections. Li _et al._ [2021] propose a graph scattering network that projects the sensor signals into different graph spectrum bands to predict human motions. By decomposing the sensor signals, spectral GNNs can capture the potential relationship between human motions and basis functions and use it for classification.
### Benchmark
We benchmark existing spectral GNNs on six graph datasets, three of which (Cora, Citeseer, Pubmed) are homophilic graphs, while the other three (Chameleon, Squirrel, Actor) are heterophilic graphs. We implement these spectral GNNs based on our GAMMA-Spec framework (Footnote 1), which provides the implementation and visualization of different graph filters. We follow the setting of He _et al._ [2021] and randomly select 60% of the data for training, 20% for validation, and 20% for testing. The results are shown in Table 1.
Footnote 1: [https://github.com/liuyang-tian/Spectral-GNN-Library](https://github.com/liuyang-tian/Spectral-GNN-Library)
We can see that the advanced filters outperform other filters on four datasets, confirming that advanced filters have better approximation ability. However, due to their high complexity, there is less research on this type of filter. Polynomial filters take the runner-up spot but are more widely studied than advanced filters, probably because they strike a good balance between effectiveness and efficiency. The performance of linear filters is worse than that of the other filters, and due to the lack of nonlinear filtering, there is also less research on them. Therefore, empowering linear filters with adaptive filtering capability is an important direction.
## 4 Conclusion and Future Directions
In this paper, we have introduced the development of spectral GNNs from the perspectives of model, theory, and application. Although some progress has been made, spectral GNNs still lag behind spatial GNNs. To facilitate the development of spectral GNNs, we briefly discuss some promising future directions.
**Information intersection**
As mentioned in the challenges, the spectral information of graphs is informative, including eigenvalues, eigenvectors, Fourier coefficients, etc. Therefore, it is important to exploit the connections between different kinds of spectral information and fuse them to learn better representations. Besides, spectral GNNs should not be limited to supervised or semi-supervised learning. Applying the rich spectral information to unsupervised or self-supervised learning is also a promising direction. For example, Liu _et al._ [2022b] connect the graph spectrum and graph data augmentation to improve the performance of graph contrastive learning.
**Learning from signal processing**
Signal processing on Euclidean data, _e.g._, time series and images, has a long history, while non-Euclidean signal processing is still an emerging field. Therefore, spectral GNNs can borrow ideas from traditional signal processing methods, such as Kalman filtering and Bessel filtering. In addition, the spectrum of Euclidean data can be divided into an amplitude spectrum and a phase spectrum, representing the shape and location information, respectively. However, in spectral GNNs, most graph spectrums are amplitude spectrums, and the phase spectrums are largely ignored, which motivates the discovery of phase information in spectral GNNs.
**Broader data and applications**
One of the most important advantages of spatial GNNs is the flexibility to adapt to different graph data. The message-passing mechanism can be easily extended to heterogeneous graphs and hypergraphs. However, the idea of signal processing is hard to generalize to other graph data, which hinders the development of spectral GNNs to some extent. Therefore, an important future direction is to extend spectral GNNs to broader graph data and find suitable applications, _e.g._, biomedicine and neuroscience.
| Type | Method | Cora | Citeseer | Pubmed | Chameleon | Squirrel | Actor |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Advanced | SpectralCNN | 83.17\(\pm\)0.87 | 71.06\(\pm\)1.93 | 83.76\(\pm\)0.51 | 55.73\(\pm\)2.66 | 44.79\(\pm\)2.58 | 28.87\(\pm\)3.77 |
| Advanced | LanczosNet | 88.03\(\pm\)1.70 | 80.55\(\pm\)1.19 | **90.13\(\pm\)0.49** | 60.96\(\pm\)3.32 | 45.14\(\pm\)1.44 | 36.07\(\pm\)1.38 |
| Advanced | Specformer | 88.57\(\pm\)1.01 | **81.49\(\pm\)0.94** | - | **74.72\(\pm\)1.29** | **64.64\(\pm\)0.81** | 41.93\(\pm\)1.04 |
| Polynomial | ChebNet | 86.37\(\pm\)1.71 | 78.65\(\pm\)1.55 | 88.00\(\pm\)0.74 | 66.94\(\pm\)1.59 | 53.15\(\pm\)1.97 | 37.06\(\pm\)2.41 |
| Polynomial | ChebNetII | 88.08\(\pm\)1.17 | 78.46\(\pm\)1.56 | 88.10\(\pm\)0.63 | 71.26\(\pm\)1.28 | 61.92\(\pm\)1.37 | 41.68\(\pm\)1.04 |
| Polynomial | GPRGNN | **89.61\(\pm\)1.80** | 80.85\(\pm\)1.29 | 90.09\(\pm\)1.15 | 65.27\(\pm\)2.73 | 46.46\(\pm\)2.01 | 39.16\(\pm\)1.39 |
| Polynomial | BernNet | 87.21\(\pm\)1.13 | 79.33\(\pm\)1.59 | 89.00\(\pm\)0.63 | 68.53\(\pm\)3.05 | 51.74\(\pm\)2.37 | 40.72\(\pm\)1.17 |
| Polynomial | JacobiConv | 89.05\(\pm\)1.17 | 80.44\(\pm\)0.95 | 89.51\(\pm\)0.80 | 74.07\(\pm\)1.63 | 57.42\(\pm\)1.94 | 40.82\(\pm\)1.72 |
| Polynomial | Spec-GN | 88.18\(\pm\)0.85 | 80.07\(\pm\)1.48 | 88.51\(\pm\)0.35 | 65.34\(\pm\)3.20 | 50.96\(\pm\)2.19 | 40.49\(\pm\)1.00 |
| Polynomial | DSGC | 88.24\(\pm\)1.53 | 79.75\(\pm\)1.98 | 87.72\(\pm\)0.69 | 51.73\(\pm\)1.95 | 35.68\(\pm\)1.22 | **42.63\(\pm\)1.32** |
| Polynomial | ARMA | 87.70\(\pm\)1.25 | 79.02\(\pm\)1.31 | 88.96\(\pm\)0.51 | 60.96\(\pm\)5.66 | 51.10\(\pm\)0.92 | 38.10\(\pm\)2.19 |
| Polynomial | CayleyNet | 86.55\(\pm\)1.61 | 78.40\(\pm\)1.32 | - | 68.21\(\pm\)1.85 | 51.65\(\pm\)1.98 | 42.26\(\pm\)1.42 |
| Linear | GCN | 87.23\(\pm\)0.82 | 79.82\(\pm\)0.74 | 86.43\(\pm\)0.57 | 59.73\(\pm\)1.75 | 46.55\(\pm\)1.02 | 34.01\(\pm\)1.57 |
| Linear | FAGCN | 88.03\(\pm\)1.30 | 80.04\(\pm\)0.78 | 89.22\(\pm\)0.59 | 66.52\(\pm\)2.17 | 50.40\(\pm\)1.73 | 39.69\(\pm\)1.57 |
| Linear | AdaGNN | 87.32\(\pm\)1.10 | 78.69\(\pm\)1.50 | 88.72\(\pm\)0.47 | 60.85\(\pm\)2.44 | 49.88\(\pm\)1.85 | 37.16\(\pm\)0.93 |
| Linear | AKGNN | 88.28\(\pm\)1.52 | 81.06\(\pm\)0.84 | 88.96\(\pm\)0.50 | 68.53\(\pm\)0.69 | 46.82\(\pm\)0.75 | 35.70\(\pm\)1.20 |

Table 1: Node classification on real-world datasets, where bold and underline indicate the best and runner-up, respectively. |
2307.08044 | Towards Flexible Time-to-event Modeling: Optimizing Neural Networks via
Rank Regression | Time-to-event analysis, also known as survival analysis, aims to predict the
time of occurrence of an event, given a set of features. One of the major
challenges in this area is dealing with censored data, which can make learning
algorithms more complex. Traditional methods such as Cox's proportional hazards
model and the accelerated failure time (AFT) model have been popular in this
field, but they often require assumptions such as proportional hazards and
linearity. In particular, the AFT models often require pre-specified parametric
distributional assumptions. To improve predictive performance and alleviate
strict assumptions, there have been many deep learning approaches for
hazard-based models in recent years. However, representation learning for AFT
has not been widely explored in the neural network literature, despite its
simplicity and interpretability in comparison to hazard-focused methods. In
this work, we introduce the Deep AFT Rank-regression model for Time-to-event
prediction (DART). This model uses an objective function based on Gehan's rank
statistic, which is efficient and reliable for representation learning. On top
of eliminating the requirement to establish a baseline event time distribution,
DART retains the advantages of directly predicting event time in standard AFT
models. The proposed method is a semiparametric approach to AFT modeling that
does not impose any distributional assumptions on the survival time
distribution. This also eliminates the need for additional hyperparameters or
complex model architectures, unlike existing neural network-based AFT models.
Through quantitative analysis on various benchmark datasets, we have shown that
DART has significant potential for modeling high-throughput censored
time-to-event data. | Hyunjun Lee, Junhyun Lee, Taehwa Choi, Jaewoo Kang, Sangbum Choi | 2023-07-16T13:58:28Z | http://arxiv.org/abs/2307.08044v2 | # Towards Flexible Time-to-event Modeling: Optimizing Neural Networks via Rank Regression
###### Abstract
Time-to-event analysis, also known as survival analysis, aims to predict the time of occurrence of an event, given a set of features. One of the major challenges in this area is dealing with censored data, which can make learning algorithms more complex. Traditional methods such as Cox's proportional hazards model and the accelerated failure time (AFT) model have been popular in this field, but they often require assumptions such as proportional hazards and linearity. In particular, the AFT models often require pre-specified parametric distributional assumptions. To improve predictive performance and alleviate strict assumptions, there have been many deep learning approaches for hazard-based models in recent years. However, representation learning for AFT has not been widely explored in the neural network literature, despite its simplicity and interpretability in comparison to hazard-focused methods. In this work, we introduce the Deep AFT Rank-regression model for Time-to-event prediction (_DART_). This model uses an objective function based on Gehan's rank statistic, which is efficient and reliable for representation learning. On top of eliminating the requirement to establish a baseline event time distribution, _DART_ retains the advantages of directly predicting event time in standard AFT models. The proposed method is a semiparametric approach to AFT modeling that does not impose any distributional assumptions on the survival time distribution. This also eliminates the need for additional hyperparameters or complex model architectures, unlike existing neural network-based AFT models. Through quantitative analysis on various benchmark datasets, we have shown that _DART_ has significant potential for modeling high-throughput censored time-to-event data.
## 1 Introduction
Time-to-event analysis, also known as survival or failure time analysis, is a widely used statistical method in fields such as biostatistics, medicine, and economics to estimate either risk scores or the distribution of event time, given a set of features of subjects [43, 8, 11, 31]. While assessing risk and quantifying survival probabilities have benefits, time-to-event analysis can be challenging due to the presence of censoring. In real-world studies, subjects (e.g. patients in medical research) may drop out before the event of interest (e.g. death) occurs, which can prevent full follow-up of the data [30]. The presence of censoring in survival data creates a serious challenge for applying standard statistical learning strategies. In general, the censoring process is assumed to be non-informative in that it is irrelevant to the underlying failure process given features, but their relationship should be properly accounted for, otherwise leading to biased results.
Cox's proportional hazards (CoxPH) model is the most well-known method for time-to-event data analysis. It specifies the relationship between a conditional hazard and given features in a multiplicative form by combining the baseline hazard function with an exponentiated regression component, allowing for the estimation of relative risks. However, this model requires the so-called proportional hazards assumption and time-invariant covariate effects, which can be difficult to verify in many applications [2]. Statistical testing procedures, such as Schoenfeld's test, are typically conducted to examine the PH assumptions, as they are often vulnerable to violation of the underlying assumptions [3, 23].
The accelerated failure time (AFT) model, also known as the accelerated life model, relates the logarithm of the failure time to features in a linear fashion [28]. This model has been used as an attractive alternative to the CoxPH model for analyzing censored failure time data due to its natural physical interpretation and connection with linear models. Unlike the CoxPH model, the classical parametric AFT model assumes that the underlying time-to-event distribution can be explained with a set of finite-dimensional parameters, such as a Weibull or log-normal distribution. However, such an assumption on the failure time variable can be restrictive and may not accurately reflect real-world data. This can decrease the performance of the AFT model compared to Cox-based analysis, making it less attractive for practical use [9, 23]. Recently, researchers have been exploring a range of time-to-event models that leverage statistical theories and deep learning techniques to circumvent the necessity of assumptions such as linearity, single risk, discrete time, and fixed-time effects [22, 26, 37, 24, 5, 41, 36].
In particular, _Cox-Time_ [25] and _DATE_ [7] alleviate some of the strict assumptions of the CoxPH and parametric AFT models by allowing non-proportional hazards and a non-parametric event-time distribution, respectively. _Cox-Time_ utilizes neural networks as a relative risk model to assess interactions between time and features. The authors also show that their loss function serves as a good approximation of the Cox partial log-likelihood. _DATE_ is a conditional generative adversarial network that implicitly specifies a time-to-event distribution within the AFT model framework. It does not require a pre-specified distribution in the form of a parameter; instead, the generator can learn from the data using an adversarial loss function. In addition, various deep learning-based approaches have been proposed to improve performance by addressing issues such as temporal dynamics and model calibration [27, 34, 13, 20, 17]. These approaches have highlighted the importance of utilizing well-designed objective functions that not only take into account statistical properties but also optimize neural networks.
In this paper, we introduce the Deep AFT Rank-regression for Time-to-event prediction model (_DART_), a deep learning-based semiparametric AFT model trained with an objective function originating from Gehan's rank statistic. We eliminate the need for specifying a baseline event time distribution while still preserving the advantages of AFT models that directly predict event times. Because the loss takes a simple form over comparable rank pairs, the optimization of _DART_ is efficient compared to other deep learning-based event time models. Our experiments show that _DART_ not only calibrates well, but also competes with risk-based models in its ability to predict the sequence of events. Furthermore, we believe that this work can be widely applied in the community while giving prominence to the advantages of AFT models, which are relatively unexplored compared to the numerous studies on hazard-based models.
## 2 Related Works
We first overview time-to-event modeling, focusing on the loss functions of the _Cox-Time_ and _DATE_ models to highlight the difference in concepts before introducing our method. The primary interest of time-to-event analysis is to estimate survival quantities like the survival function \(S(t)=P(T\geq t)\) or the hazard function \(h(t)=\lim_{\delta\to 0}P(t\leq T\leq t+\delta|T\geq t)/\delta\), where \(T\in\mathbb{R}^{+}\) denotes the time-to-event random variable. In most cases, due to censored observations, those quantities cannot be directly estimated with standard statistical inference procedures. In the presence of right censoring, Kaplan and Meier and Aalen provided consistent nonparametric survival function estimators, exploiting the right-censoring time random variable \(C\in\mathbb{R}^{+}\). Researchers can then get stable estimates for survival quantities with data tuples \(\{y_{i},\delta_{i},X_{i}\}_{i=1}^{N}\), where \(y_{i}=\min(T_{i},C_{i})\) is the observed event time with censoring, \(\delta_{i}=I(T_{i}\leq C_{i})\) is the censoring indicator, and \(X_{i}\in\mathbb{R}^{P}\) is a vector of features. Here, \(N\) and \(P\) denote the number of instances and the number of features, respectively. While those nonparametric methods are useful, one can improve predictive power by incorporating feature information through regression modeling. The Cox proportional-hazards (CoxPH) and accelerated-failure-time (AFT) frameworks are the most common approaches for modeling survival quantities utilizing both censoring and features.
### Hazard-Based Models
A standard CoxPH regression model [10] formulates the conditional hazard function as:
\[h(t|X_{i})=h_{0}(t)\exp(\beta^{T}X_{i}),\ (i=1,\ldots,N), \tag{1}\]
where \(h_{0}(\cdot)\) is an unknown baseline hazard function which has to be estimated nonparametrically, and \(\beta\in\mathbb{R}^{P}\) is the regression coefficient vector. It is one of the most celebrated models in statistics in that \(\beta\) can be estimated at full statistical efficiency while achieving nonparametric flexibility on \(h_{0}\) under the proportionality assumption. Note the model is semiparametric due to the unspecified underlying baseline hazard function \(h_{0}\). Letting \(\mathcal{R}_{i}\) be the set of all individuals "at risk", meaning those that are not censored and have not experienced the event before \(T_{i}\), a statistically efficient estimator for the regression coefficients can be obtained by minimizing the loss function with respect to \(\beta\):
\[L_{\text{CoxPH}}(\beta)=\sum_{i}\delta_{i}\log\left(\sum_{j\in\mathcal{R}_{i} }\exp\left[\beta^{T}X_{j}-\beta^{T}X_{i}\right]\right) \tag{2}\]
which is equivalent to the negative partial log-likelihood function of the CoxPH model.
Based on this loss function, Kvamme et al. proposed a deep-learning algorithm for a hazard-based predictive model, namely _Cox-Time_, replacing \(\beta^{T}X_{j}\) and \(\beta^{T}X_{i}\) with \(g(y_{j},X_{j};\theta)\) and \(g(y_{i},X_{i};\theta)\), respectively. Here, \(g(\cdot)\) denotes the neural networks parameterized by \(\theta\), and \(\mathcal{R}_{i}\) is replaced by \(\tilde{\mathcal{R}}_{i}\), a sampled subset of \(\mathcal{R}_{i}\). With this simple modification of the standard loss function in Eq. (2), _Cox-Time_ alleviates the proportionality restriction on the relative risk, empirically showing remarkable performance against other hazard-based models in both event ordering and survival calibration.
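For reference, the loss in Eq. (2) with neural risk scores can be implemented in a few lines. The PyTorch sketch below sorts by time and uses a cumulative log-sum-exp over risk sets; the function and argument names are ours, and the no-ties simplification is an assumption.

```python
import torch

def cox_ph_loss(scores, times, events):
    """Negative Cox partial log-likelihood of Eq. (2).

    scores: (N,) risk scores g(x); times: (N,) observed times y;
    events: (N,) event indicators delta. Assumes no tied event times.
    """
    order = torch.argsort(times, descending=True)  # latest times first
    s, d = scores[order], events[order].float()
    # prefix logsumexp over sorted scores = log sum_{j in R_i} exp(s_j)
    log_risk = torch.logcumsumexp(s, dim=0)
    return -((s - log_risk) * d).sum() / d.sum().clamp(min=1.0)
```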
### Accelerated-Failure-Time Models
The conventional AFT model relates the log-transformed survival time to a set of features in a linear form:
\[\log T_{i}=\beta^{T}X_{i}+\epsilon_{i},\ (i=1,\ldots,N), \tag{3}\]
where \(\epsilon_{i}\) is an independent and identically distributed error term with a common distribution function \(F_{0}(\cdot)\) that is often assumed to be Weibull, exponential, log-normal, etc. As implied in Eq. (3), the AFT model takes the form of a linear model and provides an intuitive, physical interpretation of event time without detouring through the vague concept of a hazard function, making it a powerful alternative to hazard-based analysis. However, imposing a parametric distributional assumption on \(\epsilon_{i}\) is a critical drawback of the model, under which the model in Eq. (3) reduces to a subclass of the hazard-based models.
To alleviate the linearity and parametric distributional assumptions, several works borrowed the concept of a generative process and approximated the error distribution via neural networks such as generative adversarial networks (GANs) [33, 7]. In particular, Chapfuwa et al. proposed a deep adversarial time-to-event (_DATE_) model, which specifies the loss function as:
\[\begin{split}L_{\text{DATE}}(\theta,\phi)&=\mathbb{E}_{(X,y)\sim F_{nc}}[D_{\phi}(X,y)]\\ &+\mathbb{E}_{X\sim F_{nc},\xi\sim F_{\xi}}[1-D_{\phi}(X,G_{\theta}(X,\xi;\delta=1))]\\ &+\lambda_{2}\mathbb{E}_{(X,y)\sim F_{c},\xi\sim F_{\xi}}[\max(0,y-G_{\theta}(X,\xi;\delta=0))]\\ &+\lambda_{3}\mathbb{E}_{(X,y)\sim F_{nc},\xi\sim F_{\xi}}[\|y-G_{\theta}(X,\xi;\delta=1)\|_{1}]\end{split} \tag{4}\]
where \(\theta\) and \(\phi\) denote the parameter set associated with a generator \(G_{\theta}\) and a discriminator \(D_{\phi}\), respectively, \((\lambda_{2},\lambda_{3})\) are hyperparameters to tune censoring trade-off, \(F_{nc}(X,y)\) and \(F_{c}(X,y)\) are empirical joint distributions for non-censored cases and censored cases,
respectively, and \(F_{\xi}\) is a simple distribution, such as the uniform distribution. The generator \(G_{\theta}\) implicitly defines the event time distribution. Although _DATE_ achieves prominent survival calibration via the sample-generating process, the objective function is quite complicated, and the GAN framework is inherently prone to mode collapse, i.e., the generator learns only a few modes of the true distribution while missing other modes [40]. Also, when optimizing neural networks with multiple loss functions, balancing them is difficult and there may be conflicts (i.e., trade-offs) between terms [12]. Therefore, their loss function might be difficult to optimize as intended, requires burdensome training time, and is consequently not suitable for large-scale time-to-event analysis.
In the statistical literature, there have been many attempts to directly estimate regression coefficients in the semiparametric AFT model, where the error distribution \(F_{0}\) is left unknown, rather than imposing a specific parametric distribution or exploiting generative models. In this work, we bridge non-linear representation learning and an objective function for estimation of the semiparametric AFT model, which originates from Gehan's rank statistic. Through extensive quantitative analysis, we show the simplicity and compatibility of rank-based estimation, along with its outstanding experimental performance.
## 3 Method
In this section, we introduce the concept of _DART_, followed by predictive analysis for survival quantities. The conceptual differences with the other neural network-based AFT models are illustrated in Figure 1. The semiparametric AFT is distinct from a parametric version in that the error distribution function \(F_{0}\) is left completely unknown like the baseline hazard function in the CoxPH. By further exploiting neural networks, we propose _DART_ model that can be formulated as a generalization of model in Eq. (3):
\[\log T_{i}=g(X_{i};\theta)+\epsilon_{i},\;(i=1,\ldots,N), \tag{5}\]
where \(g(X_{i};\theta)\) denotes an arbitrary neural network with input feature vector \(X_{i}\) and a parameter set \(\theta\), outputting a single scalar value as the predicted log-scaled time-to-event. With this simple and straightforward modeling, _DART_ entails several attractive characteristics over existing AFT-based models. First, the semiparametric nature of _DART_ enables flexible estimation of the error distribution, allowing improved survival prediction via neural network algorithms for \(F_{0}\). Second, the restrictive log-linearity assumption of the AFT model can be further alleviated by exploiting deep neural networks. Specifically, while the standard AFT model relates the time-to-event variable to features in a linear manner, deep learning is able to approximate any underlying functional relationship, lessening the linearity restriction. Although _DART_ still requires the log-transformed time as a target variable, its deep neural network compensates with powerful representational capacity supported by universal approximation theorems, enabling automated non-linear feature transformation [29, 38, 45].
### Parameter Estimation via Rank-based Loss Function
In the statistical literature, many different estimating techniques have been proposed for fitting the semiparametric AFT model [44, 42, 18, 19]. Among them, we adopt an \(l_{1}\)-type rank-based loss function that takes the censoring information into account and is efficient and suitable for stably fitting neural networks. Letting a residual term \(e_{i}\equiv e_{i}(\theta)=\log y_{i}-g(X_{i};\theta)\), the objective loss function for _DART_ can be formulated as:
\[L_{\text{Rank}}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}\delta_{i}(e_{j}-e_{i})I\{e_{i}\leq e_{j}\}, \tag{6}\]
where \(I(\cdot)\) is the indicator function that has value 1 when the condition is satisfied, otherwise 0. The estimator \(\hat{\theta}\) can be obtained by minimizing the loss function with respect to model parameter set \(\theta\). Optimization of model parameters can be conveniently conducted via batched stochastic gradient descent (SGD). Notice that the loss function in Eq. (6) involves model parameter \(\theta\) only, without concerning estimation of the functional parameter \(F_{0}\), enabling flexible time-to-event regression modeling.
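To make the pairwise structure concrete, here is a minimal PyTorch sketch of the loss in Eq. (6); the function and argument names are ours, and the \(O(N^{2})\) pairwise computation assumes mini-batch-sized inputs.

```python
import torch

def gehan_rank_loss(pred, log_time, event):
    """Gehan-type rank loss of Eq. (6) for a mini-batch.

    pred: (N,) network outputs g(x); log_time: (N,) log y;
    event: (N,) censoring indicators delta.
    """
    e = log_time - pred                           # residuals e_i
    diff = e.unsqueeze(0) - e.unsqueeze(1)        # diff[i, j] = e_j - e_i
    # delta_i * (e_j - e_i) * I{e_i <= e_j}, summed over all pairs
    pair_loss = event.float().unsqueeze(1) * torch.clamp(diff, min=0.0)
    return pair_loss.sum() / len(e)
```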
A strength of this loss function is the theoretical consistency of its optimization, without requiring any additional settings. Let the neural network be expressed as \(g(X_{i};\phi,\beta)=\beta^{T}W_{i}\), where \(W_{i}\in\mathbb{R}^{K}\) is the transformed feature vector produced by the hidden layers with parameter set \(\phi\), and \(\beta\in\mathbb{R}^{K}\) is the parameter set of the linear output layer. Then, it is easy to see that the following estimating function coincides, up to sign, with the gradient of the loss function with respect to \(\beta\):
\[U_{\text{Rank}}(\beta) =\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}\delta_{i}(W_{i}-W_{j})I( \tilde{e}_{i}\leq\tilde{e}_{j})\stackrel{{\text{set}}}{{=}}0\] \[\tilde{e}_{i} =\log y_{i}-\beta^{T}W_{i}. \tag{7}\]
Eq. (7) takes the form of Gehan's rank statistic [18], which tests whether \(\beta\) equals the true regression coefficients of the linear model \(\log T_{i}=\beta^{T}W_{i}+\epsilon_{i}\); the solution \(\hat{\beta}\) to the estimating equation is equivalent to the minimizer of Eq. (6) with respect to \(\beta\). This procedure entails nice asymptotic results, such as \(\sqrt{n}\)-consistency and asymptotic normality of \(\hat{\beta}\) under counting-process theory, assuring convergence of \(\hat{\beta}\) towards the true parameter \(\beta\) as the number of instances grows [42, 18]. Although these asymptotic results might not directly generalize to a non-linear predictor function, we expect that the hidden layers can produce effective representations \(W_{i}\) via non-linear feature transformation, as evidenced by our extensive quantitative studies. Note that, to keep this theoretical alignment, it is encouraged to set the last layer as a linear transformation with an output dimension of 1, mimicking the standard linear model on top of the non-linear representation. In addition, robust estimation against outlying instances can be attained, since the loss depends on the ranks of the residual terms along with their differences.
### Prediction of Survival Quantities
The predicted output \(g(X_{i};\hat{\theta})\) from the trained _DART_ model represents the estimated expectation of \(\log T_{i}\) conditional on \(X_{i}\), i.e., the mean log-transformed survival time given the feature information of the \(i\)th instance. However, survival quantities (e.g., the conditional hazard function) cannot be directly estimated for AFT-based models. Instead, we utilize the Nelson-Aalen estimator [1], verified to be consistent under the rank-based semiparametric AFT model [35]. Define \(N(t;\theta)=\sum_{i=1}^{N}N_{i}(t)\) and \(Y(t)=\sum_{i=1}^{N}Y_{i}(t)\), where \(N_{i}(t)=I(e_{i}\leq t,\delta_{i}=1)\) and \(Y_{i}(t)=I(e_{i}>t)\) are the counting and the at-risk processes, respectively. Then the Nelson-Aalen estimator of \(H_{0}(t)\) is defined by
\[\hat{H}_{0}(t)=\int_{0}^{t}\frac{I\{Y(u)>0\}}{Y(u)}dN(u). \tag{8}\]
The resulting conditional hazard function given \(X_{i}\) is defined by
\[\hat{h}(t|X_{i})=\hat{h}_{0}[t\exp\{-g(X_{i};\hat{\theta})\}]\exp\{-g(X_{i};\hat{ \theta})\}, \tag{9}\]
where \(\hat{h}_{0}(\cdot)=d\hat{H}_{0}(\cdot)\) is the baseline hazard estimated with the Nelson-Aalen estimator. Consequently, the conditional survival function can be estimated through the relationship \(\hat{S}(t|X_{i})=\exp\{-\int_{0}^{t}\hat{h}(s|X_{i})ds\}\), providing predictions comparable to those of other time-to-event regression models. In practice, the training set is used to fit the Nelson-Aalen estimator.
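The prediction pipeline can be sketched as follows in NumPy: fit the Nelson-Aalen estimator on the training residuals, then evaluate \(\hat{S}(t|X_{i})=\exp\{-\hat{H}_{0}(\log t-g(X_{i};\hat{\theta}))\}\), which is Eq. (9) after a change of variables to the residual scale. The helper names are ours, and ties are ignored for brevity.

```python
import numpy as np

def nelson_aalen(residuals, events):
    """Step estimate of the baseline cumulative hazard H0 in Eq. (8)."""
    order = np.argsort(residuals)
    r, d = residuals[order], events[order].astype(float)
    at_risk = np.arange(len(r), 0, -1)             # Y at each ordered residual
    return r, np.cumsum(d / at_risk)               # jump points, H0 values

def predict_survival(t_grid, g_pred, res_train, evt_train):
    """S(t | x) = exp(-H0(log t - g(x))), evaluated on a time grid."""
    r, H0 = nelson_aalen(res_train, evt_train)
    u = np.log(t_grid)[None, :] - g_pred[:, None]  # residual-scale grid
    idx = np.searchsorted(r, u, side="right") - 1  # last jump point <= u
    H = np.where(idx >= 0, H0[np.clip(idx, 0, None)], 0.0)
    return np.exp(-H)                              # (n_subjects, n_times)
```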
## 4 Evaluation Criteria
In this section, we evaluate models with two metrics for quantitative comparison: concordance index (CI) and integrated Brier score (IBS).
**Concordance Index.** The concordance of a time-to-event regression model represents the proposition: if the target variable of instance \(i\) is greater than that of instance \(j\), then the predicted output of \(i\) should be greater than that of \(j\). Letting \(y\) denote the target variable and \(\hat{y}\) the predicted outcome, the concordance probability of a survival model can be expressed as \(P(\hat{y}_{i}>\hat{y}_{j}|y_{i}>y_{j})\), and the concordance index measures this probability with the trained model over all possible pairs in the dataset [16]. With non-proportional-hazards survival regression models like _Cox-Time_ or Lee et al. [26], however, the standard concordance index of Harrell et al. [16] cannot properly measure discriminative performance. For a fair comparison of survival regression models, the time-dependent concordance index [4], or \(C^{\text{td}}\), is used for the baseline models, following Kvamme et al. [25], to account for tied events. \(C^{\text{td}}\in[0,1]\) can be regarded as an AUROC analogue for time-to-event regression models, with values close to 1 denoting better discriminative performance. Note that the standard concordance index yields results identical to \(C^{\text{td}}\) for AFT-based models.
**Integrated Brier Score.** Graf et al. [14] introduced a generalized version of the Brier score [6] for survival regression models, along with inverse probability of censoring weighting (IPCW), which can be described as:
\[\text{BS}(t)=\frac{1}{N}\sum_{i=1}^{N}\left[\frac{\hat{S}(t|X_{i})^{2}\,I(y_{i}\leq t,\delta_{i}=1)}{\hat{G}(y_{i})}+\frac{\{1-\hat{S}(t|X_{i})\}^{2}\,I(y_{i}>t)}{\hat{G}(t)}\right] \tag{10}\]
where \(\hat{G}(t)=\hat{P}(C>t)\) is a Kaplan-Meier estimator of the censoring survival function used to assign the IPCW. \(\text{BS}(t)\) measures both how well-calibrated and how discriminative the predicted conditional survival function is: for example, if a given time point \(t\) is greater than \(y_{i}\) for an uncensored instance, then \(\hat{S}(t|X_{i})\) should be close to 0. The integrated Brier score (IBS) accumulates BS over a time grid \([t_{1},t_{2}]\):
\[\text{IBS}=\frac{1}{t_{2}-t_{1}}\int_{t_{1}}^{t_{2}}BS(s)ds. \tag{11}\]
If \(\hat{S}(t|X_{i})=0.5\) for all instances, then the IBS becomes 0.25; thus, a well-fitted model yields a lower IBS. For experiments, the time grid is set to span the minimum and maximum of \(y_{i}\) in the test set, equally split into 100 time intervals.
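The IBS computation follows the definitions directly. Below is a hedged NumPy sketch with our own helper names, assuming a vectorized callable \(\hat{G}\) for the censoring survival estimate.

```python
import numpy as np

def brier_score(t, S_t, y, delta, G):
    """IPCW Brier score of Eq. (10) at time t.

    S_t: (N,) predicted S(t | x_i); y, delta: observed times and event
    indicators; G: vectorized callable censoring survival estimate,
    e.g. a Kaplan-Meier step function.
    """
    died = (y <= t) & (delta == 1)
    alive = y > t
    term1 = np.where(died, S_t ** 2 / np.maximum(G(y), 1e-12), 0.0)
    term2 = np.where(alive, (1.0 - S_t) ** 2 / np.maximum(G(t), 1e-12), 0.0)
    return (term1 + term2).mean()

def integrated_brier_score(grid, S_grid, y, delta, G):
    """Trapezoidal approximation of Eq. (11) over an equally spaced grid."""
    bs = [brier_score(t, S_grid[:, k], y, delta, G) for k, t in enumerate(grid)]
    return np.trapz(bs, grid) / (grid[-1] - grid[0])
```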
## 5 Experiments
In this section, we describe our experimental design and the results that validate the performance of _DART_ compared to other time-to-event regression models. We conduct experiments using four real-world survival datasets and the baseline models provided by Kvamme et al. and Chapfuwa et al., with the two evaluation metrics described in the previous section.
### Datasets
We use three benchmark survival datasets and a single large-scale dataset provided by Kvamme et al. The descriptive statistics are provided in Table 1. Specifically, the three benchmark survival datasets are: the Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT), the Assay of Serum Free Light Chain (FLCHAIN), and the Rotterdam Tumor Bank and German Breast Cancer Study Group (GBSG). Of particular interest is the GBSG dataset, which includes an indicator variable for hormonal therapy, allowing us to evaluate the effectiveness of a treatment recommendation system built using survival regression models. In addition, we use the large-scale WSDM KKBox dataset containing more than two million instances for customer churn prediction, which was prepared for the 11th ACM International Conference on Web Search and Data Mining. With a large-scale dataset, we can clearly verify the consistency of the predictive performance of time-to-event models.

Figure 1: Illustration of conceptual differences between deep learning-based AFT models in terms of their respective contributions and required assumptions, presented in the format of the standard AFT model. To alleviate the parametric distribution assumption of _DRAFT_, _DATE_ exploits the GAN framework and learns the implicit underlying distribution \(q_{\theta}\) through the generator parameterized by \(\theta\). For _DRAFT_, \(L_{NLL}\) and \(L_{PR}\) denote the negative log-likelihood and the partial ranking likelihood, respectively. _DATE_ requires four loss functions: \(L_{G_{\theta}}\) and \(L_{D_{\phi}}\) for the generator and the discriminator, \(L_{Cens}\) for adjusting the censoring distribution, and \(L_{Dist}\) for the distortion penalty. Compared to the others, _DART_ does not require pre-specification of or modeling for the error distribution and is trained with a simple loss function supported by statistical theory.
### Baseline Models
We select five neural network-based time-to-event regression models as our experimental baselines: _DRAFT_ and _DATE_ [7] as AFT-based models, for direct comparison with our model, and _DeepSurv_ [22], _Cox-CC_, and _Cox-Time_ [25] as hazard-based models.
For AFT-based models, _DRAFT_ utilizes neural networks to fit a log-normal parametric AFT model in a non-linear manner. However, it should be noted that if the true error variable does not follow a log-normal distribution, this model may be misspecified. In contrast, _DATE_ exploits generative adversarial networks (GANs) to learn the conditional time-to-event distribution and the censoring distribution from the observed dataset. The generator's distribution in _DATE_ can be trained from data to implicitly encode the error distribution. One major benefit of this approach is that it eliminates the need to pre-specify the parameters of the distribution. Despite these advantages, the method is challenging to apply to real-world datasets due to the complexity of its training procedure and objective function.
In the case of hazard-based models, _DeepSurv_ fits a Cox regression model whose risk score is estimated by neural networks. The model outperforms the standard CoxPH model, but does not clearly exceed other neural network-based models. Furthermore, the proportional hazards assumption still remains with _DeepSurv_. _Cox-CC_ is another neural network-based Cox regression model, using case-control sampling for efficient estimation. While both _DeepSurv_ and _Cox-CC_ are bound to proportionality of baseline hazards, _Cox-Time_ relieves this restriction by using the event-time variable to estimate the conditional hazard function.
In this study, we focus on neural network-based models and exclude other machine learning-based models from the comparison to avoid repeating the analysis conducted in previous studies. Some neural network-based models are also excluded, as we aim to alleviate fundamental assumptions such as proportionality and parametric distributions. Note that comparing hazard-based models and AFT-based models has rarely been studied due to their difference in concept and purpose: modeling the hazard function versus modeling the time-to-event variable. While both types of models can be evaluated using common metrics, their different underlying concepts and purposes make it crucial to take their unique characteristics into consideration when analyzing numerical experiments.
### Model Specification and Optimization Procedure
For a fair comparison, we apply the neural network architecture used in Kvamme et al.: an MLP with dropout and batch normalization. Every dense block is set to have the same number of nodes (i.e., the dimension of the hidden representations), no bias is used in the output layer, and the ReLU function is chosen as the non-linear activation for all layers. The preprocessing procedure also follows Kvamme et al., including standardization of numerical features and entity embeddings [15] for multi-categorical features. The dimension of the entity embeddings is set to half the number of categories. In addition, because the parameters of AFT-based models tend to be influenced by the scale and location of the target variable, \(y\) is standardized, and its mean and variance are stored separately to rescale the outputs.
_DeepSurv, Cox-CC, Cox-Time, DART._ The PyCox Python package (Footnote 1) provides the training code for these models. For the WSDM KKBox dataset, we repeat experiments 30 times with the best configurations provided by Kvamme et al. Because the train/valid/test split of the KKBox dataset is fixed, we do not perform a redundant search procedure. For the other datasets (SUPPORT, FLCHAIN, and GBSG), we perform 5-fold cross-validation, as in Kvamme et al., because these datasets are relatively small. At each fold, the best configuration is selected among 300 combinations of randomly selected hyperparameters, which are summarized in Table 3. We use AdamWR [32], starting with one epoch in the initial cycle and doubling the cycle length after each cycle. The batch size is set to 1024, and the learning rates are found with the method of Smith, as in Kvamme et al.
Footnote 1: [https://github.com/havakv/pycox](https://github.com/havakv/pycox)
_DRAFT, DATE._ The implementation of _DATE2_ by the authors includes the code of _DRAFT_ as well. We utilize their official codes
\begin{table}
\begin{tabular}{l r r r} \hline \hline
Dataset & Size & \# features & Censoring fraction \\ \hline
WSDM KKBox & 2,646,746 & 15 & 0.28 \\
SUPPORT & 8,873 & 14 & 0.32 \\
FLCHAIN & 6,524 & 8 & 0.70 \\
GBSG & 2,232 & 7 & 0.43 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Summary of survival datasets.

\begin{table}
\begin{tabular}{l l} \hline \hline
Hyperparameter & Values \\ \hline
\# Layers & \{1, 2, 4\} \\
\# Nodes per layer & \{64, 128, 256, 512\} \\
Dropout & \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7\} \\
Weight decay & \{0.4, 0.2, 0.1, 0.05, 0.02, 0.01, 0.001\} \\
Batch size & \{64, 128, 256, 512, 1024\} \\
\(\lambda\) (CoxTime and CoxCC) & \{0.1, 0.01, 0.001, 0.001\} \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Hyperparameter search space for the WSDM KKBox dataset.

\begin{table}
\begin{tabular}{l l} \hline \hline
Hyperparameter & Values \\ \hline
\# Layers & \{1, 2, 4\} \\
\# Nodes per layer & \{64, 128, 256, 512\} \\
Dropout & \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7\} \\
Weight decay & \{0.4, 0.2, 0.1, 0.05, 0.02, 0.01, 0.001\} \\
Batch size & \{64, 128, 256, 512, 1024\} \\
\(\lambda\) (CoxTime and CoxCC) & \{0.1, 0.01, 0.001, 0.001, 0.0\} \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Hyperparameter search space for GBSG, FLCHAIN, and SUPPORT datasets.
for all datasets. The batch size for the KKBox dataset is set to 8192 because the experiments are not feasible with a batch size of 1024 due to training time. The best configurations of _DRAFT_ and _DATE_ are also found by searching the same hyperparameter space as the other models. We repeat experiments 30 times with the best configuration, as mentioned above. For the other datasets, as with the other models, we perform 5-fold cross-validation and choose the best configuration among 300 random hyperparameter sets at each fold. The hyperparameter search space for the WSDM KKBox dataset is summarized in Table 2. Our implementation of _DART_ is publicly available at: [https://github.com/teboozas/dart_ccai23](https://github.com/teboozas/dart_ccai23).
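The per-fold selection procedure amounts to sampling configurations from the search space and keeping the best validation score. A minimal sketch follows; the `SPACE` dictionary mirrors Table 3, and `evaluate` is a hypothetical callback standing in for one training/validation run, not part of any released code.

```python
import random

# Hypothetical mirror of the Table 3 search space.
SPACE = {
    "num_layers": [1, 2, 4],
    "num_nodes": [64, 128, 256, 512],
    "dropout": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7],
    "weight_decay": [0.4, 0.2, 0.1, 0.05, 0.02, 0.01, 0.001],
    "batch_size": [64, 128, 256, 512, 1024],
}

def random_search(evaluate, n_trials=300, seed=0):
    """Return the best of n_trials randomly sampled configurations,
    as done once per cross-validation fold."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        score = evaluate(cfg)  # e.g., validation concordance
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```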
### Performance Evaluation
To measure the discriminative performance of the outputs, we use the standard C-index [16] for AFT-based models, while hazard-based models are evaluated with \(C^{\text{td}}\); the two are equivalent for AFT-based models, including _DART_, since these models output a single scalar value per subject for ranking. In terms of survival calibration, we implement our own function to compute the IBS from its definition, because the evaluation of the conditional survival function and the IPCW provided by Kvamme et al. is not compatible with AFT-based models. Specifically, we first fit a Kaplan-Meier estimator on the standardized training set and subsequently evaluate the conditional survival estimates and the IPCW using the estimated residuals, following the definition of the baseline hazard function in the AFT framework rather than using the time-to-event variable directly. For numerical integration, we follow the time-grid settings of Kvamme et al. and standardize the grid with the mean and standard deviation stored during the standardization of the training set. By doing so, the IBS can be compared at identical timepoints for both hazard-based and AFT-based models.
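The IBS computation we describe can be sketched as an IPCW-weighted Brier score integrated over the time grid. The following is a simplified illustration, not our exact implementation; it assumes `censor_km` is a vectorized Kaplan-Meier estimate \(G(t)\) of the censoring distribution fitted on the (standardized) training set.

```python
import numpy as np

def integrated_brier_score(surv_probs, times, durations, events, censor_km):
    """IPCW-weighted Brier score integrated over a time grid.

    surv_probs : (n_samples, n_times) predicted S(t | x_i) on `times`
    durations, events : observed times and event indicators (1 = event)
    censor_km : vectorized Kaplan-Meier estimate G(t) of the censoring
                survival function, fitted on the training set
    """
    scores = []
    for j, t in enumerate(times):
        s = surv_probs[:, j]
        died = (durations <= t) & (events == 1)   # event before t
        alive = durations > t                     # still at risk at t
        w_died = died / np.maximum(censor_km(durations), 1e-8)
        w_alive = alive / max(censor_km(t), 1e-8)
        scores.append(np.mean(w_died * s ** 2 + w_alive * (1.0 - s) ** 2))
    # numerical integration over the grid, normalized by its span
    return np.trapz(scores, times) / (times[-1] - times[0])
```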
### Summary of Results
Experimental results are provided in Tables 4 and 5. In summary, _DART_ is competitive in both discrimination and calibration, especially on large-scale survival datasets. Specifically, _DART_ yields consistent results on the WSDM KKBox dataset compared to the other baselines, maintaining competitive performance in terms of \(C^{\text{td}}\) and IBS. We point out that _DART_ is the strongest AFT-based time-to-event model and can be a prominent alternative when hazard-based models do not work well.
## 6 Analysis
We provide analysis on experimental results, pointing out strengths of _DART_ model in terms of performance metrics.
**Characteristic of _DART_ for large-scale datasets.** As shown in Tables 4 and 5, _DART_ generally yields strong survival calibration with small variance in terms of IBS. Especially on the large-scale dataset (KKBox), _DART_ shows state-of-the-art performance with the smallest variance in the evaluated metrics. This result follows from the characteristics of the rank-based estimation strategy. Specifically, by the asymptotic property of Eq. (7), the estimated model parameters become stable and close to the true parameter set as the dataset grows. Thus, once the trained model attains an effective representation (\(W_{i}\) in Eq. (7)) from the hidden layers via stochastic optimization, _DART_ provides stable outputs with strong predictive power, without sophisticated manipulation of the time-to-event distribution.
**Comparison with AFT-based models.** _DRAFT_ does not generally perform well in either \(C^{\text{td}}\) or IBS on most datasets. This is attributable to the fact that _DRAFT_ is a simple extension of the parametric AFT model under a log-normality assumption, making it sensitive to the true underlying distribution of the dataset. On the other hand, _DATE_ clearly improves on _DRAFT_, especially in survival calibration as measured by IBS. Unlike _DRAFT_, _DATE_ utilizes a GAN to learn the conditional error distribution without a parametric assumption, allowing more precise survival calibration. However, its time-to-event distribution is trained with a loss split across two tuning hyperparameters in Eq. (4). This approach depends heavily on well-tuned hyperparameters and requires heavy computation, resulting in insufficient performance. Meanwhile, as illustrated in Figure 1, _DART_ has the advantage of theoretical and practical simplicity compared to the other AFT-based models.
**Comparison with hazard-based models.** As previously reported by Kvamme et al., _Cox-Time_ shows competitive performance against other hazard-based models by directly utilizing the event-time variable to model the conditional hazard function. However, we found that _Cox-Time_ requires precise tuning of additional hyperparameters (\(\lambda\) and log-durations) that largely affect its predictive performance.
In contrast, _DART_ shows smaller variance in the evaluation metrics as the size of the data increases, ensuring stable output for large-scale datasets thanks to its asymptotic property, which is crucial for practical application. In addition, while _Cox-Time_ performed better than _DART_ with respect to the C-index in some cases, _DART_ outperformed it in IBS; obtaining equivalent mean IBS scores with smaller variance indicates that our method dominates the others on comprehensive survival metrics.
**Comparison of the time required to optimize each model.** To verify compatibility with large-scale data, we measure the training time of each model. For a fair comparison, we strictly bound the measured process from data input to parameter update, excluding all other steps. The specifications of all models are set equally: 256 nodes, 6 layers, and a batch size of 1024. From the time consumed by 1000 iterations, we calculate the training time for a single epoch, excluding the first iteration, which is generally an outlier. All experiments were run on a single NVIDIA Titan XP GPU. Table 6 shows that the simplicity of _DART_ leads to practical efficiency, while _DATE_ is computationally expensive due to its generator-discriminator architecture.
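The measurement protocol can be reproduced with a loop of the following shape; this is a sketch under the assumptions that `loader` yields at least `n_iters + 1` batches and the model sits on a single GPU.

```python
import time
import torch

def seconds_per_epoch(model, loss_fn, optimizer, loader, n_iters=1000):
    """Average the wall-clock cost of n_iters update steps, dropping
    the first (outlier) iteration, then scale by batches per epoch."""
    costs, it = [], iter(loader)
    for i in range(n_iters + 1):
        x, y = next(it)                      # assumes enough batches
        t0 = time.perf_counter()
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        if torch.cuda.is_available():
            torch.cuda.synchronize()         # flush queued GPU kernels
        if i > 0:                            # skip the warm-up iteration
            costs.append(time.perf_counter() - t0)
    return sum(costs) / len(costs) * len(loader)
```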
**Notes on the practical impact of the performance gain.** We acknowledge that the IBS can be unintuitive to interpret in practice, so we provide a simplified example. A random-guess model, which estimates all conditional survival functions at 0.5, would obtain an IBS of 0.250. In this context, an improvement from 0.174 to 0.150 (values close to the IBS of _Cox-Time_ and _DART_ on GBSG, respectively) is indeed substantial.
While the following analogy might not be entirely suitable, it conveys the practical improvement in survival estimation precision. A decrease in IBS from 0.174 to 0.150 is a significant improvement in model accuracy, even if the comparison is somewhat rough: it corresponds to an increase in _perfect_ estimations (i.e., \(\tilde{S}(t|X_{i})=0\ \forall i\ \text{s.t.}\ t\geq y_{i},\delta_{i}=1\)) from 30 to 40 occurrences (+33.3%) out of 100 estimations. Considering that \(\tilde{S}(t|X_{i})\) ranges from 0 to 1, a 0.025-point improvement is hard to achieve in real settings. Consequently, while the differences in the metrics might appear moderate, they are practically significant. Regarding the C-index, our model showed a 1.64% improvement over Cox-Time on the KKBox dataset, with scores rising from 0.853 to 0.867. Considering that a perfect score is 1.00, this implies that our model achieves noticeable performance with high consistency. In summary, _DART_ is an attractive alternative to existing time-to-event regression frameworks, ensuring strong performance and fast computation.
## 7 Conclusion
In this work, we propose a flexible time-to-event regression model, _DART_, which couples the semiparametric AFT rank-regression method with deep neural networks to alleviate strict assumptions and attain practical usefulness through high and stable predictive power. Extensive experiments have shown that our approach excels in discrimination and calibration performance even on large-scale survival datasets. Although we do not yet address more complex censoring settings such as competing risks and interval censoring, our approach can serve as a stable baseline for handling these tasks in the near future with simple modifications of the loss function.
## Acknowledgements
The research conducted by Sangbum Choi was supported by grants from the National Research Foundation of Korea (2022M3J6A1063595, 2022R1A2C1008514) and the Korea University research grant (K2305261). Jaewoo Kang and Junhyun Lee's research was supported by grants from the National Research Foundation of Korea (NRF-2023R1A2C3004176), the Korea Health Industry Development Institute (HR20C0021(3)), and the Electronics and Telecommunications Research Institute (RS-2023-00220195).
|
2307.13470 | Combinatorial Auctions and Graph Neural Networks for Local Energy
Flexibility Markets | This paper proposes a new combinatorial auction framework for local energy
flexibility markets, which addresses the issue of prosumers' inability to
bundle multiple flexibility time intervals. To solve the underlying NP-complete
winner determination problems, we present a simple yet powerful heterogeneous
tri-partite graph representation and design graph neural network-based models.
Our models achieve an average optimal value deviation of less than 5\% from an
off-the-shelf optimization tool and show linear inference time complexity
compared to the exponential complexity of the commercial solver. Contributions
and results demonstrate the potential of using machine learning to efficiently
allocate energy flexibility resources in local markets and solving optimization
problems in general. | Awadelrahman M. A. Ahmed, Frank Eliassen, Yan Zhang | 2023-07-25T13:01:25Z | http://arxiv.org/abs/2307.13470v1 | # Combinatorial Auctions and Graph Neural Networks for Local Energy Flexibility Markets
###### Abstract
This paper 1 proposes a new combinatorial auction framework for local energy flexibility markets, which addresses the issue of prosumers' inability to bundle multiple flexibility time intervals. To solve the underlying NP-complete winner determination problems, we present a simple yet powerful heterogeneous tri-partite graph representation and design graph neural network-based models. Our models achieve an average optimal value deviation of less than 5% from an off-the-shelf optimization tool and show linear inference time complexity compared to the exponential complexity of the commercial solver. Contributions and results demonstrate the potential of using machine learning to efficiently allocate energy flexibility resources in local markets and solving optimization problems in general.
graph neural networks, combinatorial auctions, energy flexibility, local energy markets
Footnote 1: Accepted in The IEEE PES ISGT Europe 2023 (ISGT Europe 2023), Grenoble, France, on October, 2023.
## I Introduction
The increasing adoption of photovoltaic (PV) energy at the distribution level, driven by the decreasing costs of solar and battery storage systems, poses significant challenges for modern power systems. The European Commission has recognized the importance of managing local energy communities to integrate distributed energy resources and enable end-users to become active energy service providers [1]. Energy flexibility refers to a system's capacity to utilize its resources in response to fluctuations in the net load [2]. Prosumers who own PV systems can offer energy flexibility services at a local level, which can be combined and utilized in the energy market with the help of aggregators. In this market, prosumers serve as sellers, the distribution system operator (DSO) is the buyer, and flexibility services serve as the commodity.
Combinatorial auctions allow owners of PV and energy storage systems (ESS) to bundle flexibility intervals rather than bidding for them individually. For example, a prosumer with two PV production time intervals, \(a\) and \(b\), can choose to provide energy flexibility immediately in the same time interval, or store energy in the ESS and bid for later use. Additionally, the prosumer can supplement any one of the PV production intervals with an off-peak interval, \(c\), for example at night, by using the ESS. Combinatorial auctions enable prosumers to submit more competitive bids by offering more choices and mitigating the risk of losing desirable flexibility-provision intervals. In this example, the prosumer can bundle all options as a list of mutually exclusive time-interval combinations \(\left[\{a\},\{b\},\{c\},\{a\wedge b\},\{a\wedge c\},\{b\wedge c\}\right]\) with a corresponding list of power values. Combinatorial auctions have been successful in other domains, such as airport time slot allocation and spectrum auctions [3], but they have not been thoroughly studied in the energy domain, especially in local energy markets. Previous research on local energy markets has focused on sequential auction mechanisms, as in [4, 5], which do not satisfy the bundling need in local flexibility markets (LFMs).
However, in LFMs, each time interval comprises multiple divisible flexibility units that can be won by different bids and prosumers. This divisibility leads to complex winner determination problems (WDPs), posing a challenge for implementing combinatorial auctions. This paper addresses these challenges by proposing a machine learning approach that efficiently solves the winner determination problem using graph neural networks (GNNs). The approach considers multi-minded bidders and proposes a reverse combinatorial auction framework for LFMs. By learning an end-to-end model that maps WDP instances to solutions, the computational complexity is decoupled from the allocation step and efficient allocation times are achieved. This work focuses mainly on the management layer of the energy market and does not address regulatory or physical-layer challenges.
We introduce the combinatorial auction framework in Section II and discuss the LFM-WDP and its complexity in Section III. Then, we propose our LFM neural combinatorial auction in Section IV and discuss our GNN model for learning the LFM-WDP solution in Section V. We present numeric results in Section VI and concluding remarks in Section VII.
## II Local Flexibility Market Combinatorial Auction
We define the flexibility interval as the time period during which the DSO requires a provision of ramp-up (e.g., energy supply or load shedding) or ramp-down (e.g., energy reduction or load increase) in active power units. We assume that the DSO communicates its flexibility requirements through a flexibility curve representing the needed amount of flexibility units. Figure 1 shows the proposed auction framework which involves the DSO as the buyer and aggregators as sellers, each managing a portfolio of prosumers with flexible resources and accessing their forecasting modules. Aggregators bid on
behalf of their prosumers, considering preferences and available resources. The flexibility market operator evaluates bids and determines winners by solving an LFM-WDP. Next, we mathematically model the flexibility request and resources and define the bids' formats and auction objective.
### _Flexibility modelling_
#### II-A1 Flexibility curve
The flexibility curve \(\mathcal{F}\) explicitly represents the amount of ramp-up or ramp-down requested by the DSO for each interval, with real power units as flexibility units. We represent \(\mathcal{F}\) as \(\{u_{1},u_{2},...,u_{T}\}\), where \(u_{j}\in\mathbb{Z}\) is the quantity of requested flexibility units at interval \(j\in I_{M}=\{0,1,2,...,T\}\) and \(sign(u_{j})\) indicates a ramp-up (+) or a ramp-down (-) flexibility request. An example of a DSO flexibility request when \(T\) is 6 can be expressed as \(\mathcal{F}=\{25,10,15,0,-30,-10\}\) megawatt, denoting requesting ramp-ups flexibility units at \(\{1,2,3\}\) and ramp-downs \(\{5,6\}\) flexibility units with no requirement at \(\{4\}\).
#### II-A2 PV system
We investigate prosumers with fixed nominal PV production and no advanced peak-point-tracking technology, whose roof-top solar systems follow a bell-curve pattern with peak power output at midday. In our study, we assume that PV systems are not connected to the grid during the bidding process, meaning that they can only offer ramp-up services by contributing their production. We define \(I_{pv}\) as the set of peak production intervals and \(\sigma_{j}^{pv}\) as the forecasted PV power output in flexibility units in each interval \(j\).
#### II-A3 PV-storage system
As a practical consideration, we study the case where a portion of PV owners are equipped with sufficient energy storage systems (PV-ESS), which can store energy either from their own PV units or from the grid and can feed energy back to the grid. Under this consideration, a PV-ESS can provide both ramp-up and ramp-down flexibility services. Hence, let \(I_{ss}\subseteq I_{M}\) be the set of intervals of storage availability and \(\sigma_{j}^{ss}\leq u_{j}\) be the storage system's charging/discharging power in flexibility units, with \(\sigma_{j}^{ss}\in\mathbb{Z}\) for each interval \(j\), where \(sign(\sigma_{j}^{ss})\) indicates a ramp-up (+) or a ramp-down (-) flexibility service.
#### II-A4 Flexibility provision
We can summarize the available flexibility provision options based on the resources of prosumers as follows. A PV system can provide a ramp-up service by connecting to the grid while an ESS can provide both ramp-up by discharging energy to the grid and ramp-down by charging energy from the grid.
### _Combinatorial Bid Format and Auction Objective_
In this study, we consider a multi-minded bidder scenario where bidders can submit multiple, mutually exclusive bids. This scenario differs from the single-minded bidder case, where bidders are only allowed to submit a single bid. Such mutual exclusivity is referred to as XOR bidding in the standard scheme of combinatorial auction logic [3].
#### II-B1 Valuation function
The flexibility request \(\mathcal{F}\) of \(\tau\leq T\) flexibility intervals forms a subset space of \(2^{\tau}\) possible bids. Hence, we define a valuation function representing the value prosumer \(p\) assigns over this subset space, \(v_{p}:2^{\tau}\rightarrow\mathbb{R}\). Now, let \(S\) be a subset desired by prosumer \(p\), consisting of \(\{\sigma_{1}^{pv},\sigma_{2}^{pv},...,\sigma_{m}^{pv}\}\) produced by the PV and \(\{\sigma_{1}^{ss},\sigma_{2}^{ss},...,\sigma_{q}^{ss}\}\) fed from the ESS. The cost is a principal component of the valuation from the prosumer's perspective. Therefore, we can model the valuation of subset \(S\) as
\[v_{p}(S)=\sum_{m^{\prime}=1}^{m}\alpha_{m^{\prime}}\,\sigma_{m^{\prime}}^{pv}+\sum_{q^{\prime}=1}^{q}\beta_{q^{\prime}}\,\sigma_{q^{\prime}}^{ss}+\gamma_{p}(S) \tag{1}\]
where \(\alpha_{m^{\prime}}\) and \(\beta_{q^{\prime}}\) are the cost coefficients of time interval \(m^{\prime}\) delivered by the PV and time interval \(q^{\prime}\) stored in the ESS, respectively. Note that \(\alpha_{m^{\prime}}\) is ideally 0, as no cost is attached to PV generation, \(\beta_{q^{\prime}}=2(1-\eta)\), where \(\eta\) is the charging/discharging efficiency, and \(\gamma_{p}(S)\) is a term representing the prosumer's profit for subset \(S\).
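For illustration, Eq. (1) reduces to a few lines of Python; the argument layout and the example numbers below are our own assumptions.

```python
def valuation(pv_units, ess_units, alpha, beta, profit):
    """Eq. (1): cost-based value of a bundle made of PV-delivered
    and ESS-delivered flexibility units."""
    pv_cost = sum(a * s for a, s in zip(alpha, pv_units))
    ess_cost = sum(b * s for b, s in zip(beta, ess_units))
    return pv_cost + ess_cost + profit

# Example: two free PV intervals plus one ESS interval with eta = 0.9
eta = 0.9
print(valuation([10, 5], [4], alpha=[0.0, 0.0],
                beta=[2 * (1 - eta)], profit=1.5))  # -> 2.3
```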
#### II-B2 Bid format
From a prosumer set \(N=\{1,2,...,n\}\), prosumer \(p\) can submit multiple bids; we denote bid \(i\) as \(b_{p_{i}}\), which offers flexibility quantities \(S_{p_{i}}\) from the flexibility curve \(\mathcal{F}\). Then, we can calculate the prosumer's valuation \(v_{p}(S_{p_{i}})\) using (1). The bid consists of the subset of flexibility items and the prosumer's value for that subset, denoted as \((S_{p_{i}},v_{p}(S_{p_{i}}))\). The flexibility market platform receives bids from all prosumers through their aggregators as \(\mathcal{B}=\{(S_{1_{1}},v_{1}(S_{1_{1}})),\ldots,(S_{n_{\kappa_{n}}},v_{n}(S_{n_{\kappa_{n}}}))\}\), where \(\kappa_{p}\) is the number of bids submitted by prosumer \(p\). The subset \(S_{p_{i}}=\{\sigma_{j}\}\) is a set of flexibility units in time intervals \(j\in I_{S_{p_{i}}}\subseteq I_{M}\), where \(\sigma_{j}\) is the quantity of flexibility units. It is important to note that a prosumer may submit multiple bids, hence \(\kappa_{p}\in\mathbb{Z}^{+}\).
#### II-B3 Auction objective
The objective of the auction mechanism is to determine the optimal allocation set, i.e., a combination of bids that collectively provide the required flexibility represented by \(\mathcal{F}\) at minimum total cost. This is achieved by solving an NP-complete optimization problem, the LFM-WDP, while adhering to essential constraints. Details of the LFM-WDP and its solution are given in the next sections.
## III Local Flexibility Market Winner Determination Problem
Our LFM combinatorial auction aims to minimize the total cost while allocating flexibility intervals to prosumers based on their bids. In general, solving WDPs in combinatorial auctions is a challenging computational problem due to the NP-completeness of multidimensional weighted set-packing problems, as proven in [6]. This challenge arises from the overlapping nature of bid items. Our contribution is to use machine learning to address this problem in the LFM-WDP.

Figure 1: LFM combinatorial auction temporal structure
### _LFM-WDP Formulation_
We mathematically formulate the LFM-WDP as a combinatorial optimization problem considering the divisibility nature of flexibility items mentioned earlier. Given a flexibility curve \(\mathcal{F}=\{u_{j}\}\), where \(j\in I_{M}\) and a bid space \(\mathcal{B}=(S_{p_{i}},\,v_{p}(S_{p_{i}}))\), \(\forall i\in 1,...,\kappa_{p}\) and \(\forall p\in N\), the objective is to find the set \(\mathcal{A}\) of winning bids that minimizes total costs. We formulate the LFM-WDP as
\[\begin{aligned}
\underset{\mathbf{x}}{\text{minimize}}\quad & J=\sum_{\forall i,\forall p}v_{p}(S_{p_{i}})\,x_{p_{i}} && \text{(2a)}\\
\text{subject to}\quad & \sum_{\forall i,\forall p}\sigma_{j}(S_{p_{i}})\,x_{p_{i}}\geq u_{j};\ \forall j\in I_{S_{p_{i}}}, && \text{(2b)}\\
& \sum_{\forall i}x_{p_{i}}\leq 1;\ \forall p\in N, && \text{(2c)}\\
& x_{p_{i}}\in\{0,1\};\ \forall i,\forall p && \text{(2d)}
\end{aligned}\]
where \(\mathbf{x}\) is the decision variables' vector. The solution should satisfy the set of constraints we discuss below.
Firstly, it is plausible that the bid space may not exactly match the required flexibility curve due to the limitations in prosumers' bids controlled by their PV and ESS nominal capacity. As a result, the constraint \(\sum\mathcal{A}=\mathcal{F}\) may not be feasible. To account for this, we assume the DSO can tolerate allocating more bids than needed to exceed flexibility requirements, rather than allocating less flexibility for lower costs. This is shown in the relaxation \(\sum\mathcal{A}\geq\mathcal{F}\) and encourages prosumer engagement, increasing the _thickness_ of the long-term market. Constraint (2b) satisfies this requirement, and the relaxation is implicitly upper-bounded by the minimization problem (2a). Secondly, prosumers are constrained to submit mutually exclusive bids, meaning that they can win at most one bid. This constraint is expressed in (2c). Thirdly, we impose an integrality constraint (2d) to account for the indivisibility of bids, as prosumers are unwilling to accept partial bids. Note that _bid_ indivisibility is distinct from _item_ divisibility in the flexibility curve. The latter allows multiple units to be allocated to different prosumers within a given time interval, whereas the former dictates that each bid is a unitary entity that can only either be won or lost.
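To make the formulation concrete, here is a minimal sketch of Eq. (2) using the open-source PuLP package with its default CBC backend; the bid tuple format is an assumed convention, and any MILP solver would serve equally well.

```python
import pulp

def solve_wdp(bids, flex_curve):
    """Sketch of the LFM-WDP in Eq. (2).

    bids: list of (prosumer_id, value, {interval: units}) tuples (assumed format)
    flex_curve: {interval: required units u_j}
    """
    prob = pulp.LpProblem("LFM_WDP", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{k}", cat="Binary") for k in range(len(bids))]  # (2d)
    # objective (2a): total cost of accepted bids
    prob += pulp.lpSum(v * x[k] for k, (_, v, _) in enumerate(bids))
    # coverage constraints (2b): allocated units meet or exceed the request
    for j, u in flex_curve.items():
        prob += pulp.lpSum(s.get(j, 0) * x[k]
                           for k, (_, _, s) in enumerate(bids)) >= u
    # XOR constraints (2c): at most one winning bid per prosumer
    for p in {p for p, _, _ in bids}:
        prob += pulp.lpSum(x[k] for k, (q, _, _) in enumerate(bids) if q == p) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [k for k in range(len(bids)) if x[k].value() == 1]
```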
### _LFM-WDP Complexity_
The complexity of an optimization problem is typically measured by its encoding length, which is the number of binary symbols required to store an instance of the problem. Tractable problems have a number of operations bounded by a polynomial function of the encoding length. For our LFM-WDP, the number of bids made by \(n\) prosumers is in \(\mathcal{O}(n\times 2^{c\cdot T})\), which is exponential in the number of time intervals and biddable items. Although our LFM-WDP in (2) is an integer linear programming problem, it has two characteristics that place it in the difficult category.
Firstly, the integrality condition in (2d) makes the feasible region non-convex and shifts the problem to mixed-integer linear NP-hard problems. Secondly, the correlation between valuations \(v_{p}\) and their corresponding multiplicities (\(\sum_{\forall j\in I_{S_{p_{i}}}}\sigma_{j}\)) in each bid \(b_{p_{i}}\) determines the problem's classification as either uncorrelated or strongly-correlated, with the latter being more challenging. Our LFM-WDP belongs to the _strongly-correlated_ problem class. Although the correlation between the values and multiplicities of items appears to be stochastic, our problem exhibits significant correlation. For example, when instances are generated with equation (1) for up to 24 intervals in the flexibility curve and 200 submitted bids, the correlation factor is approximately 0.9. This class of problems has been extensively analyzed in the Knapsack problem literature [7] and to address this difficulty, we leverage machine learning.
## IV LFM Neural combinatorial auction
Despite previous discussions on the complexity and difficulty of LFM-WDP, we intend to utilize the inherent similarity among the problem instances. We hypothesize that LFM-WDP instances follow an underlying unknown probability distribution \(\mathcal{P}\), which is determined by the nature of LFMs. This research aims to use graph neural networks (GNNs) as a machine learning framework to learn this unknown probability distribution. The primary objective is to develop a machine learning model capable of mapping LFM-WDP instances to their optimal solutions. This will be achieved by adopting a supervised learning approach, imitating an off-the-shelf solver as an expert system and producing plausible optimal solutions. Our specific aim is to learn a function \(h:\mathcal{X}\rightarrow\mathcal{Y}\), where \(\mathcal{X}\) represents the LFM-WDP representation, and \(\mathcal{Y}\) denotes the corresponding target set of solutions. To achieve this, we followed our design framework shown in Figure 2.
### _LFM-WDP Instance Generation and Expert Solver_
One challenge of our machine learning approach is the lack of gold standard data, i.e., \(\mathcal{X}\times\mathcal{Y}\) pairs. To address this, we have developed a procedure for generating LFM-WDPs. We adopt a bottom-up approach using a real PV production dataset from prosumers [8]. Specifically, we start by forming a set of possible bids that prosumers would bid for and then generate the corresponding flexibility curve. This approach provides more control over the complexity of the generated instances compared to randomly generating flexibility curves and then generating bids.
From a prosumer perspective, a biddable time interval is one that has PV production \(u_{prod}\) greater than a threshold portion \(\epsilon\) of its full capacity \(u_{max}\). Then, we define prosumer \(i\)'s biddable set as \(I_{pv}=\left\{j:u_{prod,j}\geq\epsilon\cdot u_{max,j};\ \ j\in I_{M}\right\}\) with its PV production set as \(U_{pv}=\left\{u_{prod,j};\ \ \forall j\in I_{pv}\right\}\).
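These definitions translate directly into a threshold test on the production profile; a sketch follows, where the default \(\epsilon\) is illustrative rather than the setting used in our experiments.

```python
import numpy as np

def biddable_sets(u_prod, u_max, eps=0.5):
    """Split the day's intervals into PV-biddable (production above the
    eps threshold) and ESS-available (below it), per the definitions above."""
    u_prod, u_max = np.asarray(u_prod), np.asarray(u_max)
    mask = u_prod >= eps * u_max
    I_pv = np.flatnonzero(mask)      # peak-production intervals
    I_ss = np.flatnonzero(~mask)     # storage-availability intervals
    U_pv = u_prod[mask]              # offered PV flexibility units
    return I_pv, I_ss, U_pv
```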
Considering the two types of prosumers, those who own only PVs and those equipped with ESS, the former can only bid on subsets of time intervals in \(I_{pv}\) with their corresponding values from \(U_{pv}\) to provide ramp-up services, resulting in \(\sum_{k=1}^{|I_{pv}|}\binom{|I_{pv}|}{k}\) bids, while the latter can bid on additional ramp-up and ramp-down intervals.
We define the availability of ESS time intervals as any time interval when the PV production is not sufficient, \(I_{ss}=\{j:u_{prod_{j}}<\epsilon\cdot u_{max_{j}};\ \forall j\in I_{M}\}\). The biddable intervals for this group of prosumers are then \(I_{pv,ss}=\{\{j:u_{prod_{j}}\geq\epsilon\cdot u_{max_{j}}\}\cup\{k:u_{prod_{k}}<\epsilon\cdot u_{max_{k}}\};\ \forall j,k\in I_{M}\}\), and thus the bid space serving ramp-up flexibility requests contains all subsets \(S_{pv,ss}\subseteq\{I_{pv,ss}:(|I_{pv,ss}:j\in I_{pv}|+|I_{pv,ss}:k\in I_{ss}|)\leq|I_{pv}|\}\).
With ESS, prosumers can bid on ramp-down intervals if their ESS is fully discharged, for every subset \(S\in I_{pv,ss}\), with up to \(|S|\) ramp-down intervals. We calculate the price using formula (1) and accumulate the flexibility curve proportionally to the sum of the bids, as \(\mathcal{F}=\left\{\eta\sum\limits_{i=1}^{\kappa}\sigma_{j}(S_{i});\ \forall j\in I_{M}\right\}\) where \(0<\eta\leq 1\) is the proportionality factor and we use it to control the correlation and hence the problems complexity, \(\kappa\) is the number of bids, \(\sigma_{j}(S_{i})\) is the offered flexibility by the subset \(S_{i}\) for time interval \(j\).
Commercial mixed-integer linear programming (MILP) solvers use advanced algorithms to converge to optimal solutions, but they still make heuristic decisions during runtime. For example, in branch-and-bound algorithms, variable and node selections are critical decisions, while the Gomory cut in cutting-plane approaches requires computation time. We aim to learn these underlying heuristics during the GNN-based model's training phase.
To obtain the corresponding set of labels \(\mathcal{Y}\), we represent our LFM-WDPs as the equation set (2) and use a mixed-integer linear programming (MILP) solver as the expert on the data generated in the previous step.
### _LFM-WDP Graph Representation_
To effectively utilize GNNs for solving the LFM-WDPs, it is critical to create an appropriate graph representation. A graph is a mathematical structure that consists of nodes and edges, representing measurable elements and their relationships. Nodes and edges have quantifiable features, which enable the analysis and comparison of different graphs. The proposed graph representation for LFM-WDPs includes bid nodes **x**, flexibility (Flex) nodes \(\beta\), and mutual exclusiveness (MUX) nodes \(\alpha\). The graph representation addresses the similarity issue of node features by incorporating neighbouring node features into the analysis.
Each bid node \(x_{p}^{i}\) corresponds to a bid \(i\) from prosumer \(p\), with a feature vector [\(f_{p}^{i}\)] concatenating the bid price and items, i.e., \([f_{p}^{i}]=[v_{p}(S_{i})\,||\,S_{i}]\). Flexibility nodes \(\beta_{\tau}\) represent the total capacity of each interval in the flexibility curve \(\mathcal{F}\), with a distinctive feature vector \(u_{\tau}\in\mathbb{R}\). MUX nodes \(\alpha_{i}\) represent the XOR condition of prosumer \(i\) bids, with features set to unit vectors.
Edges between nodes capture the relational structure of the problem. In this graph representation, nodes of the same type are not connected, so the graph is tri-partite. Bid nodes are connected to the corresponding flexibility nodes by undirected edges, with the bid quantity [\(\sigma_{i\tau}\)] as the edge feature. Bid nodes are also connected to the MUX node of their prosumer by undirected edges, with features set to unit vectors.
Thus the proposed graph consists of bid nodes X, flexibility nodes \(\mathbf{\beta}\), mutual exclusiveness nodes \(\mathbf{\alpha}\), and edges \(\mathcal{E}\) is denoted by \(\mathcal{G}(\text{X},\mathbf{\beta},\mathbf{\alpha},\mathcal{E})\). Its adjacency matrix, \(\textbf{A}\in\mathbb{R}^{n_{nodes}\times n_{nodes}}\), with \(n_{nodes}=\kappa+T+n\), serves as a measure of the LFM-WDP size.
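One way to realize this tri-partite representation is with the `HeteroData` container from PyG (the library used for graph generation in Section VI); the function name and input formats below are assumptions for illustration.

```python
import torch
from torch_geometric.data import HeteroData

def build_wdp_graph(bid_feats, flex_units, n_prosumers,
                    bid_flex_edges, bid_qty, bid_owner):
    """Sketch of the tri-partite graph: bid, flex, and mux node types,
    bid-flex edges carrying offered quantities, and bid-mux edges
    encoding the XOR groups. Tensor shapes are illustrative."""
    g = HeteroData()
    g["bid"].x = torch.tensor(bid_feats, dtype=torch.float)   # [v_p(S_i) || S_i]
    g["flex"].x = torch.tensor(flex_units, dtype=torch.float).view(-1, 1)
    g["mux"].x = torch.ones(n_prosumers, 1)                   # unit features
    g["bid", "offers", "flex"].edge_index = torch.tensor(bid_flex_edges).t()
    g["bid", "offers", "flex"].edge_attr = (
        torch.tensor(bid_qty, dtype=torch.float).view(-1, 1))
    owner = torch.tensor(bid_owner)
    g["bid", "xor", "mux"].edge_index = torch.stack(
        [torch.arange(len(bid_owner)), owner])
    return g
```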
## V GNN Model for the Neural Solver
We tackle the LFM-WDP as a multi-label node classification problem in which we aim to learn a function \(h\colon\mathcal{G}\to\mathcal{Y}\in\mathbb{Z}^{\kappa}\) given \(\mathcal{G}(\text{X},\mathbf{\beta},\mathbf{\alpha},\mathcal{E})\). Unlike conventional classification tasks with mutually exclusive class labels, in our case, multiple nodes can be assigned to the same class. Our approach is fully-supervised, where a model is trained to classify nodes across multiple labeled graphs. We propose a combinatorial auction graph neural network and discuss its design choices, including inter-node communication, message transformation and aggregation, network depth, learning objective and loss function. This approach differs from semi-supervised node classification problems, which focus on assigning labels to partially labeled nodes in a single massive graph.
Figure 3: LFM-WDP graph representation showing the three types of nodes with edges between them
Figure 2: LFM neural combinatorial auction design framework
### _Computation Graphs and Messages_
To demonstrate the computation of each node's features considering its neighborhood, we unfold our proposed graph representation to obtain a computation graph for each node. A simplified example of an LFM-WDP with 2 prosumers, each submitting 2 bids competing for 2 flexibility intervals, is illustrated in Figure 4. Each node aggregates information from its neighbors to compute its feature representation. Note that our proposed heterogeneous tri-partite graph representation offers a simple yet expressive advantage: bid nodes communicate with their constraint-type neighbors through a two-hop computation. The immediate neighbors of a bid node are only constraint-type nodes, either \(\beta\) or \(\alpha\), and vice versa. Therefore, a two-hop depth is sufficient to capture the overall graph structure, as bid nodes can only communicate their information through constraint nodes. This simplifies our GNN model, as each bid node's state is updated after being aware of the constraints' state.
We use a message function \(\mathcal{M}(r)\) to convert the feature representation of a neighboring node \(r\) into a message \(m_{r}\). This message is sent to node \(q\), which has its own feature representation \(h_{q}\) and set of neighboring nodes \(r_{q}\). We also generate a self-message from node \(q\)'s features and incorporate it into the aggregation process using an order-invariant function \(AGG\) and a non-linearity \(\Gamma\). This process generates the representation for step \(t+1\), which is computed as:
\[h_{q}^{t+1}=\Gamma\Big(\big\langle \text{AGG}\{\mathcal{M}(h_{r}^{t}),\forall r\in r_{q}\},\ \mathcal{M}(h_{q}^{t})\big\rangle\Big) \tag{3}\]
The intuition behind this is that nodes in the graph seek a condensed representation of their neighbors' state, which is merged with their own state to form a new state.
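A per-node sketch of the update in Eq. (3) is given below, taking the mean as the order-invariant AGG and ReLU as \(\Gamma\); the shared linear message function \(\mathcal{M}\) is an assumed design choice, not necessarily the one used in our model.

```python
import torch
import torch.nn as nn

class NodeUpdate(nn.Module):
    """Sketch of Eq. (3): shared message MLP M, mean aggregation over
    neighbor messages, concatenation with the self-message, and Gamma."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)                                    # M(.)
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())   # Gamma

    def forward(self, h_q, h_neighbors):
        # h_q: [dim], h_neighbors: [num_neighbors, dim]
        agg = self.msg(h_neighbors).mean(dim=0)   # AGG over neighbor messages
        self_msg = self.msg(h_q)
        return self.update(torch.cat([agg, self_msg], dim=-1))
```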
### _GNN Design_
Nodes need to decide how many hops of neighbor messages to aggregate when updating their states. In a cyclic graph, too large a depth can lead to over-smoothing and saturation at the majority class. We avoid this issue by leveraging the unique connectivity of bids in our graph. Bids competing for the same flexibility interval are connected through representative flexibility nodes \(\beta\), while bids from the same prosumer are linked through mutual exclusiveness nodes \(\alpha\). This design ensures that two hops capture node dependencies while preventing over-smoothing.
We wrap our GNN model within a classifier function to classify nodes in the LFM-WDP problem. To address the target imbalance, we introduce a weighting factor \(\lambda\) in the binary cross-entropy loss \(\ell\), equal to the proportion of the minority class in the data. The loss function is defined as:
\[\ell=\frac{-1}{\kappa}\sum_{i=1}^{\kappa}\lambda_{i}\Big[y_{i}\log(p(y_{i}))+(1-y_{i})\log(1-p(y_{i}))\Big] \tag{4}\]
where \(\lambda_{i}\) is the weight of the class of node \(i\) and \(y_{i}\) is the true label of node \(i\). This loss term measures the discrepancy between the bid allocations generated by the GNN-based model and those generated by the expert solver.
Additionally, we include in the loss function the mean squared error between the expert solver's optimal value \(J\) and the model's optimal value \(J^{*}\), computed from the predicted allocations, to steer the learning process towards the expert solver's optimal solutions. The weighting factor \(\zeta\) balances the contribution of this term to the total loss. The total loss function is then
\[\mathcal{L}=\frac{1}{Q}\sum_{i=1}^{Q}\Big[\ell_{i}+\zeta\,(J_{i}-J_{i}^{*})^{2}\Big] \tag{5}\]
where \(\ell_{i}\) is the loss of the \(i\)-th problem instance and \(J_{i}\) and \(J_{i}^{*}\) are the expert solver optimal value and model optimal value, respectively, of the \(i\)-th problem instance. The total loss is minimized during training on \(Q\) problem instances.
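In PyTorch terms, the per-instance loss of Eqs. (4)-(5) can be sketched as follows. The weighting convention is an assumption on our part: each node is weighted by the prevalence of the opposite class so that the minority class contributes more, and \(\zeta=0.1\) is an illustrative value.

```python
import torch
import torch.nn.functional as F

def instance_loss(logits, targets, j_model, j_expert, zeta=0.1):
    """Weighted BCE over bid nodes (Eq. 4) plus the squared gap to the
    expert solver's objective value (Eq. 5) for one problem instance."""
    pos_frac = targets.float().mean()        # fraction of winning bids
    # assumed convention: weight each node by the prevalence of the
    # opposite class, so the minority class contributes more
    weights = torch.where(targets == 1, 1.0 - pos_frac, pos_frac)
    bce = F.binary_cross_entropy_with_logits(
        logits, targets.float(), weight=weights)
    return bce + zeta * (j_expert - j_model) ** 2
```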
Furthermore, to ensure compliance with the XOR constraint, if conflicts arise we keep only the bid with the highest classifier probability within the conflicting set.
## VI Numeric Evaluation
We quantitatively evaluate and analyze our approach using the solar home electricity dataset [8], which provides 30-minute measurements of 300 homes' rooftop solar systems that we resampled to 60-minute resolution. We create LFM-WDP instances and their graph representations using the bottom-up generation procedure described in Section IV-A, and use the Gurobi solver [9] as an expert to generate target solutions. We vary the number of homes ([100, 200, 300]) and the number of bids per prosumer ([1, 2, 3]) to analyze model sensitivity, resulting in the 18 cases in Figure 4(a). We produce a graph representation for each problem instance based on specific parameters: the number of prosumers, the number of bids per prosumer, the prices and units of each bid, and the DSO flexibility curve. We use the PyG library [10] for graph generation, and the model consists of a 2-layer feed-forward network with ReLU activation for state updates and a 2-layer classifier appended to the GNN output. Each case is split into training and test sets and trained with a learning rate of 0.0001.
To ensure a thorough evaluation, we have developed four evaluation metrics. Firstly, we evaluate our approach's ability to produce optimal allocations that match those produced by
Figure 4: Computation graph of two hops. To calculate the features of layer 2 output, we involve the corresponding \(\alpha\) from layer 1 output and the complementing x from the input.
|
2304.13108 | Detecting HI Galaxies with Deep Neural Networks in the Presence of Radio
Frequency Interference | In neutral hydrogen (HI) galaxy survey, a significant challenge is to
identify and extract the HI galaxy signal from observational data contaminated
by radio frequency interference (RFI). For a drift-scan survey, or more
generally a survey of a spatially continuous region, in the time-ordered
spectral data, the HI galaxies and RFI all appear as regions which extend an
area in the time-frequency waterfall plot, so the extraction of the HI galaxies
and RFI from such data can be regarded as an image segmentation problem, and
machine learning methods can be applied to solve such problems. In this study,
we develop a method to effectively detect and extract signals of HI galaxies
based on a Mask R-CNN network combined with the PointRend method. By simulating
FAST-observed galaxy signals and potential RFI impacts, we created a realistic
data set for the training and testing of our neural network. We compared five
different architectures and selected the best-performing one. This architecture
successfully performs instance segmentation of HI galaxy signals in the
RFI-contaminated time-ordered data (TOD), achieving a precision of 98.64% and a
recall of 93.59%. | Ruxi Liang, Furen Deng, Zepei Yang, Chunming Li, Feiyu Zhao, Botao Yang, Shuanghao Shu, Wenxiu Yang, Shifan Zuo, Yichao Li, Yougang Wang, Xuelei Chen | 2023-04-25T19:24:57Z | http://arxiv.org/abs/2304.13108v1 | # Detecting HI Galaxies with Deep Neural Networks in the Presence of Radio Frequency Interference
###### Abstract
In neutral hydrogen (HI) galaxy survey, a significant challenge is to identify and extract the HI galaxy signal from observational data contaminated by radio frequency interference (RFI). For a drift-scan survey, or more generally a survey of a spatially continuous region, in the time-ordered spectral data, the HI galaxies and RFI all appear as regions which extend an area in the time-frequency waterfall plot, so the extraction of the HI galaxies and RFI from such data can be regarded as an image segmentation problem, and machine learning methods can be applied to solve such problems. In this study, we develop a method to effectively detect and extract signals of HI galaxies based on a Mask R-CNN network combined with the PointRend method. By simulating FAST-observed galaxy signals and potential RFI impacts, we created a realistic data set for the training and testing of our neural network. We compared five different architectures and selected the best-performing one. This architecture successfully performs instance segmentation of HI galaxy signals in the RFI-contaminated time-ordered data (TOD), achieving a precision of 98.64% and a recall of 93.59%.
methods: data analysis, methods: observational, techniques: image processing
Footnote †: E-mail:[email protected]
## 1 Introduction
In recent years, a number of advanced radio telescopes and arrays have been constructed, including the Five-hundred-meter Aperture Spherical radio Telescope (FAST; Nan et al. 2011), the Australian Square Kilometre Array Pathfinder (ASKAP; Johnston et al. 2008), and MeerKAT (Booth & Jonas 2012), among
others. In the coming decade, the next generation of radio telescope arrays, such as the Square Kilometre Array (SKA; Dewdney et al. 2009), are anticipated to be completed. The study of neutral hydrogen is one of the primary scientific goals of these telescopes, and HI galaxy surveys are key observations of them (Tolley et al. 2022). From the HI galaxy survey data, we can examine the HI content and mass function of the galaxies, gas accretion, the correlation between HI and star formation, and the influence of the environment on HI (Giovanelli & Haynes 2015). These sensitive and precise instruments demand more sophisticated observational techniques and signal-processing methods.
The HI Parkes All-Sky Survey (HIPASS; Meyer et al. 2004) and the Arecibo Legacy Fast ALFA Survey (ALFALFA; Giovanelli et al. 2005; Jones et al. 2018) are the most extensive HI surveys completed so far. The 'multifind' peak-finding algorithm (Kilborn 2001) was employed to identify and filter signal peaks in the HIPASS data processing. This method searches for local maxima in data cubes and identifies potential signals by setting a threshold. The ALFALFA survey used a matched filtering algorithm (Saintonge 2007), which is sensitive to wide and weak signals. Although these algorithms have served their surveys well, they still exhibit some shortcomings. The multifind result is sensitive to the threshold, and the method has difficulty with overlapping signals or signals with unusual shapes and features. The matched filtering algorithm relies on assumptions about signal shapes, necessitating adjustments to algorithm parameters based on extensive experimentation and experience, and it is prone to false alarms and missed detections when encountering multiple local maxima. The identification of radio frequency interference (RFI) is also far from perfect for these algorithms. More advanced and robust signal extraction methods are needed for future HI surveys.
RFI is a persistent challenge that radio astronomical observations face. RFI sources can be artificial or natural, with the former including digital television, mobile and satellite communications, and so on (Fridman, P. A. & Baan, W. A. 2001). Efficient RFI mitigation algorithms that can identify RFI are essential for radio astronomical observations. Many automatic RFI flagging algorithms have been developed, typically by looking for unusually large deviations in the samples. For example, the widely used Sum-Threshold method (Offringa et al. 2010) searches for RFI of different possible time and frequency spreads by scanning the data with a sliding window and comparing the sum of the power of consecutive samples with a blocksize-dependent threshold.
RFI mitigation and celestial signal extraction are two sides of the same process. In past and present HI observations, the usual practice is to first remove various forms of interference, including standing waves and RFI, through a pipeline, and then extract the desired HI signal from the processed data. However, the identification of RFI is never complete, and the extraction process still faces residual interference. Moreover, RFI is often superimposed on HI signals, contaminating them and rendering the data unusable. Therefore, a major challenge in the data processing of HI galaxy surveys is to identify the extragalactic HI signals amidst a vast amount of data.
In recent years, there has been much advancement in machine learning (ML), and it has been applied to various research directions in astronomy (Ball & Brunner 2010). In particular, these techniques have been applied to radio astronomical data processing tasks, such as RFI identification and mitigation, celestial source detection and classification, and analysis of observational data (Baron 2019). Numerous deep learning-based models have been applied to identify and mitigate RFI, especially Convolutional Neural Networks (CNNs) (Sun et al. 2022; Pinchuk & Margot 2022), U-Net (Akeret et al. 2017a; Yang et al. 2020), and so on. Other ML-based image processing models have also been applied in astronomy, such as the source finder developed by Riggi et al. (2023) based on the Mask R-CNN framework for detecting and classifying sources in radio continuum images.
Mask R-CNN is a CNN-based object detection and instance segmentation framework, which has achieved remarkable results in the field of computer vision (He et al. 2017). PointRend is a technique for improving image segmentation results by adding a rendering approach on top of the existing network, presenting fine-grained object boundaries through adaptive point sampling and label estimation (Kirillov et al. 2020). This technique enhances segmentation quality, producing more refined edges.
Inspired by these works, we apply the Mask R-CNN model and PointRend method to HI signal extraction in radio telescope data processing, hoping to more accurately detect and segment target objects in astronomical images. We develop an HI galaxy-searching method based on the Mask R-CNN model
and the PointRend method. The model can directly search for and identify HI galaxies in time order data contaminated by RFI, and can extract signals by segmenting the data. Using FAST as an example, we simulated the observed HI galaxies and potential RFI impacts, and then trained, refined, and selected different architectures of PointRend Mask R-CNN models, ultimately achieving a good performance in identifying galaxy signals.
The structure of this paper is as follows. Sec. 2 of the paper introduces the machine learning methods we used, including the principles of Mask R-CNN and PointRend, the network structure we employed, and the model evaluation method. In Sec. 3, the data preparation process is expounded. Sec. 4 presents the training and testing of the networks, while Sec. 5 presents the final results of our experiment. Sec. 6 provides further analysis and discussion of the results. Finally, Sec. 7 summarizes the entire paper.
## 2 Method
### Machine Learning Method: Mask R-CNN and PointRend
In this study, we develop the Mask R-CNN network by integrating it with the PointRend method to accomplish the instance segmentation task of identifying HI galaxies in astronomical observation data.
Mask R-CNN is an improved version of Faster R-CNN, which is a classic two-stage object detection network. Faster R-CNN represents detected objects by generating bounding boxes and the corresponding class information (Ren et al. 2015). Mask R-CNN adds a mask branch to the Faster R-CNN network, which generates a binary mask for each detected object. The additional mask branch significantly improves the network's performance on instance segmentation tasks.
The PointRend method is an innovative strategy that can be integrated with various neural networks. By adding a PointRend head to the network, it improves the accuracy and resolution of image segmentation (Kirillov et al. 2020). After obtaining a preliminary coarse mask through other networks, PointRend generates some sampling points concentrated in the areas where the segmentation results are uncertain, then adopts a sub-network called PointRend head to predict the classification of these points based on the input feature map. The predicted information is then combined with the coarse mask to generate a more precise mask.
Figure 1 shows the network structure of the model we used. For single-channel two-dimensional data, the model first obtains a feature map through the backbone network, which serves as the input for
Figure 1: Schematic diagram of the PointRend Mask R-CNN model architecture.
the Region Proposal Network (RPN; Ren et al. 2015) and the final PointRend Head. The RPN generates a series of proposals from the input feature map, each with a specific region where the target object might be located. Each proposal is combined with the feature map to generate a series of RoIs (Regions of Interest), which are then passed through the RoI Align Layer (He et al. 2017) to obtain fixed-size feature maps of uniform size. There are two branches next. The branch at the bottom of the structure diagram is the RoI head branch, which has the same structure as the corresponding part of Faster R-CNN. It first transforms the fixed-size feature map into a series of smaller maps by going through convolutional layers, and then obtains the class and bounding box information through several fully connected layers. This branch essentially completes the object detection of the input data. The upper branch is the Mask branch. After passing the fixed-size feature map through a series of convolutional layers, we obtain a coarse mask.
Next is the PointRend part of the network. PointRend uses a sampling strategy based on uncertainty to generate some sampling points for refinement according to the coarse mask information. The model employs an additional sub-network called the PointRend Head, which receives the selected refinement points and the high-resolution feature map generated at the beginning of the entire network as input, and predicts the classification of each sampling point through a series of MLPs (Multi-Layer Perception). Finally, the predicted class information is combined with the coarse mask to obtain a more accurate final precision mask. In each iteration, the sub-network calculates the uncertainty of the class prediction for each point, and selects a certain portion of points for updating based on the uncertainty values. This makes the selected points mainly located in the detail areas of the segmentation result (i.e., the edge areas and texture-complex areas), which are the areas that need improvement the most in the segmentation results.
By undergoing these processes for each RoI, we can complete the instance segmentation of all targets in the input data. All parts of the model participate in training. By minimizing the loss function, the model updates the parameters of each network through backpropagation. We define the multi-task loss on each sampled RoI as
\[L_{\mathrm{total}}=L_{\mathrm{cls}}+L_{\mathrm{box}}+L_{\mathrm{mask}}+L_{ \mathrm{PointRend}}. \tag{1}\]
\(L_{\mathrm{cls}}\) is the classification loss defined as
\[L_{\mathrm{cls}}=-\log(p_{u}), \tag{2}\]
where \(p_{u}\) is the probability of an RoI belonging to the true class label \(u\) (\(u\in\{1,2,\cdots,C\}\)), calculated by the softmax function:
\[p_{u}=\frac{\exp(z_{u})}{\sum_{k=1}^{C}\exp(z_{k})},\]
where \(z_{i}\) represents the score of the RoI belonging to the \(i\)-th class for \(i\in\{1,2,\cdots,C\}\). \(L_{\mathrm{box}}\) is the bounding-box regression loss, which describes the bounding-box branch's ability to localize objects during bounding-box regression, defined as:
\[L_{\mathrm{box}}=\sum_{i\in\{x,y,w,h\}}\mathrm{Smooth}_{L_{1}}(t_{i}^{u}-v_{i }), \tag{3}\]
where \(v=(v_{x},v_{y},v_{w},v_{h})\) is the true bounding-box regression targets for class \(u\) and \(t^{u}=(t_{x}^{u},t_{y}^{u},t_{w}^{u},t_{h}^{u})\) is the predicted bounding box coordinates. \(\mathrm{Smooth}_{L_{1}}\) represents the Smooth \(L_{1}\) loss function:
\[\mathrm{Smooth}_{L_{1}}(x)=\begin{cases}0.5x^{2}&\text{if}\;\;|x|<1\\ |x|-0.5&\text{otherwise}\end{cases}. \tag{4}\]
\(L_{\rm mask}\) is described as the average binary cross-entropy per pixel and describes the mask head's ability to classify each pixel, defined as
\[L_{\rm mask}=-\frac{1}{m^{2}}\sum_{i=1}^{m^{2}}[p_{i}\log(\hat{p_{i}})+(1-p_{i}) \log(1-\hat{p_{i}})], \tag{5}\]
where \(m^{2}\) is the total number of pixels in the mask (\(16\times 16\) in our network), \(p_{i}\) is the true value of the \(i\)-th pixel, and \(\hat{p_{i}}\) is the predicted value of the \(i\)-th pixel.
The PointRend loss, \(L_{\rm PointRend}\), calculates binary cross-entropy only on the sampled points that need to be refined:
\[L_{\rm PointRend}=-\frac{1}{N_{\rm point}}\sum_{i=1}^{N_{\rm point}}[p_{i} \log(\hat{p_{i}})+(1-p_{i})\log(1-\hat{p_{i}})], \tag{6}\]
where \(N_{\rm point}\) represents the number of sampled points that need refinement (set to 10 in our task), \(p_{i}\) is the true value of the \(i\)-th point, and \(\hat{p_{i}}\) is the predicted value of the \(i\)-th point. In addition, the RPN network used in the model has its own loss function and is trained independently during the training process.
### Model Evaluation
In our PointRend Mask R-CNN network, we selected five distinct backbones for obtaining feature maps and conducted a comparative analysis to evaluate the impact of different backbones on network performance. Our choices are all based on Residual Networks (ResNet; He et al. 2016), a deep convolutional neural network architecture that introduces residual connections to solve the gradient vanishing and explosion problems in training deep neural networks.
In our selection, ResNet-50-FPN and ResNet-101-FPN are Feature Pyramid Networks (FPN; Lin et al. 2017) based on 50-layer and 101-layer residual networks, respectively. These FPNs add a top-down pathway and lateral connections to the original ResNet, enabling the network to better capture features at different scales and improving object detection and instance segmentation performance by leveraging multi-scale features. ResNet-50-C5-Dilated and ResNet-101-C5-Dilated are dilated convolutional networks based on 50-layer and 101-layer residual networks, respectively, using dilated convolution in the last convolutional layer (C5) (Yu & Koltun 2016). This approach increases the receptive field size, thereby improving detection and segmentation performance for large-scale objects. Our primary focus is on the ResNeXt-101-FPN backbone. ResNeXt-101 is an improved 101-layer ResNet that employs grouped convolution, dividing the input channels into multiple groups and performing convolution operations within each group. This enhances the network's expressiveness and parameter efficiency, allowing improved performance with relatively low computational complexity (Xie et al. 2017). Combined with FPN, ResNeXt-101-FPN should in theory perform slightly better than ResNet-101-FPN.
Generally, deeper network structures can usually learn more diverse feature representations, thereby improving the accuracy of instance segmentation. FPN can effectively capture multi-scale feature information by integrating features from different levels, resulting in better performance when dealing with objects of varying sizes. Dilated convolution, by expanding the receptive field of the convolution kernel, can better capture information from large-scale objects. For our project, an FPN with a deeper structure is theoretically more suitable.
We employed the Precision, Recall rate, and F1 Score, which are commonly used performance metrics for evaluating image segmentation models (Forsyth & Ponce 2002), to evaluate our method. The Precision in this case is the proportion of true galaxies among the samples classified as galaxies by the model, reflecting the accuracy of the model in recognizing galaxies.
\[{\rm Precision}=\frac{{\rm TP}}{{\rm TP}+{\rm FP}}, \tag{7}\]
where TP represents a correct segmentation (detection) that the instance is classified as a member of the class while FP represents an incorrect segmentation of such classification. Precision belongs to \([0,1]\), and a higher value indicates that the model is less likely to misidentify.
The Recall rate in the present case is the fraction of identified HI galaxies among all HI galaxies, representing the model's capability of detection. Recall belongs to \([0,1]\), and a higher value indicates a stronger recognition ability. It is defined as
\[\mathrm{Recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, \tag{8}\]
where FN represents an incorrect segmentation that the instance is not classified as a member of the class.
The F1 Score is the harmonic mean of the Precision and the Recall, also belonging to \([0,1]\), providing a comprehensive evaluation of both Precision and Recall performance. It can serve as the standard for assessing the model, and a higher value indicates the model has a better performance. It is defined as
\[\mathrm{F1}=\frac{2\times\mathrm{Precision}\times\mathrm{Recall}}{\mathrm{ Precision}+\mathrm{Recall}}. \tag{9}\]
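These metrics are straightforward to compute once detections have been matched to the ground truth; below is a minimal sketch, assuming the TP/FP/FN counts are already available (the function name is our own).

```python
def precision_recall_f1(tp, fp, fn):
    """Compute Eqs. (7)-(9) from true-positive, false-positive, and
    false-negative counts of matched galaxy detections."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return precision, recall, f1

# e.g. the ResNeXt-101-FPN row of Table 1 corresponds to
# precision ~ 0.9864 and recall ~ 0.9359, giving F1 ~ 0.9605.
```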
## 3 Mock Data
The PointRend Mask R-CNN is a supervised neural network model that requires data for training and testing. For our task, the dataset can be constructed in several ways. Taking the FAST telescope as reference, we simulated the HI galaxy signals it could observe and the RFI it might encounter.
### HI Galaxies Data Simulation
We generate the mock HI galaxies from the IllustrisTNG magnetohydrodynamical simulation (Weinberger et al., 2017; Pillepich et al., 2018). It includes physical processes such as gas cooling, star formation, stellar evolution, metal enrichment, black hole growth, stellar winds, supernovae, and active galactic nuclei (AGNs), and is consistent with current observations. In this work, we use the TNG100-1 data set. The box size for TNG100 is \(75\,\mathrm{Mpc}/h\) and the mass resolution is \(9.4\times 10^{5}\ M_{\odot}/h\) for baryon particles and \(5.1\times 10^{6}\ M_{\odot}/h\) for dark matter particles. The box size ensures that there are enough galaxies for the training and test sets, and the mass resolution ensures that the structure of galaxies can be resolved. IllustrisTNG adopted the Planck 2015 cosmological parameters (Ade et al., 2016), i.e., \(\Omega_{\Lambda}=0.6911\), \(\Omega_{m}=0.3089\), \(\Omega_{b}=0.0486\), \(\sigma_{8}=0.8159\), \(n_{s}=0.9667\), and \(h=0.6774\), and we adopt this model throughout the paper.
The total mass of atomic and molecular hydrogen for each gas particle can be obtained directly from the TNG100 catalog. However, these two parts are not separated in the simulation. Diemer et al. (2018) separated the molecular and atomic hydrogen contents for galaxies in TNG100. However, we also need to take into account the velocity of each gas particle to obtain the spectral profile of each galaxy, which cannot be derived from the existing catalog. We calculate the HI mass of each gas particle based on the method of Gnedin & Kravtsov (2011); we refer readers to Deng et al. (2022) for details of the calculation. Following Diemer et al. (2018), we consider only galaxies with stellar mass or gas mass greater than \(2\times 10^{8}\ M_{\odot}\), which are well represented by particles in this simulation.
We assume that the properties of HI galaxies do not evolve significantly over the small redshift range considered in this work, and only use the simulation snapshot at \(z\approx 0\). The box is split into two boxes with size \(75\times 75\times 50\ (\mathrm{Mpc}/h)^{3}\) and \(75\times 75\times 25\ (\mathrm{Mpc}/h)^{3}\). Then we stack the two boxes to form a light cone volume as shown in Figure 2, where \(O\) is the observer. We have a rectangular field of view and the orange lines join the observer with the four corners of the field. We choose this configuration to ensure sufficient redshift coverage and field of view while avoiding the repeating of galaxy samples. The redshift range of the light cone extends to \(z\approx 0.05\), and its angular area is approximately \(28\times 19\ \mathrm{deg}^{2}\). We then deposit the gas particles into angular and frequency grids, where the angular grid has a size
of \(\Delta\theta=0.5\ {\rm arcmin}\), well below the beam resolution of FAST, and the frequency grid has a size of \(\Delta\nu=0.02\ {\rm MHz}\), to suit the purpose of the galaxy detection. The frequency of each gas particle is determined as \(\nu=\nu_{21}/(1+z)/(1+\beta)\), where \(\nu_{21}\approx 1420.4\ {\rm MHz}\) is the rest-frame frequency of 21 cm radiation, \(z\) is the cosmological redshift, and \(\beta\) is the line-of-sight component of peculiar velocity in units of the speed of light.
We calculate the brightness temperature for cell \(i\) at frequency \(\nu\) by
\[T_{b}^{i}(\nu)=\frac{3c^{2}}{32\pi\nu_{21}^{3}}A_{10}\frac{h\nu^{2}}{k_{B}m_{p }}\frac{\Delta M_{\rm HI}^{i}}{D_{A}(z)^{2}\Delta\theta^{2}\Delta\nu}, \tag{10}\]
where \(c\) is the speed of light, \(A_{10}\approx 2.85\times 10^{-15}\ {\rm Hz}\) is the spontaneous emission coefficient of the 21 cm transition, \(m_{p}\) is the mass of the proton, \(k_{B}\) is the Boltzmann constant, \(D_{A}(z)\) is the angular diameter distance, and \(\Delta M_{\rm HI}^{i}\) is the HI mass in cell \(i\). In our calculation we ignored the velocity dispersion within each gas particle, which would smooth the spectrum but cannot be obtained from the simulation. The spectrum of one simulated galaxy is shown in Figure 3. Its peak flux is about \(8\ {\rm mJy}\) and its line width is about 1 MHz, with a characteristic 'double horn' profile. This is consistent with our knowledge of the HI profile in low-redshift galaxies (Saintonge, 2007).
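The frequency assignment and Eq. (10) translate directly into code; below is a minimal numpy sketch in SI units, where the function names and the assumed input units are ours, not the actual pipeline.

```python
import numpy as np
from scipy.constants import c, h, k as k_B, m_p

NU_21 = 1420.4e6   # rest-frame 21 cm frequency [Hz]
A_10 = 2.85e-15    # spontaneous emission coefficient [Hz]

def observed_frequency(z, beta):
    """nu = nu_21 / (1 + z) / (1 + beta), with beta = v_los / c."""
    return NU_21 / (1.0 + z) / (1.0 + beta)

def brightness_temperature(dM_HI, nu, D_A, dtheta, dnu):
    """Eq. (10) transcribed in SI units (an assumption): dM_HI [kg],
    nu [Hz], D_A [m], dtheta [rad], dnu [Hz]; returns T_b [K]."""
    return (3.0 * c**2 / (32.0 * np.pi * NU_21**3) * A_10
            * h * nu**2 / (k_B * m_p)
            * dM_HI / (D_A**2 * dtheta**2 * dnu))
```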
We model the beam of FAST as a Gaussian function with \(\sigma=0.518\lambda/(300\ {\rm m})\), though the real beam may have a more complicated dependence on the frequency. The 19 beams are rotated \(23.4^{\circ}\) with respect to the configuration given in Jiang et al. (2020), to achieve a more uniform coverage in drift scan. We place the angular center of the grids at the zenith, and assume the sky is surveyed with the rotation of the Earth. The scan produces strips along the right ascension (RA) direction. According to Jiang et al. (2020), the 19 beams cover over 25 arcmin in the direction of declination (DEC), so by repeatedly scanning different declinations with a separation of \(10\ {\rm arcmin}\), the whole available angular area (approximately \(28\times 19\ {\rm deg}^{2}\)) is surveyed. We set the time resolution as \(1.00663296\ {\rm s}\) and the frequency resolution as \(0.02\ {\rm MHz}\).
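A minimal sketch of this beam model (the function name and units are our assumptions):

```python
import numpy as np
from scipy.constants import c

def beam_response(theta, nu):
    """Gaussian beam with sigma = 0.518 * lambda / (300 m);
    theta is the angular offset from beam center [rad], nu in Hz.
    At 1420 MHz this gives sigma ~ 1.25 arcmin, i.e. FWHM ~ 2.9 arcmin."""
    sigma = 0.518 * (c / nu) / 300.0
    return np.exp(-0.5 * (theta / sigma) ** 2)
```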
Based on the sensitivity of FAST, we dropped the galaxies with fluxes below \(5\ {\rm mJy}\). Additionally, when training with data from a single beam, we also removed those galaxies that could be observed by other beams but were not visible to the specific beam we used. Ultimately, we obtained 4495 HI galaxies. These galaxies include both bright and faint ones with varying shapes, and they produce different brightness levels in the time-ordered data, essentially covering the various scenarios encountered in real observations. Figure 4 illustrates a piece of simulated HI galaxy signal, containing only galaxy signals without RFI, which allows us to label each galaxy easily and conveniently.
Figure 2: The configuration of the light cone by stacking two boxes. The size of two boxes is shown in the figure, \(O\) is the observer and the orange lines join the observer with the four corners of the field of view. The redshift range of the light cone extends to \(z\approx 0.05\), and its angular area is approximately \(28\times 19\ {\rm deg}^{2}\).
### RFI Simulation
There are a number of software packages that can be employed for simulating RFI. HIDE (HI Data Emulator) is a software package for simulating HI observation data that can also generate mock RFI (Akeret et al. 2017; Yang et al. 2020). Hera_sim (Kerrigan et al. 2019) is a Python package developed for simulating Hydrogen Epoch of Reionization Array (HERA) data, which can also generate RFI (Sun et al. 2022). We integrated and adapted these two packages to simulate RFI, considering several types: narrowband RFI, broadband RFI, and 'clump' RFI.
Broadband RFI is instantaneous and intense, typically originating from sources such as lightning and transmission cables; it generally covers many frequency bands and manifests as 'bright lines' spanning numerous frequencies in time-ordered data. Narrowband RFI is usually caused by digital television, satellite communications, and mobile communication. A typical narrowband RFI appears as a long-lasting signal with a narrow frequency spread, presenting as intermittent stripes in time-ordered data. A third type of RFI, with a frequency spread and duration similar to galaxies, may stem from harmonics of satellite communications and certain short-term electromagnetic emissions; it exhibits a stain-like clump shape in time-ordered data and is more prone to confusion with galaxies. Ultimately, we successfully simulated these types of RFI. We then generated system noise following the method in Jiang et al. (2020) and added it to the data; Figure 5 shows a segment of the mock RFI and noise data, displaying the different types of RFI.
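The following numpy sketch illustrates the three RFI morphologies described above; the amplitudes, shapes, and occurrence rates are arbitrary assumptions for illustration and do not reproduce the HIDE or Hera_sim parameterizations.

```python
import numpy as np

def mock_rfi(n_time, n_freq, seed=None):
    """Illustrative (freq x time) map of the three RFI types; assumes
    n_freq >= 3 and n_time >= 2. All amplitudes are arbitrary."""
    rng = np.random.default_rng(seed)
    tod = np.zeros((n_freq, n_time))
    # Narrowband RFI: intermittent horizontal stripes at fixed channels.
    for ch in rng.choice(n_freq, size=3, replace=False):
        t0 = rng.integers(0, n_time // 2)
        tod[ch, t0:t0 + rng.integers(50, 200)] += rng.uniform(1.0, 5.0)
    # Broadband RFI: short, bright vertical lines spanning all channels.
    for t in rng.choice(n_time, size=2, replace=False):
        tod[:, t] += rng.uniform(2.0, 10.0)
    # 'Clump' RFI: a Gaussian blob easily confused with a galaxy.
    f0, t0 = rng.integers(0, n_freq), rng.integers(0, n_time)
    ff, tt = np.ogrid[:n_freq, :n_time]
    tod += rng.uniform(0.5, 2.0) * np.exp(
        -((ff - f0) ** 2 / 50.0 + (tt - t0) ** 2 / 200.0))
    return tod
```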
Figure 4: A piece of simulated TOD data of HI galaxies, with the horizontal axis representing time, the vertical axis representing frequency, and the color representing the antenna temperature value. As one can see, there are three galaxies present in the figure.
Figure 3: The spectrum of one of our simulated galaxies. The peak flux is about \(8\rm~{}mJy\) and the line width is about \(1\rm~{}MHz\) with a characteristic ‘double horn’ profile.
## 4 Model Training and Testing
With the mock data generated above, we trained our network model. We divided the dataset into a training set, a validation set, and a test set with a 3:1:1 ratio; we then trained and tested the PointRend Mask R-CNN model with five different backbones.
We set the batch size to 16 and the maximum number of iterations to 50,000. For each training sample, the network produces class, bounding-box, and mask predictions. The network also parses class and bounding-box information from the true mask, which serves as the ground truth. The model is trained by minimizing the loss function described in Sec. 2.1 and updating the parameters through backpropagation. We employed SGDM (Stochastic Gradient Descent with Momentum, Qian 1999) to update the parameters, which helps accelerate convergence, with the momentum set to 0.9 (Sutskever et al. 2013).
The base learning rate was set to 0.0005. We utilized a learning rate warmup strategy (Goyal et al. 2018), in which the learning rate increases linearly from a lower value to the preset base learning rate during the initial stage of training. This strategy helps the model converge more stably in the early training phase and reduces the risk of gradient explosion. We also employed a multi-step learning rate decay strategy (Krizhevsky et al. 2012), decaying the learning rate at the 10,000-th and 30,000-th iterations by multiplying the current learning rate by a preset decay factor (set to 0.6). This strategy assists in fine-tuning the model parameters during the later stages of training, providing more precise adjustments as training progresses and thereby enhancing the model's performance. Both strategies are widely used in deep learning. Weight decay was employed as a regularization technique to prevent overfitting and enhance the model's generalization, with the weight decay coefficient set to 0.0001, another widely applied strategy (Goodfellow et al. 2016).
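A sketch of this optimization recipe in PyTorch is given below; the warmup length and the stand-in model are our assumptions, while the momentum, base learning rate, milestones, decay factor, and weight decay follow the text.

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the PointRend Mask R-CNN
optimizer = torch.optim.SGD(model.parameters(), lr=5e-4,
                            momentum=0.9, weight_decay=1e-4)
# Linear warmup for an assumed 1000 iterations.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.01, total_iters=1000)
# Multi-step decay by 0.6; in this sketch the milestones are counted
# from the end of warmup, not from iteration 0.
decay = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10_000, 30_000], gamma=0.6)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, decay], milestones=[1000])
```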
For the backbone of our main concern, ResNeXt-101-FPN, Figure 6 shows the evolution of the loss function on the training and validation sets during training. As one can see, the loss decreases and eventually converges on the training set. On the validation set, the loss also displays a general decreasing and converging trend. Although slight oscillations in the validation loss appear after 12,000 steps and intensify after 32,000 steps, the values show no increasing trend, indicating that our model does not exhibit significant overfitting. Such oscillations are normal and can be attributed to the inherent randomness of the optimization process; their intensification in the later stages of training may be related to the size of the validation set and the batch size settings. In our task, an underfitted model would not effectively learn and recognize the various galaxy signals in the data, resulting in low recognition capability; an overfitted model, in contrast, would focus excessively on features of the current training data, degrading generalization on new data. To minimize both underfitting and overfitting, we monitored the model's performance on the validation set and ensured that the oscillations stayed within an acceptable
Figure 5: A piece of simulated TOD data of RFI, with the horizontal axis representing time, the vertical axis representing frequency, and the color representing the antenna temperature value. Narrowband RFI manifests as discontinuous horizontal lines with width, broadband RFI appears as thin vertical lines and stain-like RFI presents as large and small radiating spots.
range. Ultimately, we chose the network trained to 32,000 steps as our model. Similar phenomena were observed in the training processes of the networks with other backbones, and we also selected the final model for each backbone at the training step where the loss function had relatively converged on the training set and before the intensification of oscillations on the validation set.
After training, we tested the model on the test set and calculated the evaluation metrics according to the method in Sec. 2.2. The model was trained on an NVIDIA GeForce RTX 2080 Ti GPU.
## 5 Results
For the training results of the PointRend Mask R-CNN model with different backbones, we calculated their Precision, Recall, and F1 Score respectively, as shown in Table 1.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & Backbone & Precision & Recall & F1 Score \\ \hline PointRend Mask R-CNN & ResNet-50-FPN & 96.15\% & 94.93\% & 95.54\% \\ & ResNet-50-C5-Dilated & 100\% & 65.38\% & 79.07\% \\ & ResNet-101-FPN & 92.68\% & 97.43\% & 95.00\% \\ & ResNet-101-C5-Dilated & 98.14\% & 67.94\% & 80.29\% \\ & ResNeXt-101-FPN & 98.64\% & 93.59\% & 96.05\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Precision, Recall and F1 Score of Our PointRend Mask R-CNN Network with Different Backbones.
Figure 6: Loss function values of the model with ResNeXt-101-FPN backbone during the training process. The upper panel illustrates the variation of loss function values on the training set, while the lower panel shows the variation of loss function values on the validation set. The horizontal axis represents the training iteration steps, and the vertical axis indicates the function values. As can be seen, the loss function eventually converges on the training set. On the validation set, the loss function also exhibits a decreasing and converging trend. However, slight oscillations occur after 12,000 steps and intensify after 32,000 steps. Ultimately, we chose the model trained up to 32,000 steps.
The ResNet-50-C5-Dilated and ResNet-101-C5-Dilated backbones performed well in terms of Precision, but their Recall was quite poor, leading to low F1 Scores. The F1 Scores of the ResNets with FPN are generally better than those of the C5-Dilated ResNets. Moreover, ResNeXt-101-FPN performs slightly better than ResNet-50-FPN and ResNet-101-FPN, which is consistent with our expectations. Ultimately, we selected ResNeXt-101-FPN as the backbone for our model.
Figure 7 illustrates some examples of galaxies correctly recognized by the model, with the yellow lines delineating the regions determined by the model's output mask and the green lines representing the ground truth. In Figure 7(a), there is a bright RFI spot on the right side and one galaxy is contaminated by broadband RFI, but the model still accurately identifies both galaxies, successfully detecting multiple targets. Figure 7(b) shows that our model can also effectively discern galaxy data contaminated by narrowband RFI.
When a bright galaxy is encountered, as illustrated in Figure 7(c), our model does not simply identify the 'bright' regions but also captures the faint areas at the edges of the galaxy, indicating that the model has learned the characteristics of HI galaxies during training. From an image-processing perspective, the high gradient at the edges of such bright galaxies could easily lead to overfitting during training, but our model does not exhibit this issue. Figure 7(d) presents a typical situation where a galaxy overlaps with RFI that is easily confused with galaxy signals. As can be seen, after sufficient training, the model can handle such cases effectively.
The RFI contamination is more intense in Figure 7(e), affecting almost half of the galaxy's signal, yet the model still recognizes the galaxy, demonstrating its strong identification capability. Figure 7(f) shows that the model can successfully identify faint galaxies with lower signal-to-noise ratios. Figures 7(a) and 7(f) each contain more than one HI galaxy, highlighting the necessity of performing instance segmentation for galaxies.
Considering the diverse morphology of galaxies and the variety of RFI patterns, our model demonstrates strong generalization capabilities, indicating that it can successfully accomplish the task of instance segmentation for finding HI galaxies in RFI-contaminated data.
We present the different recognition results of the same example using different backbones as well, with the outcomes of various models marked with distinct color contours. Figure 8 shows a rather faint galaxy, with the two ResNet models using C5-Dilated failing to detect the target. In contrast, ResNet-50-FPN produces a false detection, possibly due to the influence of some faint galaxies with a low signal-to-noise ratio in the training dataset, leading the model to misinterpret random noise fluctuations as galaxies. Figure 9 displays a galaxy contaminated by broadband RFI, with ResNet-101-C5-Dilated still missing the target, while ResNet-101-FPN produces a false detection. These examples illustrate that, in some cases, galaxy recognition can be quite challenging, and models may have limitations and areas for improvement.
## 6 Discussion
### Construction of the Dataset
In fact, to accomplish the task of finding galaxies through instance segmentation, there are various ways to construct a dataset. One of the most direct methods is to label galaxies in actual telescope observational data and use the label masks as ground truth. This approach has two execution strategies. The first is to use manual labels (as in many other instance segmentation tasks); although simple and direct, this requires substantial human labor and is not easily scalable, and manual labeling would be required each time this machine-learning method is applied to a new telescope. The second is to use existing methods (e.g., the template-function method used by ALFALFA) for labeling, but this causes a certain degree of 'distortion' in the ground truth: because the recall and accuracy of existing methods are not \(100\%\), trained machine-learning models would tend to become 'similar to existing methods'.
Figure 7: Examples of the HI galaxies correctly recognized by our model. These images are all plotted by part of our final simulated TOD data, with the horizontal axis representing time, the vertical axis representing frequency, and the brightness representing the value of the antenna temperature. In the images, a brighter (whiter) pixel represents a higher temperature at that point, with the brightest areas reaching about \(3\ \mathrm{K}\). Yellow lines delineate the galaxy contour determined by the model’s output mask. The green lines represent the ground truth, which is the galaxy contour in our simulated TOD data. Other bright areas in the images correspond to various RFI and noise.
Besides using real observational data, another way to construct a dataset is to simulate the data, for example using simulated galaxy data with real RFI, real galaxy signals with simulated RFI, or both simulated galaxy data and simulated RFI on a noise background.
In our work, we used simulated galaxy data and simulated RFI, for two reasons. First, the differences between our simulated and real galaxy signals are minimal, so simulated galaxy data are a feasible substitute for real galaxy data. Second, it is difficult to search for and label galaxies directly in the TOD data; this is also why we used simulated RFI, since it is challenging to separate 'pure' RFI, free of astronomical source signals, from real data. Because the galaxy and RFI data are simulated separately, we can label them accurately and conveniently, which greatly assists our subsequent work.
It is worth mentioning that the construction of the dataset is very flexible. For example, if we want the model to have the ability to identify galaxies among specific RFIs, we can add these particular RFIs (either simulated or real signals) to the original data, allowing the model to 'learn' the ability to identify this type of RFI as interference. Furthermore, by adjusting the proportion of faint sources and bright sources in the dataset, the model can be more inclined to identify faint or bright source signals. The dataset construction method depends on the researcher, but all operations should be performed while ensuring the data is as realistic as possible.
Figure 8: An example from the results of our model with different backbones. The presentation way of these images is the same as in Fig. 7, and each image represents the recognition results of the model with the corresponding backbone. Specifically, the green lines represent the galaxy contour determined by the ground truth, while the yellow, blue, and pink lines represent the galaxy contour determined by our model with ResNeXt-101-FPN, ResNet-50-FPN and ResNet-101-FPN backbones, respectively.
### Model Generalization and Potential Improvements
Although our simulation can produce observational data for all 19 beams of the FAST telescope, we only used data from one beam for the final training. This is because, although the differing responses of the beams to the same signal is in practice one of the bases for distinguishing galaxies from RFI, we have not found a convenient way to simulate how the same RFI would be received by different beams, and using identical RFI data for all beams could introduce errors.
To further train the model, multi-beam data (e.g., FAST's 19-beam) can be used, inputting different beam data as different channels of two-dimensional data into the network. This allows the model to learn the response information of all beams for the same source and better search for galaxy signals.
In addition, in real observational data, effects other than RFI, such as standing waves and the system bandpass, will also affect the search results. This means real data may be more complex than our simulations and demand stronger detection capability from the model.
Owing to the model's excellent generalization ability, one can also attempt to apply our network to the detection and extraction of signals from other astronomical sources. In fact, the PointRend Mask R-CNN can effectively perform instance segmentation tasks for numerous categories, while in our work it has only been used for a two-class (galaxies and interference) instance segmentation task. So our network can also be applied to data generated in other stages of telescope data processing for object detection and signal extraction (e.g., Riggi et al. 2023).
Figure 9: Another example from the results of our model with different backbones. The presentation way of these images is the same as in Fig. 7, and each image represents the recognition results of the model with the corresponding backbone. The meaning of lines in different colors is the same as in Fig. 8, and the red lines represent the galaxy contour determined by our model with ResNet-50-C5-Dilated as the backbone.
Additionally, when training the model, the weights of the different components in the total loss function can be adjusted to give the model a stronger 'inclination'. For instance, increasing the weights of the \(L_{\mathrm{cls}}\) and \(L_{\mathrm{box}}\) components in the total loss function can make the model favor accurate recognition over precise segmentation. Furthermore, new network structures can be explored and incorporated into our model to extend its capabilities to other tasks.
It should be noted that, limited by the accuracy of the numerical simulation, we currently consider galaxies with relatively high HI fluxes. Whether our models can perform better for galaxies with lower masses (galaxies fainter than those in our dataset) needs to be further investigated. Finally, we will apply our model to real observational data in our subsequent work, trying to perform galaxy searches in real data and comparing the results with other traditional methods.
## 7 Conclusion
In our work, we constructed a Mask R-CNN network integrated with the PointRend method, aiming to find and extract galaxy signals in radio telescope observational data contaminated by RFI. We simulated the galaxy signals observed by FAST and the potential RFI impact as realistically as possible, and built a dataset based on this simulation for training and testing our network. We compared five different network architectures and chose the best-performing one, ultimately achieving precision and recall of \(98.64\%\) and \(93.59\%\), respectively. This demonstrates that our network can successfully accomplish the instance segmentation task of HI galaxy signals in TOD data.
Moreover, thanks to the PointRend method's high-precision rendering of fine details, our network can achieve more accurate segmentation when dealing with complex and subtle galaxy structures in astronomical images. We discussed methods for constructing the dataset and possible generalizations and improvements of the model, and we believe that our network has excellent extensibility and can be applied to other scenarios.
For the extraction of HI galaxy signals, although existing search algorithms have achieved some success in previous projects, there are still some drawbacks and challenges in practical applications. Our bold attempt to find galaxy signals using a deep neural network is an innovative application of machine learning methods to this task, which helps provide more reliable basic data for subsequent astronomical analyses and lays a better foundation for the next step of scientific research.
###### Acknowledgements.
We thank Long Xu and Dong Zhao for their useful discussions. We acknowledge the support by National SKA Program of China, No.2022SKA0110100, the CAS Interdisciplinary Innovation Team (JCTD-2019-05). We also acknowledge the science research grants from the China Manned Space Project with NO.CMS-CSST-2021-B01.
|
2305.04749 | Toeplitz Neural Network for Sequence Modeling | Sequence modeling has important applications in natural language processing
and computer vision. Recently, the transformer-based models have shown strong
performance on various sequence modeling tasks, which rely on attention to
capture pairwise token relations, and position embedding to inject positional
information. While showing good performance, the transformer models are
inefficient to scale to long input sequences, mainly due to the quadratic
space-time complexity of attention. To overcome this inefficiency, we propose
to model sequences with a relative position encoded Toeplitz matrix and use a
Toeplitz matrix-vector production trick to reduce the space-time complexity of
the sequence modeling to log linear. A lightweight sub-network called relative
position encoder is proposed to generate relative position coefficients with a
fixed budget of parameters, enabling the proposed Toeplitz neural network to
deal with varying sequence lengths. In addition, despite being trained on
512-token sequences, our model can extrapolate input sequence length up to 14K
tokens in inference with consistent performance. Extensive experiments on
autoregressive and bidirectional language modeling, image modeling, and the
challenging Long-Range Arena benchmark show that our method achieves better
performance than its competitors in most downstream tasks while being
significantly faster. The code is available at
https://github.com/OpenNLPLab/Tnn. | Zhen Qin, Xiaodong Han, Weixuan Sun, Bowen He, Dong Li, Dongxu Li, Yuchao Dai, Lingpeng Kong, Yiran Zhong | 2023-05-08T14:49:01Z | http://arxiv.org/abs/2305.04749v1 | # Toeplitz Neural Network for Sequence Modeling
###### Abstract
Sequence modeling has important applications in natural language processing and computer vision. Recently, the transformer-based models have shown strong performance on various sequence modeling tasks, which rely on attention to capture pairwise token relations, and position embedding to inject positional information. While showing good performance, the transformer models are inefficient to scale to long input sequences, mainly due to the quadratic space-time complexity of attention. To overcome this inefficiency, we propose to model sequences with a relative position encoded Toeplitz matrix and use a Toeplitz matrix-vector production trick to reduce the space-time complexity of the sequence modeling to log linear. A lightweight sub-network called relative position encoder is proposed to generate relative position coefficients with a fixed budget of parameters, enabling the proposed Toeplitz neural network to deal with varying sequence lengths. In addition, despite being trained on 512-token sequences, our model can extrapolate input sequence length up to 14K tokens in inference with consistent performance. Extensive experiments on autoregressive and bidirectional language modeling, image modeling, and the challenging Long-Range Arena benchmark show that our method achieves better performance than its competitors in most downstream tasks while being significantly faster. The code is available at [https://github.com/OpenNLPLab/Tnn](https://github.com/OpenNLPLab/Tnn).
## 1 Introduction
Figure 1: The left figure shows the training speed (\(x\)-axis), performances (\(y\)-axis), and GPU memory footprints (circle sizes) of the TNN and competing methods on the Long-Range Arena benchmark. The TNN beats the competitors with a clear margin. The right figure plots the extrapolation results with different sequence lengths, where the \(x\)-axis denotes sequence lengths, and the \(y\)-axis denotes \(\log\) PPL. It demonstrates that regardless of the sequence length, the PPL of the TNN remains constant.
Sequence modeling is a fundamental problem in natural language processing, speech processing, and computer vision. Various sequence modeling methods have been proposed in the literature, including recurrent (Hochreiter & Schmidhuber, 1997) and convolutional architectures (LeCun et al., 1989), and transformers (Vaswani et al., 2017). These models exploit different properties of sequential data. For example, recurrent models (Hochreiter & Schmidhuber, 1997) mimic the sequential property by processing the input sequentially while maintaining hidden states across steps. Convolutional models (LeCun et al., 1989) enforce a locality bias and only allow interactions between elements within local patches. Transformers use attention matrices to model pairwise relations regardless of the distance between tokens. Recently, Transformers (Vaswani et al., 2017; Dosovitskiy et al., 2021) have shown strong performance on a wide range of applications across domains and have become arguably one of the most successful architectures for sequence modeling in general.
There are two main components in transformers: the attention mechanism that learns pairwise correlations of tokens from data, and the position embedding to introduce positional inductive biases. The vanilla attention mechanism requires quadratic space-time complexity, which precludes Transformers from handling long sequences. Numerous attention variants have been proposed recently to reduce the complexity, including linear transformers (Katharopoulos et al., 2020), and Performer (Choromanski et al., 2021). Although the types of attention vary, the position embedding remains in every method, which indicates the importance of position information in sequence modeling. This motivates us to ask the following question: since position information is important, can we design a model that relies entirely on the position information of its elements regardless of their content, thus alleviating the quadratic computation cost of the vanilla attention mechanism?
In this paper, we give an affirmative answer to this question by introducing Toeplitz neural network, a new efficient architecture that solely exploits relative position relations for sequence modeling. In specific, instead of attention matrices, the Toeplitz neural network uses Toeplitz matrices to capture relations between each token pair. There are two motivations for selecting the Toeplitz matrix. One is that it compactly represents relative positional relations between tokens with much fewer parameters, _i.e.,_\(2n-1\) parameters for an \(n\times n\) Toeplitz matrix. The other is that the Toeplitz matrix-vector production can be efficiently processed in \(O(n\log n)\) complexity, which is exactly what we used in our token mixing operation. In this way, we avoid computing content similarities between tokens and effectively reduce the quadratic computation complexity of transformers to log linear, rendering a more efficient sequence modeling architecture.
We further propose _relative position encoder_, a lightweight module that generates relative position parameters to assemble the Toeplitz matrices, so that the number of the TNN's parameters no longer depends on the sequence length. Moreover, it allows TNN to deal with varying sequence lengths without retraining. In addition, input sequence length extrapolation has become an important ability in sequence modeling, as training on longer sequences can be prohibitively expensive (Press et al., 2022). We propose an exponential decay bias that is applied directly to the Toeplitz matrix. Our model maintains consistent performance up to a sequence length of 14K tokens in inference when trained on sequences of 512 tokens. We also show analytically that the Toeplitz neural network is a general form of sequence modeling, with transformers, CNNs, and the recently proposed State-space-based methods (Gu et al., 2022) as special cases.
We validate our model on a wide range of sequence modeling tasks and benchmarks. These include auto-regressive language modeling, text classification, image classification, and the Long-Range Arena benchmark. As illustrated in Fig. 1, our model achieves state-of-the-art performance on most tasks at a favorable log linear space-time complexity. It also demonstrates superior extrapolation capabilities when training on shorter sequences and evaluating on longer ones off-the-shelf.
## 2 Preliminary
In this section, we introduce concepts used throughout the paper, including positional embedding, token and channel mixing, and the Toeplitz matrix. Notations used can be found in Appendix A.
**Positional embedding** is introduced in transformers (Vaswani et al., 2017) to inject positional inductive bias. It often uses fixed or learned parameters to encode position-specific information, thus making the model position-aware. There are mainly two types of positional embeddings: the absolute positional embedding (Vaswani et al., 2017) and the relative position embedding (Shaw et al., 2018). In this work, we focus on the relative position embedding to emphasize pair-wise token
relations. A typical relative positional embedding (Raffel et al., 2020) is formulated as:
\[e_{ij}=\mathbf{q}_{i}^{\top}\mathbf{k}_{j}/\sqrt{d}+w_{i-j}, \tag{1}\]
where \(j,i\) are two positional indices, \(e_{ij}\) denotes the attention score before softmax. The \(\mathbf{q}_{i},\mathbf{k}_{j}\) represents the queries and keys in the attention. The \(w_{i-j}\) is a positional coefficient. In this case, the relative position information is added to the attention as a bias.
**Token and channel mixing** are used by (Yu et al., 2022) to refer to the two main procedures in sequence modeling. The token mixing refers to the process of mixing information between token pairs and the channel mixing for those between feature channels. In the Transformers, given the attention matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and token matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\), the attention operation \(\mathbf{A}\mathbf{X}\) can be regarded as a token mixing process and the FFN module is used for channel mixing.
Researchers often classify various sequence modeling techniques based on the token mixing techniques used. MLP-based methods (Liu et al., 2021; Tolstikhin et al., 2021) use matrix multiplication on the sequence dimension for token mixing. FFT-based methods (Lee-Thorp et al., 2022) utilize the FFT on the sequence dimension to mix token-wise information. The State-space-based methods (Gu et al., 2022) leverage the state equations and hidden states to model sequences, as well as perform interactions between tokens.
**Toeplitz matrix** is a special form of a matrix that has constant values along each diagonal running from left to right, _i.e.,_
\[\mathbf{T}_{ij}=\mathbf{T}_{i+1,j+1}=t_{i-j},\mathbf{T}\in\mathbb{R}^{n\times n}. \tag{2}\]
There are two nice properties of a Toeplitz matrix: 1). For an \(n\times n\) Toeplitz matrix, we can efficiently describe it with \(2n-1\) parameters. 2). The Toeplitz matrix-vector production is faster than standard matrix-vector production. In particular, we have:
**Theorem 2.1**.: _For a Toeplitz matrix \(\mathbf{T}\in\mathbb{R}^{n\times n}\) and any vector \(\mathbf{x}\in\mathbb{R}^{n}\), the time complexity of \(\mathbf{T}\mathbf{x}\) is \(O(n\log n)\)._
We provide detailed proof in Appendix B. This property enables us to use the Toeplitz matrices to perform efficient token mixing.
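To make this concrete, the standard trick embeds the \(n\times n\) Toeplitz matrix into a \(2n\times 2n\) circulant matrix, which the FFT diagonalizes; the following numpy sketch (our own illustration, not the released Tnn code) computes \(\mathbf{T}\mathbf{x}\) in \(O(n\log n)\).

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(t, x):
    """O(n log n) product T x for T[i, j] = t_{i-j}, via circulant embedding.

    t : length 2n-1 array ordered [t_{-(n-1)}, ..., t_0, ..., t_{n-1}]
    """
    n = x.shape[0]
    # First column of the 2n x 2n circulant: t_0..t_{n-1}, one padding zero,
    # then the negative-lag coefficients t_{-(n-1)}..t_{-1}.
    col = np.concatenate([t[n - 1:], [0.0], t[:n - 1]])
    xz = np.concatenate([x, np.zeros(n)])
    return np.fft.ifft(np.fft.fft(col) * np.fft.fft(xz))[:n].real

# Sanity check against the dense Toeplitz matrix.
n = 8
t = np.random.randn(2 * n - 1)
x = np.random.randn(n)
T = toeplitz(c=t[n - 1:], r=t[n - 1::-1])  # first column t_0.., first row t_0, t_{-1}, ..
assert np.allclose(T @ x, toeplitz_matvec(t, x))
```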
## 3 Toeplitz neural network
In this section, we provide a detailed design and analysis of our proposed Toeplitz Neural Network (TNN) by giving a glance at the overall structure of our model first and then describing each of its components. We also discuss the connection between the TNN and other sequence modeling methods at the end of this section.
### The overall architecture
Our model consists of a stack of Gated Toeplitz Units (GTU) and GLU (Shazeer, 2020). GTU is a modified GLU layer injected with the proposed Toeplitz Neural Operator (TNO), as illustrated in Fig. 2. A TNO is used to perform token mixing with a Toeplitz matrix. To generate relative position coefficients for the Toeplitz matrix, we propose a Relative Position Encoder (RPE), a lightweight fully-connected sub-network to encode the relative position information. An exponential decay bias is also added to the Toeplitz matrix to enable extrapolation on longer inputs.
### Toeplitz neural operator
Here, we will show how to use a Toeplitz matrix to represent relative positional information. Let us consider \(i,j\) to be two positions in a 1D sequence; using the relative position embedding in Eq. 1, we can define a Toeplitz matrix \(\mathbf{T}\in\mathbb{R}^{n\times n}\), where \(\mathbf{T}_{ij}=t_{i-j}\). Specifically, given a sequence \(\mathbf{x}\) of \(n\) tokens, \(\mathbf{x}=[x_{0},x_{1},\dots,x_{n-1}]^{\top}\in\mathbb{R}^{n}\), we use a scalar \(t_{i-j}\) to represent the relative position coefficient between \(x_{i}\) and \(x_{j}\). A Toeplitz matrix \(\mathbf{T}\in\mathbb{R}^{n\times n}\) can then be formed by gathering \(t_{i-j}\)
for every token pair:
\[\mathbf{T}=\left[\begin{array}{cccc}t_{0}&t_{-1}&\cdots&t_{-n+1}\\ t_{1}&t_{0}&&\vdots\\ \vdots&&t_{0}&t_{-1}\\ t_{n-1}&\ldots&t_{1}&t_{0}\end{array}\right]\in\mathbb{R}^{n\times n}. \tag{3}\]
Let us define a token mixing operation as:
\[\mathbf{y}=\mathbf{T}\mathbf{x}\in\mathbb{R}^{n}, \tag{4}\]
where \(\mathbf{y}\) is the token mixing result. For any \(d\)-dimensional sequences, the token mixing is performed on each dimension individually.
As aforementioned in Theorem 2.1, the computation complexity of Eq. 4 is \(O(n\log n)\). Since we need to perform token mixing on \(d\) dimensions, our TNO has a computation complexity of \(O(nd\log n)\). A natural question is how to obtain the relative position coefficients in \(\mathbf{T}\). A naive solution is to make the coefficients learnable parameters, such that the model can learn them directly from training data. However, this solution has drawbacks: 1). Parameter explosion: for a \(d\)-dimensional sequence of \(n\) tokens, there are a total of \((2n-1)d\) learnable parameters, which can be prohibitively large as \(n\) increases; it also shows unsatisfactory performance in our ablation studies in Sec. 4.3. 2). Fixed input sequence length: since the sequence length \(n\) is fixed in training, we are unable to adjust the sequence length during inference, _i.e.,_ changing the sequence length causes a significant performance drop. To address these drawbacks, we propose a relative position encoder to generate the relative position coefficients.
### Relative position encoder
We illustrate the network structure of our RPE in Fig. 2, which is a fully connected network with \(K\) layers. The input of the network is a 1-dimensional scalar, _i.e.,_ the value of \(-(n-1),\ldots,(n-1),\forall n\in\mathbb{N}^{+}\), and output a \(d\) dimension vector, which is used to assemble the Toeplitz matrix. In this case, the number of the TNN's parameters will no longer depend on the input sequence length and the TNN will have the flexibility to deal with various sequence lengths in the inference stage.
Note that recent literature (Mildenhall et al., 2021) claims that projecting the scalar input to a higher-dimensional space with high-frequency functions, _i.e.,_ \(\sin\) and \(\cos\) functions, before passing it to a network can lead to better performance. However, in our ablations, we find that using the original integers achieves better performance.
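A minimal sketch of such an encoder is given below; the hidden width, activation, and depth are illustrative assumptions (the paper's \(K\) layers and exact sizes may differ).

```python
import torch
import torch.nn as nn

class RelativePositionEncoder(nn.Module):
    """Maps a scalar relative position i-j to a d-dimensional coefficient
    vector; widths and activations are assumptions for illustration."""
    def __init__(self, d, hidden=64, k_layers=3):
        super().__init__()
        layers, in_dim = [], 1
        for _ in range(k_layers - 1):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, d))
        self.net = nn.Sequential(*layers)

    def forward(self, n):
        # Raw integers -(n-1), ..., n-1 as 1-D inputs, per the ablation.
        pos = torch.arange(-(n - 1), n, dtype=torch.float32).unsqueeze(-1)
        return self.net(pos)  # shape (2n - 1, d)
```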
**Exponential decay bias** Previous models (Vaswani et al., 2017; Qin et al., 2022) often use a fixed sequence length in both training and inference. If we need to infer a longer sequence, the model
Figure 2: Network structure overview of the proposed Toeplitz Neural Network. The proposed sequence modeling block is composed of a Gated Toeplitz Unit and a GLU (Shazeer, 2020). We propose the TNO to perform token mixing with only relative position information. We use a small fully-connected network named RPE to encode relative position information.
needs to be retrained on the longer sequence length to maintain performance, which can be prohibitively expensive in practice.
ALiBi (Press et al., 2022) shows that by applying a simple penalty to the query-key attention scores, the Transformer can handle longer sequence length in inference without compromising the performance. The penalty is a linear bias that is proportional to the distance between tokens. Inspired by this technique, we propose an exponential decay bias that directly applies to the Toeplitz matrix to achieve the same goal. In specific, let us define a decay rate of \(\lambda\in[0,1]\), and the new relative position coefficients \(\bar{t}_{i-j}\) in \(\mathbf{T}\) can be expressed as:
\[\bar{t}_{i-j}=\lambda^{|i-j|}t_{i-j}. \tag{5}\]
ALiBi can be seen as a special case of our method. Given the equation of ALiBi:
\[\bar{s}_{ij}=\mathbf{q}_{i}^{\top}\mathbf{k}_{j}/\sqrt{d}+m|i-j|,\quad\exp( \bar{s}_{ij})=\exp(\mathbf{q}_{i}^{\top}\mathbf{k}_{j}/\sqrt{d})\exp(m|i-j|), \tag{6}\]
and
\[s_{ij}=\mathbf{q}_{i}^{\top}\mathbf{k}_{j}/\sqrt{d},\quad\lambda\triangleq \exp(m), \tag{7}\]
we have:
\[\exp(\bar{s}_{ij})=\exp(s_{ij})\lambda^{|i-j|}. \tag{8}\]
It means the ALiBi applies an exponential decay on the softmax attention matrices whereas ours applies it on the Toeplitz matrices.
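Putting Eqs. (4) and (5) together, a token-mixing step might look like the following PyTorch sketch, which mirrors the FFT-based product above; the decay rate value and the wiring are illustrative assumptions, not the exact TNO implementation.

```python
import torch

def tno_mix(x, coeffs, lam=0.99):
    """x: (n, d) float32 tokens; coeffs: (2n - 1, d) RPE outputs ordered
    [t_{-(n-1)}, ..., t_{n-1}]; lam: decay rate in [0, 1].

    Applies t_bar_{i-j} = lam^{|i-j|} t_{i-j} (Eq. 5), then y = T x (Eq. 4)
    per feature dimension via the circulant-embedding FFT trick.
    """
    n, d = x.shape
    lags = torch.arange(-(n - 1), n, dtype=x.dtype).abs().unsqueeze(-1)
    t = coeffs * lam ** lags                                   # (2n - 1, d)
    # Circulant embedding per channel: columns of length 2n.
    col = torch.cat([t[n - 1:], torch.zeros(1, d), t[:n - 1]], dim=0)
    xz = torch.cat([x, torch.zeros(n, d)], dim=0)
    y = torch.fft.ifft(torch.fft.fft(col, dim=0) * torch.fft.fft(xz, dim=0), dim=0)
    return y[:n].real
```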
### Relation to other sequence modeling models
In this section, we will show the relationship between our model and other sequence modeling models such as the Transformers (Vaswani et al., 2017), CNNs (LeCun et al., 1989), and the State space (Gu et al., 2022). We also compare the theoretical space-time complexity of our model with previous sequence modeling models in Table. 1.
**Transformers** A Transformer with relative position embedding can be expressed as:
\[\mathbf{O}=\mathrm{Softmax}(\mathbf{Q}\mathbf{K}^{\top}/\sqrt{d}+\mathbf{T}) \mathbf{V}. \tag{9}\]
Comparing it with Eq. 4, the TNN can be regarded as an attention-free transformer, _i.e.,_ removing the \(\mathbf{Q},\mathbf{K}\), and the \(\mathrm{Softmax}\), while only keeping the relative position matrices \(\mathbf{T}\).
**CNNs** A convolutional layer can be viewed as a Toeplitz matrix of a special structure. Considering a 1D convolution:
\[\mathbf{y}=\mathbf{h}*\mathbf{x},\mathbf{y}_{i}=\sum_{j=0}^{i}\mathbf{h}_{i-j} \mathbf{x}_{j},\mathbf{h}\in\mathbb{R}^{m},\mathbf{x}\in\mathbb{R}^{n}, \mathbf{y}\in\mathbb{R}^{n+m-1}. \tag{10}\]
Let's define a Toeplitz matrix \(\mathbf{T}\in\mathbb{R}^{(n+m-1)\times(n+m-1)}\):
\[\mathbf{T}_{st}=\begin{cases}\mathbf{h}_{s-t}&0\leq s-t\leq m-1,0\leq t\leq n-1\\ 0&\mathrm{others},\end{cases}\quad\mathbf{z}=\left[\begin{array}{c}\mathbf{x}\\ \mathbf{0}_{m-1}\end{array}\right]\in\mathbb{R}^{n+m-1}. \tag{11}\]
Then:
\[\mathbf{y}=\mathbf{T}\mathbf{z}\in\mathbb{R}^{n+m-1}. \tag{12}\]
Therefore, a 1D CNN can be viewed as a special case of the TNN with a zero-padded input. For better illustration, we provide a matrix form of CNN operation in Appendix C.1.
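This correspondence is easy to verify numerically; a short sketch, with sizes chosen arbitrarily:

```python
import numpy as np

# Verify Eqs. (10)-(12): a 1D convolution equals a banded lower-triangular
# Toeplitz matrix applied to the zero-padded input.
n, m = 6, 3
x, h = np.random.randn(n), np.random.randn(m)
N = n + m - 1
T = np.zeros((N, N))
for s in range(N):
    for t in range(N):
        if 0 <= s - t <= m - 1 and t <= n - 1:
            T[s, t] = h[s - t]
z = np.concatenate([x, np.zeros(m - 1)])
assert np.allclose(T @ z, np.convolve(x, h, mode="full"))
```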
**State space** The equation of the State space can be expressed as:
\[\mathbf{u}_{i}=\mathbf{A}\mathbf{u}_{i-1}+\mathbf{B}\mathbf{x}_{i},\mathbf{y }_{i}=\mathbf{C}\mathbf{u}_{i},\mathbf{A}\in\mathbb{R}^{h\times h},\mathbf{B} \in\mathbb{R}^{h\times 1},\mathbf{C}\in\mathbb{R}^{1\times h},i=1,\ldots,n \tag{13}\]
where \(\mathbf{x}_{i}\) is the input, \(\mathbf{y}_{i}\) is the output, and \(\mathbf{u}_{i}\) is the intermediate state. According to (Gu et al., 2022), the output of the State space is:
\[\mathbf{y}_{i}=\sum_{j=0}^{i}\mathbf{k}_{i-j}\mathbf{x}_{j},\mathbf{k}=\left( \mathbf{C}\mathbf{B},\mathbf{C}\mathbf{A}\mathbf{B},\ldots,\mathbf{C}\mathbf{A }^{n-1}\mathbf{B}\right)^{\top}\in\mathbb{R}^{n}. \tag{14}\]
Let's define the Toeplitz matrix \(\mathbf{T}\in\mathbb{R}^{n\times n}\):
\[\mathbf{T}_{ij}=\begin{cases}\mathbf{k}_{i-j}&i\geq j\\ 0&i<j\end{cases}. \tag{15}\]
Then:
\[\mathbf{y}=\mathbf{Tx},\mathbf{x}\in\mathbb{R}^{n},\mathbf{y}\in\mathbb{R}^{n}. \tag{16}\]
In this case, the State space can be regarded as a special form of TNN with the coefficients that are calculated by the State space. We also provide the matrix form in Appendix C.2 for better illustration.
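The equivalence can likewise be checked numerically; the following sketch (our own check, not the S4 implementation) unrolls the recurrence and compares it with the Toeplitz product of Eq. (16).

```python
import numpy as np

h_dim, n = 4, 16
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((h_dim, h_dim)) / np.sqrt(h_dim)  # kept small for stability
B, C = rng.standard_normal((h_dim, 1)), rng.standard_normal((1, h_dim))
x = rng.standard_normal(n)

# Recurrent form: u_i = A u_{i-1} + B x_i, y_i = C u_i (Eq. 13).
u, y_rec = np.zeros((h_dim, 1)), []
for i in range(n):
    u = A @ u + B * x[i]
    y_rec.append(float(C @ u))

# Convolutional form: k = (CB, CAB, ..., CA^{n-1}B), y = T x (Eqs. 14-16).
k = np.array([float(C @ np.linalg.matrix_power(A, i) @ B) for i in range(n)])
T = np.array([[k[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
assert np.allclose(y_rec, T @ x)
```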
## 4 Experiment
We compare our method to four kinds of sequential modeling methods including attention-based methods, MLP-based methods, FFT-based methods, and State-space-based methods. In particular, we select the following methods:
* Attention-based: Vanilla transformer(Vaswani et al., 2017), Transformer-LS(Zhu et al., 2021), FLASH, (Hua et al., 2022), 1+elu (Katharopoulos et al., 2020), Performer (Choromanski et al., 2020), cosFormer (Qin et al., 2022).
* MLP-based: gMLP(Liu et al., 2021), Synthesizer (Random), Synthesizer (Dense) (Tay et al., 2021).
* FFT-based: FNet(Lee-Thorp et al., 2022), GFNet (Rao et al., 2021), AFNO(Guibas et al., 2021).
* State-space-based: S4(Gu et al., 2022), DSS (Gupta et al., 2022), GSS(Mehta et al., 2022).
We evaluate our methods on the WikiText-103 (Merity et al., 2017) for autoregressive language modeling and the input length extrapolation ability, and the GLUE benchmark (Wang et al., 2018) for bidirectional language modeling. We also validate the accuracy and efficiency of our methods in handling long-range dependencies on the Long-Range Arena benchmark (Tay et al., 2020). To demonstrate the robustness of our model, we implement our model in DeiT (Touvron et al., 2021) structure and compare its performance with the vanilla DeiT (Touvron et al., 2021) on the ImageNet-1K (Deng et al., 2009) for image classification.
### Setting
We implement our models in Pytorch (Paszke et al., 2019) and train them on 8 V100 GPUs. We adopt the same training configuration for all competitors, including batch size, learning rate, training epochs/updates, _etc._ More detailed hyper-parameters are listed in Appendix D.
For the autoregressive language modeling, all models are trained on the WikiText-103 dataset (Merity et al., 2017) for 50K steps with a learning rate of \(0.005\). We use perplexity (PPL) as the evaluation metric.
For the bidirectional language modeling, we choose the Roberta (Liu et al., 2019) model as the base model structure for all methods. All models are pre-trained on the WikiText-103 (Merity et al., 2017) for 50K steps with lr=0.005 and fine-tuned on the GLUE dataset (Wang et al., 2018). We use different learning rates among 1e-5, 3e-5, 6e-5, 1e-4 and choose the best result after fine-tuning for 3 epochs.
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline \hline Method & CNN & RNN & \begin{tabular}{c} Vanilla \\ Attention \\ \end{tabular} & \begin{tabular}{c} Linear \\ Attention \\ \end{tabular} & MLP & FFT & \begin{tabular}{c} State \\ space \\ \end{tabular} & TNN \\ \hline \begin{tabular}{c} Time \\ complexity \\ \end{tabular} & \(ned\) & \(nd^{2}\) & \(n^{2}d\) & \(nd^{2}\) & \(n^{2}d\) & \(nd\log n\) & \(nd\log n\) & \(nd\log n\) \\ \hline \begin{tabular}{c} Space \\ complexity \\ \end{tabular} & \(nd\) & \(nd\) & \(n^{2}d\) & \(nd\) & \(n^{2}d\) & \(nd\) & \(nd\) & \(nd\) \\ \hline
\begin{tabular}{c} Parallel \\ \end{tabular} & True & False & True & True & True & True & True & True \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of theoretical space-time complexity of several models.** Parallel indicates whether parallel training is possible, \(n\) indicates the sequence length, \(d\) indicates the feature dimension, and \(e\) indicates the CNN kernel size. Here we only consider 1D CNNs.
For the Long-Range Arena benchmark, we adopt the same experimental configurations as Skyformer (Chen et al. 2021). We ensure that the performances and efficiencies of all methods are obtained with a similar parameter size and the same training hyperparameters.
For the image classification on the ImageNet-1k dataset, we adopt the Deit (Touvron et al., 2021) network structure and replace the transformer layers with our model.
### Results
**Autoregressive language modeling** Autoregressive language modeling is a crucial task that requires the model to estimate a causal probability distribution given previously seen tokens. In Table 2, we compare the proposed TNN with competing sequence modeling models. First, compared to existing MLP-based methods, TNN performs better by a clear margin on both the val and test sets. Transformer-based methods currently dominate sequence modeling; as a strong baseline, the Transformer adopts a standard self-attention module with quadratic complexity, yet TNN still outperforms it on both val and test sets. In addition, TNN achieves better results than most efficient transformers, including FLASH, 1+elu, Performer, and cosFormer. Finally, compared with the recently emerging State-space-based sequence modeling methods, TNN achieves superior performance to all competing methods. This demonstrates the effectiveness of our method in causal models.
Further, we also compared the extrapolation capabilities of each method. In Figure 1, we show that our method outperforms all other methods and is comparable to ALiBi (Press et al., 2022). Complete results can be found in the Appendix (Table 15).
**Bidirectional language modeling** We benchmark bidirectional modeling methods on the GLUE datasets in Table 3. TNN achieves competitive results across all tasks. Further, it is worth noting that TNN boosts the results on CoLA by a significant margin, showing an ability to reason about logical information in sequences. This demonstrates the effectiveness of TNN in bidirectional language modeling.
**Long-Range Arena benchmark** As shown in Table 4, we compare TNN with competing methods across five tasks of the LRA benchmark. The results before the Transformer-LS are taken from Skyformer (Chen et al., 2021). As demonstrated, TNN achieves the best scores on three tasks and the second places on the left two tasks. In terms of overall results, TNN outperforms all other competing methods including S4 (Gu et al., 2022) 1
Footnote 1: We re-run the S4 experiments with the new configuration to match the number of parameters. For the sake of completeness, we also compare TNN with S4 in the original size of S4 using the suffix “-Large” in Table14, which validates our ability to encode long sequences.
For speed comparison, we compare the training speed of TNN with other methods in Table 5. For a fair and comprehensive comparison, we follow exactly the same configurations as Skyformer (Chen et al. 2021) and report steps per second under different sequence lengths. Timing is conducted on an Nvidia A6000 GPU with 48GB of GPU memory.
**Image modeling** We report classification results on the ImageNet-1k dataset in Table 6. As shown, under similar parameter sizes, TNN achieves better results than DeiT-Tiny and comparable results to DeiT-Small. This demonstrates the capability of our method in encoding visual signals.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & \begin{tabular}{c} PPL \\ (val) \\ \end{tabular} & \begin{tabular}{c} PPL \\ (test) \\ \end{tabular} &
\begin{tabular}{c} Params \\ (m) \\ \end{tabular} \\ \hline \hline \multicolumn{4}{l}{_Attn-based_} \\ \hline Trans & 24.40 & 24.78 & 44.65 \\ LS & **23.56** & **24.05** & 47.89 \\ FLASH & 25.92 & 26.70 & 42.17 \\
1+elu & 27.44 & 28.05 & 44.65 \\ Performer & 62.50 & 63.16 & 44.65 \\ cosFormer & 26.53 & 27.06 & 44.65 \\ \hline \multicolumn{4}{l}{_MLP-based_} \\ \hline Syn(D) & 31.31 & 32.43 & 46.75 \\ Syn(R) & 33.68 & 34.78 & 44.65 \\
gMLP & 28.08 & 29.13 & 47.83 \\ \hline \multicolumn{4}{l}{_Ss-based_} \\ \hline S4 & 38.34 & 39.66 & 45.69 \\ DSS & 39.39 & 41.07 & 45.73 \\ GSS & 29.61 & 30.74 & 43.84 \\ \hline \multicolumn{4}{l}{_Ours_} \\ \hline TNN & 23.98 & 24.67 & 48.68 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Performances comparison of autoregressive language modeling on the Wikitext-103 dataset.** The best result is highlighted in **bold** and the second in underline. \(\downarrow\) means _lower is better_. Attn stands for Attention, Ss stands for State space, Trans stands for Transformer, LS stands for Transformer-LS.
### Ablation study
**Network structure configuration** We ablate different structure configurations on the autoregressive language modeling task in Table 7. We consider three options of configuration: the GTU+GLU, GTU only, and attention+GLU. We empirically find that the GTU+GLU one achieves better performance than other options and choose it as our structure in TNN.
**Input of relative position encoder** In Table 8, we ablate different RPE inputs on language modeling. _(-(n-1), ..., (n-1))_ denotes feeding the \(2n-1\) integer offsets into the RPE, and _(-(n-1), ..., (n-1))/n_ denotes their normalized counterparts. _sin, cos_ denotes the absolute position embedding method used in (Vaswani et al., 2017). We empirically find that using the original integers as the input of the RPE leads to the best performance.
| Model | Text | ListOps | Retrieval | Pathfinder | Image | AVG. |
| --- | --- | --- | --- | --- | --- | --- |
| Transformer | 61.95 | 38.37 | 80.69 | 65.26 | 40.57 | 57.37 |
| Kernelized Attention | 60.22 | 38.78 | 81.77 | 70.73 | 41.29 | 58.56 |
| Nystromformer | 64.83 | 38.51 | 80.52 | 69.48 | 41.30 | 58.93 |
| Linformer | 58.93 | 37.45 | 78.19 | 60.93 | 37.96 | 54.69 |
| Informer | 62.64 | 32.53 | 77.57 | 57.83 | 38.10 | 53.73 |
| Performer | 64.19 | 38.02 | 80.04 | 66.30 | 41.43 | 58.00 |
| Reformer | 62.93 | 37.68 | 78.99 | 66.49 | 48.87 | 58.99 |
| BigBird | 63.86 | 39.25 | 80.28 | 68.72 | 43.16 | 59.05 |
| Skyformer | 64.70 | 38.69 | 82.06 | 70.73 | 40.77 | 59.39 |
| LS | 66.62 | 40.30 | 81.68 | 69.98 | 47.60 | 61.24 |
| cosFormer | 67.70 | 36.50 | 83.15 | 71.96 | 51.23 | 62.11 |
| FLASH | 64.10 | 38.70 | 86.10 | 70.25 | 47.40 | 61.31 |
| S4 | 85.92 | **50.60** | 67.30 | 72.44 | **78.07** | 70.87 |
| TNN | **86.39** | 47.33 | **89.40** | **73.89** | 77.84 | **74.97** |

Table 4: **Performance comparison on the Long-Range Arena benchmark.** We use **bold** and underline to highlight the best and second-best result of each task, respectively. The proposed TNN achieves the best overall performance and outperforms all competing methods.
| Method | MNLI | QNLI | QQP | SST-2 | MRPC | CoLA | AVG | Params (M) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _Attn-based_ | | | | | | | | |
| Trans | 79.37/79.07 | 87.79 | 88.04 | 90.25 | 88.35 | 38.63 | **78.79** | 124.70 |
| LS | 77.01/76.78 | 84.86 | 86.85 | 90.25 | 82.65 | 40.65 | 77.01 | 128.28 |
| FLASH | 79.45/80.08 | 87.10 | 88.83 | 90.71 | 82.50 | 29.40 | 76.87 | 127.12 |
| 1+elu | 74.87/75.37 | 82.59 | 86.90 | 87.27 | 83.03 | - | 70.00 | 124.70 |
| Performer | 58.85/59.52 | 63.44 | 79.10 | 81.42 | 82.11 | 19.41 | 63.41 | 124.70 |
| cosFormer | 75.10/75.95 | 82.61 | 86.12 | 89.45 | 81.93 | 33.03 | 74.88 | 124.70 |
| _MLP-based_ | | | | | | | | |
| Syn(D) | 50.93/51.02 | 62.80 | 81.33 | 82.34 | 81.79 | - | 58.60 | 131.00 |
| Syn(R) | 52.82/52.13 | 62.29 | 78.11 | 82.22 | 81.38 | 4.63 | 59.08 | 129.42 |
| gMLP | 73.30/73.60 | 80.56 | 86.48 | 90.25 | 82.30 | 36.06 | 74.65 | 131.08 |
| _FFT-based_ | | | | | | | | |
| FNet | 62.45/64.71 | 73.31 | 79.43 | 81.88 | 82.91 | - | 63.53 | 124.70 |
| GFNet | 66.75/67.45 | 65.42 | 80.25 | 84.40 | 82.44 | 9.62 | 65.19 | 130.06 |
| AFNO | 68.79/69.28 | 73.20 | 85.12 | 88.88 | 82.35 | 36.19 | 71.97 | 121.57 |
| _Ss-based_ | | | | | | | | |
| S4 | 68.45/68.42 | 72.14 | 84.61 | 87.04 | 83.36 | 23.01 | 69.58 | 131.79 |
| DSS | 35.46/35.22 | 50.80 | 65.18 | 65.37 | 80.95 | 6.14 | 48.45 | 123.76 |
| GSS | 50.53/51.58 | 62.58 | 80.98 | 85.67 | 82.11 | 6.56 | 60.00 | 122.80 |
| _Ours_ | | | | | | | | |
| TNN | 76.72/76.06 | 85.06 | 88.30 | 90.60 | 82.96 | 49.85 | 78.51 | 126.40 |

Table 3: **Performance comparison of bidirectional sequence modeling on the GLUE benchmark.** MNLI is reported on the match/mismatch splits. MRPC is reported by F1 score. CoLA is reported by Matthews correlation coefficient. All other tasks are measured by accuracy. The best result is highlighted in **bold** and the second best is underlined. Higher is better for all metrics. "-" means unconverged. Attn stands for Attention, Ss stands for State space, Trans stands for Transformer, LS stands for Transformer-LS.
**Relative position encoder** There are two ways to generate the relative position coefficients of the Toeplitz matrix. One is to set these coefficients as learnable parameters and let TNN learn them from data. The other is to use our proposed RPE network to generate them. We compare these two strategies in Table 9. TNN with our RPE network achieves an improvement of 2.47 PPL in language modeling.
**Exponential decay rate** We ablate different exponential decay rates on language modeling in Table 10. We train the model variants with a fixed sequence length of 512, test them on a series of sequence lengths from 512 to 14336, and report the average PPL. Without exponential decay, the model fails to extrapolate to longer sequence lengths. We also test our model with a learnable decay rate, but it does not perform better. We empirically select 0.99 as the exponential decay rate in our method.
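To make the decay mechanism concrete, the sketch below applies an exponential decay \(\lambda^{|i|}\) to Toeplitz coefficients \(t_{-(n-1)},\dots,t_{n-1}\) and computes the Toeplitz matrix-vector product via circulant embedding and FFT in \(O(n\log n)\). This is a hedged illustration: the function and argument names are ours, the coefficients would come from the RPE in the actual model, and the exact placement of the decay follows our reading of the ablation above rather than a reference implementation.

```python
import torch

def toeplitz_matvec_with_decay(t_pos, t_neg, x, decay=0.99):
    """Compute T @ x for a Toeplitz matrix T_{ij} = t_{i-j}, with each
    coefficient t_k scaled by decay**|k|.
    t_pos: [n] holding t_0 .. t_{n-1};  t_neg: [n] holding t_0 .. t_{-(n-1)}
    (t_neg[0] duplicates t_0 and is unused);  x: [..., n]."""
    n = x.shape[-1]
    idx = torch.arange(n, dtype=x.dtype)
    t_pos = t_pos * decay ** idx          # decay by positive offset |k|
    t_neg = t_neg * decay ** idx          # decay by negative offset |k|
    # Embed T into a 2n circulant: [t_0..t_{n-1}, 0, t_{-(n-1)}..t_{-1}],
    # then circular convolution via FFT gives T @ x in the first n entries.
    c = torch.cat([t_pos, torch.zeros(1, dtype=x.dtype), t_neg[1:].flip(0)])
    prod = torch.fft.irfft(torch.fft.rfft(c) * torch.fft.rfft(x, 2 * n), 2 * n)
    return prod[..., :n]
```

Because only the \(2n-1\) coefficients depend on position, the same (decayed) kernel can be evaluated at any sequence length, which is what enables the fixed-parameter extrapolation discussed above.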
## 5 Conclusion
In this paper, we propose the Toeplitz neural network, a new efficient architecture that relies entirely on relative positional information for sequence modeling. The proposed model enjoys a favorable log-linear space-time complexity. Thanks to the proposed relative position encoder and exponential decay techniques, the Toeplitz neural network generalizes to long sequences with a fixed budget of parameters while obtaining consistently superior performance to competing methods across multiple challenging tasks, including language modeling, image modeling, and sequence modeling on long inputs, _i.e.,_ the Long-Range Arena benchmark. The Toeplitz neural network is also a generic sequence modeling approach that renders various popular architectures, such as Transformers, CNNs, and state-space-based methods, as its special forms, offering a unified view of sequence modeling.
| Model | DeiT-Tiny Acc | DeiT-Tiny Param | DeiT-Small Acc | DeiT-Small Param |
| --- | --- | --- | --- | --- |
| Transformer | 72.20 | 5.7M | 79.90 | 22.0M |
| TNN | 72.29 | 6.4M | 79.20 | 23.4M |

Table 6: **Classification results on the ImageNet-1k dataset** at DeiT-Tiny and DeiT-Small scales.

Table 10: **Ablation of exponential decay rates in input length extrapolation.** The model variants are trained on a fixed sequence length of 512 and tested on a series of sequence lengths ranging from 512 to 14336. We compute the average PPL over all sequence lengths.
Table 9: **Performance comparison of TNN with and without the RPE network.** The RPE network brings an improvement in language modeling.
| Model | 1K | 2K | 3K | 4K | 5K |
| --- | --- | --- | --- | --- | --- |
| Transformer | 15.34 | 3.05 | - | - | - |
| FLASH | 20.49 | 11.06 | 8.47 | 7.23 | 6.93 |
| LS | 15.43 | 8.68 | 6.28 | 5.24 | 4.76 |
| Performer | 28.41 | 16.23 | 12.02 | 10.04 | 9.06 |
| cosFormer | 22.94 | 12.82 | 9.19 | 7.79 | 7.14 |
| Linformer | 27.17 | 15.63 | 11.26 | 8.77 | 7.42 |
| Reformer | 20.16 | 10.87 | 7.46 | 5.69 | 4.70 |
| Nystrom | 14.12 | 9.62 | 7.46 | 6.11 | 5.26 |
| State space | 25.99 | 14.88 | 8.35 | 6.66 | 5.40 |
| FNet | 24.61 | 14.37 | 9.18 | 8.39 | 7.44 |
| TNN | 25.72 | 15.35 | 9.90 | 8.07 | 7.00 |

Table 5: **Speed comparison (steps per second) on the Long-Range Arena benchmark.** We mark a method with a dash if it exhausts GPU memory. Higher is better. 1K, ..., 5K denote the input sequence length.
| Method | PPL (val) |
| --- | --- |
| (-(n-1), ..., (n-1)) | 23.98 |
| (-(n-1), ..., (n-1))/n | 24.11 |
| sin, cos | 24.04 |

Table 8: **Results comparison with different RPE inputs.**
2305.18360 | Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking
Neural Networks | Spiking Neural Networks (SNNs) have gained increasing attention as
energy-efficient neural networks owing to their binary and asynchronous
computation. However, their non-linear activation, that is
Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store a
membrane voltage to capture the temporal dynamics of spikes. Although the
required memory cost for LIF neurons significantly increases as the input
dimension goes larger, a technique to reduce memory for LIF neurons has not
been explored so far. To address this, we propose a simple and effective
solution, EfficientLIF-Net, which shares the LIF neurons across different
layers and channels. Our EfficientLIF-Net achieves comparable accuracy with the
standard SNNs while bringing up to ~4.3X forward memory efficiency and ~21.9X
backward memory efficiency for LIF neurons. We conduct experiments on various
datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and
N-Caltech101. Furthermore, we show that our approach also offers advantages on
Human Activity Recognition (HAR) datasets, which heavily rely on temporal
information. | Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda | 2023-05-26T22:55:26Z | http://arxiv.org/abs/2305.18360v1 | # Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks
###### Abstract
Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their nonlinear activation, that is Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store a membrane voltage to capture the temporal dynamics of spikes. Although the required memory cost for LIF neurons significantly increases as the input dimension goes larger, a technique to reduce memory for LIF neurons has not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares the LIF neurons across different layers and channels. Our EfficientLIF-Net achieves comparable accuracy with the standard SNNs while bringing up to \(\sim 4.3\times\) forward memory efficiency and \(\sim 21.9\times\) backward memory efficiency for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which heavily rely on temporal information.
Neuromorphic Computing, Memory Efficiency, Spiking Neural Networks
## I Introduction
Spiking Neural Networks (SNNs) have gained significant attention as a promising candidate for low-power machine intelligence [1, 2, 3, 4, 5, 6]. By mimicking biological neuronal mechanisms, Leaky-Integrate-and-Fire (LIF) neurons in SNNs convey visual information through binary spikes over time. The LIF neuron [7] captures temporal dynamics by accumulating incoming spikes in a membrane potential and generates output spikes when the membrane potential exceeds a firing threshold. Such binary and asynchronous operation gives SNNs energy-efficiency benefits on low-power neuromorphic hardware [8, 9, 10, 11].
Although SNNs bring computational efficiency benefits, the memory overhead caused by LIF neurons can be problematic. As shown in Fig. 1, LIF neurons require additional memory for storing the membrane potential values, which change over time. This is not the case for traditional Artificial Neural Networks (ANNs), where most non-linear activation functions are parameter-free (_e.g._, ReLU, Sigmoid). Moreover, LIF neurons occupy a large portion of memory for high-resolution input images (Fig. 1). For instance, LIF memory accounts for \(53\%\) of the memory overhead of ResNet19 [12] on a \(224\times 224\) image. Unfortunately, this LIF memory overhead has been overlooked in SNN studies so far.
To address this, we propose EfficientLIF-Net, in which LIF neurons are shared across different layers and channels. By sharing the memory, we do not need to assign separate memory to each layer and channel. For layer-wise sharing, we use common LIF neurons across layers with the same activation size, such as the layers within one ResNet block [12]. For channel-wise sharing, we equally divide the LIF neurons into multiple groups along the channel dimension and share common LIF neurons across the groups. Surprisingly, our EfficientLIF-Net provides performance similar to standard SNN models in which each layer and channel has independent LIF neurons. We show that gradients can successfully flow back through all layers, so the weights can be trained to capture the temporal dynamics of spike information.
Furthermore, the proposed EfficientLIF-Net brings huge benefits in saving memory costs during training. The spatio-temporal operation of SNNs induces a huge computational graph for computing backward gradients. Each LIF neuron needs to store its membrane potential to let gradients flow back, and the training memory increases as the SNN goes deeper and uses more timesteps. This huge computational graph often cannot be trained within limited GPU memory [13, 14, 15].
Fig. 1: Motivation of our work. **Top:** Comparison between neurons in ANNs and SNNs: Unlike ReLU neurons, which do not require any parameters, LIF neurons maintain a membrane potential with voltage values that change across timesteps. **Bottom:** Memory cost breakdown for the Spiking-ResNet19 architecture during inference on an image with a resolution of \(224\times 224\).
In this context, since our architecture shares the membrane potential across all layers, we can compute each layer's membrane potential from the next layer's membrane potential in real time during the backward pass. This enables us to perform backpropagation without storing/caching the membrane potentials of all layers in memory (from the forward step).
Our contributions can be summarized as follows:
* We pose the memory overhead problem of LIF neurons in SNNs, where the memory cost significantly increases as the image size goes larger.
* To address this, we propose a simple and effective architecture, EfficientLIF-Net where we share the LIF neurons across different layers and channels.
* EfficientLIF-Net also reduces memory cost during training by computing each layer's (channel's) membrane potential from the next layer's (channel's) membrane potential real-time during backward step, drastically reducing the caching of membrane potentials.
* We conduct experiments on five public datasets, validating that EfficientLIF-Net achieves comparable performance to standard SNNs while bringing up to \(\sim 4.3\times\) forward memory efficiency and up to \(\sim 21.9\times\) backward memory efficiency for LIF neurons.
* We also observe that the LIF memory cost problem exists in pruned SNNs and in fact the LIF memory overhead percentage goes higher when the weight sparsity goes higher. Our EfficientLIF-Net successfully reduces the LIF memory cost to \(\sim 23\%\) in pruned SNNs while achieving iso-accuracy compared to the pruned baseline.
## II Related Work
### _Spiking Neural Networks_
Different from standard Artificial Neural Networks (ANNs), Spiking Neural Networks (SNNs) convey information with temporal spikes [1, 2]. Here, the Leaky-Integrate-and-Fire (LIF) neuron plays an important role as the non-linear activation. LIF neurons have a "memory" called the membrane potential, where incoming spikes are accumulated. Output spikes are generated when the membrane potential exceeds a firing threshold, after which the membrane potential resets to zero. This firing operation of LIF neurons is non-differentiable, so previous SNN literature has focused on resolving the gradient problem. A widely-used training technique is converting pre-trained ANNs to SNNs using weight or threshold balancing [16, 17, 18, 19, 20]. However, such methods require a large number of timesteps to emulate float activations with binary spikes. Recently, a line of works proposes to circumvent the non-differentiable backpropagation problem by defining a surrogate function [21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. As the weights are trained to consider temporal dynamics, these methods show both high performance and short latency. Although previous methods have made huge advances in performance, they assume that SNNs have separate LIF neurons for different layers and channels, which imposes a huge memory overhead in both the forward and backward passes.
### _Compression Methods for Efficient SNNs_
Due to the energy-efficiency benefit of SNNs, they can be suitably implemented on edge devices with limited memory storage [31, 32, 33]. Therefore, a line of work has proposed various methods to reduce the memory cost for SNNs using compression techniques. Neural pruning is one of the effective methods for SNN compression. Several works [34, 35] have proposed a post-training pruning technique using a threshold value. Unsupervised online adaptive weight pruning [36] dynamically prunes trivial weights over time. Shi _et al._[37] prune weight connections during training with a soft mask. Recently, deeper SNNs are pruned with ADMM optimization tool [38], gradient-based rewiring [39], and lottery ticket hypothesis [40]. Meanwhile, various quantization techniques also have been proposed to compress SNNs [41, 42, 43, 44]. Schaefer and Joshi [45] propose integer fixed-point representations for neural dynamics, weights, loss, and gradients. The recent work [46] performs quantization through temporal dimension for low-latency SNNs. Lui and Neftci propose a quantization technique based on the Hessian of weights [47]. Nonetheless, no prior work has explicitly addressed the memory overhead caused by LIF neurons. Our method effectively reduces memory overhead by modifying the architecture, and is orthogonal to previous methods. Thus, combining EfficientLIF-Net with compression techniques will further compound the benefits.
## III Preliminaries
### _Leaky Integrate-and-Fire Neuron_
In our paper, we mainly address the memory cost of the Leaky-Integrate-and-Fire (LIF) neuron, which is widely adopted in SNN works [22, 25, 26, 27, 28, 24, 25, 28]. Suppose the LIF neurons in the \(l\)-th layer have membrane potential \(U_{l}^{t}\) at timestep \(t\); then we can formulate the LIF neuron dynamics as:
\[U_{l}^{t}=\lambda U_{l}^{t-1}+W_{l}O_{l-1}^{t}, \tag{1}\]
where \(W_{l}\) is weight parameters in layer \(l\), \(O_{l-1}^{t}\) represents the spikes from the previous layer, \(\lambda\) is a decaying factor in the membrane potential. Note, we use uppercase letters for matrix notation. The LIF neuron generates an output spike \(O_{l}^{t}\) when the membrane potential exceeds the firing threshold \(\theta\). Here, we define the spike firing function as:
\[f(U_{l}^{t})=O_{l}^{t}=\begin{cases}1&\text{if }U_{l}^{t}>\theta\\ 0&\text{otherwise}\end{cases}. \tag{2}\]
After firing, the membrane potential can be reset to zero (_i.e._, hard reset), or reduced by the threshold value (_i.e._, soft reset). Thus, an LIF neuron always stores the membrane potential to capture the temporal information of spikes. The memory cost for LIF neurons is proportional to the input image dimension, which poses huge memory overhead for high-resolution data such as ImageNet [49].
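For concreteness, a minimal sketch of the LIF update in Eq. (1)-(2) is given below; the decay value, threshold, and tensor shapes are illustrative, and both reset variants are included.

```python
import torch

def lif_step(u, x, threshold=1.0, decay=0.5, soft_reset=True):
    """One LIF update (Eq. 1-2): decay the membrane potential,
    integrate the weighted input spikes, fire, then reset."""
    u = decay * u + x                  # U^t = lambda * U^{t-1} + W O^{t-1}
    spikes = (u > threshold).float()   # O^t = 1 if U^t > theta else 0
    if soft_reset:
        u = u - threshold * spikes     # subtract the threshold (soft reset)
    else:
        u = u * (1.0 - spikes)         # zero out fired neurons (hard reset)
    return u, spikes
```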
### _Gradient Backpropagation in SNNs_
For the class probability prediction, we accumulate the final-layer activation across all timesteps, followed by the Softmax function. We apply cross-entropy loss \(L\) for training
the weight parameters. The backward gradients are calculated along both the spatial and temporal axes [23, 3] according to the chain rule:
\[\frac{\partial L}{\partial W_{l}}=\sum_{t}(\frac{\partial L}{\partial O_{l}^{t}} \frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}+\frac{\partial L}{\partial U_{l}^ {t+1}}\frac{\partial U_{l}^{t+1}}{\partial U_{l}^{t}})\frac{\partial U_{l}^{t} }{\partial W_{l}}. \tag{3}\]
Here, the firing function is non-differentiable, so the gradient of the output spikes with respect to the membrane potential, \(\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}\), is ill-defined. Following previous work [6], we use \(arctan()\) to approximate gradients, _i.e._, we use the approximate function \(f(x)=\frac{1}{\pi}arctan(\pi x)+\frac{1}{2}\) for computing the gradient \(\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}\). The overall computational graph is illustrated in Fig. 3(a).
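The surrogate-gradient trick can be implemented as a custom autograd function: the forward pass keeps the exact Heaviside firing of Eq. (2), while the backward pass uses the derivative of \(f(x)=\frac{1}{\pi}arctan(\pi x)+\frac{1}{2}\), which is \(1/(1+(\pi x)^{2})\). The sketch below is a minimal PyTorch rendering of this idea.

```python
import math
import torch

class ArctanSpike(torch.autograd.Function):
    """Heaviside firing in the forward pass; arctan surrogate gradient
    in the backward pass. Call as ArctanSpike.apply(u - theta)."""

    @staticmethod
    def forward(ctx, u_minus_theta):
        ctx.save_for_backward(u_minus_theta)
        return (u_minus_theta > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx [(1/pi) * arctan(pi x) + 1/2] = 1 / (1 + (pi x)^2)
        surrogate = 1.0 / (1.0 + (math.pi * x) ** 2)
        return grad_output * surrogate
```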
## IV EfficientLIF-Net
In this section, we first describe the details of how we reduce the memory cost of LIF neurons across layers and channels. The overall concept of EfficientLIF-Net is illustrated in Fig. 2. After that, we provide the analysis of the backward gradient in EfficientLIF-Net for training, which shows our EfficientLIF-Net successfully considers the entire time horizon. Finally, we show the memory advantage of our EfficientLIF-Net during backpropagation.
### _Sharing Memory of LIF neurons_
**Cross-layer Sharing.** The key idea here is sharing the LIF neurons across different layers where they have the same output activation size. Thus, LIF neurons are shared across multiple subsequent layers before the layer increases channel size or reduces spatial resolution. Such architecture design can be easily observed in CNN architectures such as ResNet [12].
Let's assume the networks have the same activation size from the \(l\)-th layer to the \((l+m)\)-th layer. The membrane potential of the \((l+1)\)-th layer is calculated by adding the previous layer's membrane potential and weighted spike output from the previous layer:
\[U_{l+1}^{t}=\lambda(U_{l}^{t}-O_{l}^{t})+W_{l+1}O_{l}^{t}. \tag{4}\]
Here the previous layer's membrane potential \(U_{l}^{t}\) decreases its value by the threshold for soft reset (firing threshold is set to \(1\)) after it generates spikes \(O_{l}^{t}\). After that, decay factor \(\lambda\) is applied to the previous layer's membrane potential, since we aim to dilute the previous layers' information as networks go deeper. The layer \((l+1)\) generates output spike following Eq. 2:
\[O_{l+1}^{t}=f(U_{l+1}^{t}). \tag{5}\]
In the same timestep, the spike information goes through all layers (from \(l\)-th layer to \(l+m\)-th layer) with Eq. 4 and Eq. 5 dynamics. Then, the membrane potential of layer \(l+m\) is shared with layer \(l\) at the next timestep (purple arrow in Fig. 3(b)).
\[U_{l}^{t+1}=\lambda(U_{l+m}^{t}-O_{l+m}^{t})+W_{l}O_{l-1}^{t+1}, \tag{6}\]
where the soft reset and decaying is applied to \(U_{l+m}^{t}\), and the weighted input comes from layer \(l-1\).
Overall, we require only one-layer LIF memory for layer \(l\sim\text{layer }(l+m)\) computation, which is shared across all layers and timesteps. Thus, LIF memory of layers \(l\sim(l+m)\) can be reduced by \(\frac{1}{m}\). The overall computational graph is illustrated in Fig. 3(b).
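A minimal sketch of cross-layer sharing is given below: a single membrane potential tensor `u` is carried through all \(m\) layers of a sharing block within a timestep (Eq. 4-5) and then across timesteps (Eq. 6). Class and argument names are ours; for brevity the sketch uses exact Heaviside firing (training would substitute the surrogate function above), and it assumes all layers produce the same activation size, as required.

```python
import torch
import torch.nn as nn

class CrossLayerSharedLIFBlock(nn.Module):
    """Sketch of cross-layer LIF sharing (Eq. 4-6): all m layers with the
    same activation size reuse a single membrane potential tensor `u`."""

    def __init__(self, layers, threshold=1.0, decay=0.5):
        super().__init__()
        self.layers = nn.ModuleList(layers)   # m layers, same output size
        self.threshold, self.decay = threshold, decay

    def forward(self, spike_seq):              # spike_seq: [T, B, C, H, W]
        u, outputs = None, []
        for x_t in spike_seq:                   # loop over timesteps
            o = x_t
            for layer in self.layers:           # loop over shared layers
                x = layer(o)
                if u is None:
                    u = torch.zeros_like(x)
                u = self.decay * u + x          # Eq. 4/6: decay + integrate
                o = (u > self.threshold).float()
                u = u - self.threshold * o      # soft reset after firing
            outputs.append(o)                   # last layer's u carries to t+1
        return torch.stack(outputs)
```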
**Cross-channel Sharing.** We also explore the neuron sharing scheme in the channel dimension. Let \(X_{l}\) be the weighted input spike, _i.e._, \(X_{l}=W_{l}O_{l-1}^{t}\), then we first divide the weighted input spike tensor into \(N\) groups in channel axis.
\[X_{l}^{t}\rightarrow[X_{l}^{t,(1)},X_{l}^{t,(2)},...,X_{l}^{t,(N)}]. \tag{7}\]
Suppose \(X_{l}^{t}\in\mathbb{R}^{C\times H\times W}\), then the spike of each group can be represented as \(X_{l}^{t,(i)}\in\mathbb{R}^{\frac{C}{N}\times H\times W}\), \(i\in\{1,2,...,N\}\). Then, the LIF neurons can be sequentially shared across different groups (_i.e._, different channels) of weighted input spike. The membrane potential of \((i+1)\)-th group at layer \(l\) can be formulated as:
\[U_{l}^{t,(i+1)}=\lambda(U_{l}^{t,(i)}-O_{l}^{t,(i)})+X_{l}^{t,(i+1)}, \tag{8}\]
where \(U_{l}^{t,(i)}\) is the membrane potential of the previous group, and \(X_{l}^{t,(i+1)}\) is the incoming weighted spike input of the \((i+1)\)-th group from the previous layer. Here, the soft reset and decay are also applied. The output spikes of each group are
Fig. 2: Illustration of the proposed EfficientLIF-Net. (a) Conventional SNNs where each layer and channel has separate LIF neurons. (b)-(d) is our proposed EfficientLIF-Net which shares LIF neurons across layer, channel, and layer & channel.
generated by standard firing dynamics (Eq. 2):
\[O_{l}^{t,(i)}=f(U_{l}^{t,(i)}). \tag{9}\]
We concatenate the output spikes of each groups through channels in order to compute the output at timestep \(t\):
\[O_{l}^{t}=[O_{l}^{t,(1)},O_{l}^{t,(2)},...,O_{l}^{t,(N)}]. \tag{10}\]
After completing the LIF sharing in timestep \(t\), we share the last group's (_i.e_., group \(N\)) membrane potential to the first group in the next timestep \(t+1\).
\[U_{l}^{t+1,(1)}=\lambda(U_{l}^{t,(N)}-O_{l}^{t,(N)})+X_{l}^{t+1,(1)}. \tag{11}\]
By using cross-channel sharing, the memory cost for LIF neuron of one layer can be reduced by \(\frac{1}{N}\), where \(N\) is the number of groups. Thus, memory-efficiency will increase as we use larger group number.
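The cross-channel variant admits an equally compact sketch: one membrane potential of size \(C/N\) is reused sequentially over the \(N\) channel groups (Eq. 8-10), and the last group's potential is returned so the caller can carry it to the next timestep (Eq. 11). Names and the decay value are illustrative.

```python
import torch

def cross_channel_shared_lif(x, u, n_groups, threshold=1.0, decay=0.5):
    """Sketch of cross-channel LIF sharing (Eq. 7-11).
    x: weighted input spikes [B, C, H, W];
    u: shared membrane potential [B, C // n_groups, H, W], initialized to
       zeros by the caller at t=0 and carried across timesteps afterwards."""
    outputs = []
    for x_g in torch.chunk(x, n_groups, dim=1):  # groups X^{t,(1..N)}
        u = decay * u + x_g                       # Eq. 8 (u was soft-reset below)
        o = (u > threshold).float()               # Eq. 9
        u = u - threshold * o                     # soft reset
        outputs.append(o)
    return torch.cat(outputs, dim=1), u           # Eq. 10; carry u to t+1 (Eq. 11)
```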
**Cross-layer&channel Sharing.** The cross-layer and cross-channel sharing methods are complementary to each other, therefore they can be used together to bring further memory efficiency. The LIF neurons are shared across channels and layers as shown in Fig. 2(d). The neuron-sharing mechanism can be obtained by combining cross-layer and cross-channel sharing methods.
Let's assume the networks have the same activation size from the \(l\)-th layer to the \((l+m)\)-th layer. The sharing mechanism in one layer is same as channel sharing. Let \(X_{l}\) be the weighted input spike, _i.e_., \(X_{l}=W_{l}O_{l-1}^{t}\), then we first divide the weighted input spike tensor into \(N\) groups in channel axis.
\[X_{l}^{t}\rightarrow[X_{l}^{t,(1)},X_{l}^{t,(2)},...,X_{l}^{t,(N)}]. \tag{12}\]
Suppose \(X_{l}^{t}\in\mathbb{R}^{C\times H\times W}\), then the spike of each group can be represented as \(X_{l}^{t,(i)}\in\mathbb{R}^{\frac{C}{N}\times H\times W}\), \(i\in\{1,2,...,N\}\). Then, the LIF neurons can be sequentially shared across different groups (_i.e_., different channels) of weighted input spike. The membrane potential of \((i+1)\)-th group at layer \(l\) can be formulated as:
\[U_{l}^{t,(i+1)}=\lambda(U_{l}^{t,(i)}-O_{l}^{t,(i)})+X_{l}^{t,(i+1)}, \tag{13}\]
where \(U_{l}^{t,(i)}\) is the membrane potential of the previous group, and \(X_{l}^{t,(i+1)}\) is the incoming weighted spike input of the \((i+1)\)-th group from the previous layer. Here, the soft reset and decay are also applied. The output spikes of each group are generated by standard firing dynamics:
\[O_{l}^{t,(i)}=f(U_{l}^{t,(i)}). \tag{14}\]
We concatenate the output spikes of each group through channels in order to compute the output at timestep \(t\):
\[O_{l}^{t}=[O_{l}^{t,(1)},O_{l}^{t,(2)},...,O_{l}^{t,(N)}]. \tag{15}\]
After completing the LIF sharing at layer \(l\), we share the last group's (_i.e_., group \(N\)) membrane potential of \(l\)-th layer to the first group of \(l+1\)-th layer.
\[U_{l+1}^{t,(1)}=\lambda(U_{l}^{t,(N)}-O_{l}^{t,(N)})+X_{l+1}^{t,(1)}. \tag{16}\]
Within the same timestep, the spike information passes through all layers (from the \(l\)-th to the \((l+m)\)-th layer) following these dynamics. Then, the last group's (_i.e._, group \(N\)) membrane potential at layer \(l+m\) is shared with the first group of layer \(l\) at the next timestep.
\[U_{l}^{t+1,(1)}=\lambda(U_{l+m}^{t,(N)}-O_{l+m}^{t,(N)})+X_{l}^{t+1,(1)}. \tag{17}\]
By using cross-layer&channel sharing, the memory cost of LIF neurons for computing layers \(l\sim(l+m)\) can be reduced by \(\frac{1}{mN}\), where \(N\) is the number of groups. Our experimental results show that although we combine the two sharing methods, we still achieve iso-accuracy with standard SNNs.
### _Gradient Analysis_
Sharing LIF neurons leads to different gradient paths compared to standard SNNs. Therefore, we provide the gradient analysis for EfficientLIF-Net.
**Gradient of Cross-layer Sharing.** Suppose that we compute the gradients for \(m\) subsequent layers that have the same activation size. For simplicity, we refer to these \(m\) subsequent layers as a "sharing block". The unrolled computational graph is illustrated in Fig. 3(b).
Fig. 3: Illustration of a unrolled computational graph for the backpropagation. Black solid arrows and gray dotted arrows represent forward and backward paths, respectively. For simplicity, we omit the reset path from the spike output.
For the intermediate layers of the sharing block, the gradients flow back from the next layer (marked as 1 in Fig. 3(b)), which can be formulated as:
\[\frac{\partial L}{\partial W_{l}}=\sum_{t}\left(\frac{\partial L}{\partial O_{l}^ {t}}\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}+\frac{\partial L}{\partial U _{l+1}^{t}}\frac{\partial U_{l+1}^{t}}{\partial U_{l}^{t}}\right)\frac{\partial U _{l}^{t}}{\partial W_{l}}, \tag{18}\]
where both terms are derived by the forward dynamics in Eq. 4. For the final layer of the sharing block, the gradients flow back through both layer and temporal axis:
\[\frac{\partial L}{\partial W_{l+m}}\!\!=\!\!\sum_{t}\left(\frac{\partial L}{ \partial O_{l+m}^{t}}\frac{\partial O_{l+m}^{t}}{\partial U_{l+m}^{t}}+\frac{ \partial L}{\partial U_{l}^{t+1}}\frac{\partial U_{l}^{t+1}}{\partial U_{l+m} ^{t}}\right)\frac{\partial U_{l+m}^{t}}{\partial W_{l+m}}. \tag{19}\]
The first term shows the gradient from the next layer (marked as 2 in Fig. 3(b)), and the second term is from the first layer of the sharing block at the next timestep (marked as 3 in Fig. 3(b)). The last layer of the sharing block obtains the gradients from the next timestep (marked as 3), which are then propagated through the intermediate layers. This allows the weight parameters to be trained with temporal information, achieving performance similar to the standard SNN architecture.
**Gradient of Cross-channel Sharing.** Assume that we divide the channel into \(N\) groups. We define an index set \(G=\{1,2,...,N\}\). Then, the gradients of weight parameters in layer \(l\) can be computed as:
\[\frac{\partial L}{\partial W_{l}} =\sum_{t}\sum_{i\in G}\frac{\partial L}{\partial O_{l}^{t,(i)}} \frac{\partial O_{l}^{t,(i)}}{\partial U_{l}^{t,(i)}}\frac{\partial U_{l}^{t, (i)}}{\partial X_{l}^{t}}\frac{\partial X_{l}^{t}}{\partial W_{l}} \tag{20}\] \[+\sum_{t}\sum_{i\in G\setminus\{N\}}\frac{\partial L}{\partial U _{l}^{t,(i+1)}}\frac{\partial U_{l}^{t,(i+1)}}{\partial U_{l}^{t,(i)}}\frac{ \partial U_{l}^{t,(i)}}{\partial X_{l}^{t}}\frac{\partial X_{l}^{t}}{\partial W _{l}}\] \[+\sum_{t}\frac{\partial L}{\partial U_{l}^{t+1,(1)}}\frac{ \partial U_{l}^{t+1,(1)}}{\partial U_{l}^{t,(N)}}\frac{\partial U_{l}^{t,(N)} }{\partial X_{l}^{t}}\frac{\partial X_{l}^{t}}{\partial W_{l}}.\]
The first term represents the gradient from the next layer (marked as 1 in Fig. 3(c)). The second term is the gradient from the next group's membrane potential, for all groups except the last (marked as 2 in Fig. 3(c)). The last term represents the gradient from the first group at the next timestep (marked as 3 in Fig. 3(c)). Thus, the gradients propagate through both the temporal and spatial dimensions, training the weight parameters to consider temporal information.
### _Memory-Efficient Backpropagation_
In addition to the memory efficiency in forward propagation, our EfficientLIF-Net saves memory costs during backward gradient computation. As shown in Fig. 4(a), standard SNNs need to store all membrane potentials to compute gradients such as \(\frac{\partial U_{l}^{t+1}}{\partial U_{l}^{t}}\) in Eq. 3. However, saving the full-precision membrane potentials of LIF neurons is costly.
**Backpropagation in Cross-layer Sharing.** The key idea here is that the membrane potential of the previous layer can be computed from the next layer's membrane potential in a reverse way (Fig. 4(b)). Thus, without storing the membrane potential of the intermediate layers during forward, we can compute the backward gradient. By reorganizing Eq. 4 and Eq. 6, we obtain the membrane potential of the previous layer or the previous timestep.
\[\left\{\begin{array}{ll}U_{l}^{t}=\frac{1}{\lambda}(U_{l+1}^{t}-W_{l+1}O_{l}^{t})+O_{l}^{t}&\text{(from Eq. 4)}\\ U_{l+m}^{t}=\frac{1}{\lambda}(U_{l}^{t+1}-W_{l}O_{l-1}^{t+1})+O_{l+m}^{t}&\text{(from Eq. 6)}\end{array}\right. \tag{21}\]
Based on this, we can compute \(\frac{\partial U_{l}^{t+1}}{\partial U_{l}^{t}}\) in Eq. 18, and \(\frac{\partial U_{l}^{t+1}}{\partial U_{l+m}^{t}}\) in Eq. 19, without storing the intermediate membrane potential.
**Backpropagation in Cross-channel Sharing.** In a similar way, we can also reduce memory cost through channel dimension by performing a reverse computation on the membrane potential of channel groups (Fig. 4(c)). Instead of storing a memory for all channels, we use a partial memory for storing the membrane potential of the last group channel of each layer. From Eq. 8 and Eq. 11, we calculate the membrane potential of the previous channel group or the previous timestep.
\[\left\{\begin{array}{ll}U_{l}^{t,(i)}=\frac{1}{\lambda}(U_{l}^{t,(i+1)}-X_{l}^{t,(i+1)})+O_{l}^{t,(i)}&\text{(from Eq. 8)}\\ U_{l}^{t,(N)}=\frac{1}{\lambda}(U_{l}^{t+1,(1)}-X_{l}^{t+1,(1)})+O_{l}^{t,(N)}&\text{(from Eq. 11)}\end{array}\right. \tag{22}\]
This reverse computation allows us to compute \(\frac{\partial U_{l}^{t,(i+1)}}{\partial U_{l}^{t,(i)}}\) and \(\frac{\partial U_{l}^{t+1,(i)}}{\partial U_{l}^{t,(N)}}\) in Eq. 20, without storing the intermediate membrane potential.
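In code, the reverse computation of Eq. (21) amounts to a one-line inversion of the forward update: only the binary spikes (1 bit each) and the last layer's potential need to be kept, and the weighted input \(W_{l+1}O_{l}^{t}\) is recomputed on the fly. The sketch below uses our own names and assumes a firing threshold of 1, as in the text.

```python
def recover_prev_membrane(u_next, weighted_input, o_l, decay=0.5):
    """Eq. (21): U_l^t = (1/lambda) * (U_{l+1}^t - W_{l+1} O_l^t) + O_l^t.
    u_next: next layer's membrane potential U_{l+1}^t;
    weighted_input: recomputed W_{l+1} O_l^t;
    o_l: cached binary spikes of layer l (threshold assumed to be 1)."""
    return (u_next - weighted_input) / decay + o_l
```

The cross-channel case of Eq. (22) follows the same pattern, with the next channel group's potential and its weighted input in place of the next layer's.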
### _Hardware Discussion_
In this section, we aim to provide insights into the role that efficientLIF-Net will play during the hardware deployment.
**Cross-layer Sharing.** One of the major contributions that cross-layer sharing in EfficientLIF-Net can make to hardware is the reduction of memory communication. When deploying an SNN on hardware, one can choose to either first process through all layers and then repeat for all timesteps (standard), or first process through all timesteps and then proceed to the next layer (tick-batch [50]). While tick-batch can help reduce the number of memory communications across timesteps, it requires more hardware resources.
Fig. 4: Memory-efficient backpropagation. Compared to baseline, we do not need to store an intermediate membrane potential for backpropagation. Instead, we perform a reverse computation on the membrane potential from the next layers/channels.
On the other hand, with a proper processing pipeline across layers, the standard way of processing SNNs requires fewer hardware resources and achieves larger throughput. Cross-layer sharing can further reduce the memory communication overhead of standard SNN processing.
As we show in Fig. 5(a), instead of writing the membrane potential to the memory for every layer and timestep, layer-sharing EfficientLIF-Net requires only one time of writing to memory for each shared layer for each timestep.
**Cross-channel Sharing.** Due to the high level of parallelism and data reuse in these designs, we focus on examining the effects of cross-channel sharing in EfficientLIF-Net on ASIC systolic-array-based inference accelerators for SNNs [15, 50, 51].
The key idea behind this group of designs is to broadcast input spikes and weights to an array of processing elements (PEs), where accumulators perform convolution operations. Each post-synaptic neuron's entire convolution operation is mapped to one dedicated PE. Once the convolution results are ready, they are sent to the LIF units inside the PE to generate the output spikes.

**VGG16**

| Dataset | Methods | Acc (%) | LIF Forward Memory (MB) | LIF Backward Memory (MB) |
| --- | --- | --- | --- | --- |
| CIFAR10 | Baseline | 91.31 | 1.80 | 9.0 |
| | EfficientLIF-Net [L] | 90.23 | 1.23 | 1.23 |
| | EfficientLIF-Net [C#2] | 90.30 | 0.90 | 0.90 |
| | EfficientLIF-Net [L+C#2] | 90.09 | 0.62 | 0.62 |
| CIFAR100 | Baseline | 66.83 | 1.80 | 9.0 |
| | EfficientLIF-Net [L] | 65.01 | 1.23 | 1.23 |
| | EfficientLIF-Net [C#2] | 64.92 | 0.90 | 0.90 |
| | EfficientLIF-Net [L+C#2] | 64.85 | 0.62 | 0.62 |
| TinyImageNet | Baseline | 56.11 | 7.22 | 36.1 |
| | EfficientLIF-Net [L] | 55.14 | 4.91 | 4.91 |
| | EfficientLIF-Net [C#2] | 55.43 | 3.61 | 3.61 |
| | EfficientLIF-Net [L+C#2] | 55.36 | 2.46 | 2.46 |
| ImageNet-100 | Baseline | 73.81 | 88.43 | 442.15 |
| | EfficientLIF-Net [L] | 73.22 | 60.10 | 60.10 |
| | EfficientLIF-Net [C#2] | 72.65 | 44.21 | 44.21 |
| | EfficientLIF-Net [L+C#2] | 72.14 | 30.05 | 30.05 |
| N-Caltech101 | Baseline | 64.40 | 4.06 | 40.6 |
| | EfficientLIF-Net [L] | 63.50 | 2.76 | 2.76 |
| | EfficientLIF-Net [C#2] | 64.02 | 2.03 | 2.03 |
| | EfficientLIF-Net [L+C#2] | 63.10 | 1.38 | 1.38 |

**ResNet19**

| Dataset | Methods | Acc (%) | LIF Forward Memory (MB) | LIF Backward Memory (MB) |
| --- | --- | --- | --- | --- |
| CIFAR10 | Baseline | 92.26 | 2.88 | 14.40 |
| | EfficientLIF-Net [L] | 91.99 | 1.31 | 1.31 |
| | EfficientLIF-Net [C#2] | 91.92 | 1.44 | 1.44 |
| | EfficientLIF-Net [L+C#2] | 91.73 | 0.66 | 0.66 |
| CIFAR100 | Baseline | 70.89 | 2.88 | 14.40 |
| | EfficientLIF-Net [L] | 70.14 | 1.31 | 1.31 |
| | EfficientLIF-Net [C#2] | 70.01 | 1.44 | 1.44 |
| | EfficientLIF-Net [L+C#2] | 69.99 | 0.66 | 0.66 |
| TinyImageNet | Baseline | 56.74 | 11.5 | 57.5 |
| | EfficientLIF-Net [L] | 55.20 | 5.25 | 5.25 |
| | EfficientLIF-Net [C#2] | 55.44 | 5.75 | 5.75 |
| | EfficientLIF-Net [L+C#2] | 55.10 | 2.63 | 2.63 |
| ImageNet-100 | Baseline | 79.38 | 140.88 | 704.4 |
| | EfficientLIF-Net [L] | 79.44 | 64.31 | 64.31 |
| | EfficientLIF-Net [C#2] | 78.92 | 70.44 | 70.44 |
| | EfficientLIF-Net [L+C#2] | 78.88 | 32.16 | 32.16 |
| N-Caltech101 | Baseline | 66.27 | 6.47 | 64.7 |
| | EfficientLIF-Net [L] | 65.82 | 2.95 | 2.95 |
| | EfficientLIF-Net [C#2] | 66.01 | 3.24 | 3.24 |
| | EfficientLIF-Net [L+C#2] | 65.45 | 1.48 | 1.48 |

TABLE I: Accuracy and LIF memory cost (forward & backward) comparison between the baseline (_i.e._, standard SNN) and our EfficientLIF-Net. Here, _EfficientLIF-Net [L]_, _EfficientLIF-Net [C#2]_, and _EfficientLIF-Net [L+C#2]_ denote EfficientLIF-Net with cross-layer, cross-channel (#groups = 2), and cross-layer&channel sharing, respectively.
LIF units are notorious for their high hardware overhead. This is because at least one buffer is needed to hold the full-precision membrane potential of each neuron, and these buffers heavily contribute to the hardware cost of LIF units. Prior designs [15, 50, 51] equip each of the 128 PEs with an LIF unit to match the design's throughput requirements, and these LIF units contribute significantly to the cost of the entire PE array. Even if the number of LIF units is reduced, there is no way to reduce the number of buffers required to hold a unique membrane potential for each LIF neuron.
Based on this design problem, we can readily see one role that cross-channel sharing in EfficientLIF-Net plays on these hardware platforms. Depending on the number of cross-channel-shared LIF neurons, we obtain the same ratio of LIF-unit and buffer reduction at the hardware level, as shown in Fig. 5(b). For example, with C#4 sharing, the 128 LIF units in [15, 50, 51] can be reduced to 32. The shared LIF units bring longer latency as a trade-off: with C#4, one cycle was originally needed to generate spikes from 128 post-synaptic neurons for one timestep, whereas now 4 cycles are needed. However, the major portion of latency still lies in the convolution and memory operations, which typically take hundreds of times more cycles than generating spikes through LIF units. We provide experimental results in Section V-C to further illustrate the effects of EfficientLIF-Net on hardware.
## V Experiments
### _Implementation Details_
We evaluate our method on four static image datasets (_i.e._, CIFAR10 [52], CIFAR100 [52], TinyImageNet [49], ImageNet-100 [49]) and one spiking dataset (_i.e._, N-Caltech101 [53]). Here, ImageNet-100 is a subset of the ImageNet-1k dataset [49]. We use VGG16 [54] and ResNet19 [12]. For both architectures, we use the scaled-up channel size following previous SNN works [55, 56]. We train the SNNs with 128 batch samples using the SGD optimizer with momentum 0.9 and weight decay 5e-4. The initial learning rate is set to 0.1 and decayed with cosine learning rate scheduling [57]. We set the total number of epochs to 300 for CIFAR10, CIFAR100, and N-Caltech101, and 140 for TinyImageNet and ImageNet-100. We use \(T=5\) timesteps across all experiments.
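For reference, the optimizer and schedule described above map directly onto standard PyTorch components; the sketch below is illustrative, with a placeholder model standing in for the actual SNN.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)   # placeholder for the SNN under training
num_epochs = 300            # 300 for CIFAR10/100 and N-Caltech101, 140 otherwise

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                       T_max=num_epochs)
```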
### _Performance Comparison_
Across all experiment sections, _EfficientLIF-Net [L]_ denotes the cross-layer sharing scheme, and _EfficientLIF-Net [C#N]_ stands for the cross-channel sharing scheme with \(N\) channel groups. _EfficientLIF-Net [L+C#N]_ denotes the cross-layer & channel sharing method. In Table I, we show the memory benefit of EfficientLIF-Net. We assume a 32-bit representation for the membrane potential of LIF neurons. Regarding the backward LIF memory of the baseline, we consider the standard backpropagation method, which stores membrane potentials across all timesteps [13, 14, 15].
The experimental results show the following: (1) EfficientLIF-Net based on ResNet19 achieves performance similar to the baseline, which implies that the proposed membrane-sharing strategy can still learn temporal information in spikes. (2) EfficientLIF-Net can also be applied to the DVS dataset. (3) The ResNet19 EfficientLIF-Net suffers less performance degradation than VGG16, which implies that skip connections improve the training capability of EfficientLIF-Net. Furthermore, ResNet19 brings higher memory efficiency since it has more layers with similarly sized activations. (4) As expected, large-resolution image datasets benefit more than small-resolution ones. For instance, _EfficientLIF-Net [L+C#2]_ saves \(108.72\)MB and \(672.24\)MB for the forward and backward paths, respectively, on ImageNet-100, which consists of \(224\times 224\) resolution images; the same architecture saves \(2.22\)MB (forward) and \(13.74\)MB (backward) on CIFAR10.
### _Experimental Analysis_
**Analysis on Training Dynamics.** In our method section, we showed that the backward gradients of each method differ. To further analyze this, we investigate whether trained weight parameters are compatible across architectures. We expect weights transferred to a different architecture to show performance degradation, since each architecture has different training dynamics (_e.g._, gradient paths). To this end, we train the standard ResNet19-SNN (_i.e._, baseline), EfficientLIF-Net [L], EfficientLIF-Net [C#2], and EfficientLIF-Net [L+C#2]. In Fig. 6, we report the accuracy of various weight-architecture configurations on CIFAR10 and TinyImageNet. We observe the following: (1) As expected, transferring weights to a different architecture causes performance degradation. This supports our statement that each architecture has different training dynamics. (2) The baseline in particular shows a huge performance drop compared to other architectures. Thus, EfficientLIF-Net needs to be trained from scratch with gradient-based training.
Fig. 5: Visualization of the potential hardware mapping of the two sharing methods. We provide some hardware insights on the potential hardware benefits we can get from the EfficientLIF-Net.
(3) The trained weights from EfficientLIF-Net [L+C#2] show reasonable performance on EfficientLIF-Net [L] and EfficientLIF-Net [C], as they contain features from both cross-layer and cross-channel sharing.
**Ablation Studies on #Group.** In the cross-channel sharing scheme, we can further reduce the LIF memory cost by increasing _#group_. Table II shows the accuracy and LIF memory cost with respect to _#group_. Interestingly, EfficientLIF-Net with a high _#group_ almost maintains performance while significantly reducing the LIF memory cost. For example, on the ImageNet-100 dataset, EfficientLIF-Net [C#8] incurs only a \(0.8\%\) accuracy drop while saving \(75\%\) more memory. Thus, one can further reduce the LIF memory cost by increasing _#group_ depending on hardware requirements.
**Combining with Group Convolution.** To further enhance efficiency in cross-channel sharing, we explore the feasibility of combining group convolution layers with cross-channel sharing. Since group convolution splits the input and output channels into multiple groups, it can be applied to each channel-group spike (\(O_{l}^{t,(i)}\) in Eq. 10). In Table III, we observe that accuracy does not drop much with two convolution groups. However, as the number of groups increases, performance degrades drastically because fewer parameters are available for training convergence.
**Soft Reset vs. Hard Reset.** We also conduct experiments on the reset scheme in our EfficientLIF-Net. The membrane potential can be reset to zero (_i.e._, hard reset) or decreased by the threshold value (_i.e._, soft reset). In Table V, we compare the accuracy of both reset schemes on the ResNet19 architecture, where we observe that the hard reset achieves accuracy similar to the soft reset. However, the hard reset does not allow computing the previous layer's or timestep's membrane potential in a reverse way (Eq. 21 and Eq. 22) during backpropagation, because it removes the residual membrane potential that would be used in the reverse computation. Therefore, our EfficientLIF-Net is based on the soft reset, so that we obtain memory savings during both the forward and backward passes.
**Analysis on Spike Rate.** In Fig. 7, we compare the spike rate across all LIF sharing schemes in ResNet19 on four datasets. Note that a high spike rate implies that the network requires higher computational cost. The experimental results show that all LIF sharing schemes have a spike rate similar to the baseline. This demonstrates that EfficientLIF-Net does not add computational overhead while saving memory by sharing the membrane potential.
**Time Overhead Analysis.** We measure the time overhead on a V100 GPU with a batch size of 128. We use VGG16 on the CIFAR10 and ImageNet-100 datasets, with image sizes of 32×32 and 224×224, respectively. Table IV shows the latency results for each method. Interestingly, we find that our method improves computation time, implying that our LIF layer-sharing method reduces the time required to access DRAM, which originally takes a significant percentage of the computation time. As a result, our method can be implemented without a huge computational burden.
**Memory Cost Breakdown.** In Fig. 8, we compare the memory cost breakdown between the SNN baseline and EfficientLIF-Net in both the forward and backward passes. In this comparison, we consider memory for weight parameters (32-bit), spike activations (1-bit), and LIF neurons (32-bit). In the baseline SNN, LIF neurons take a dominant portion of both forward and backward memory cost. For the backward pass in particular, LIF neurons occupy around \(7\times\) more memory than weights or activation memory.
| Dataset | Methods | Acc (%) | LIF Memory for Fw & Bw (MB) |
| --- | --- | --- | --- |
| CIFAR10 | EfficientLIF-Net [C#2] | 91.92 | 1.44 |
| | EfficientLIF-Net [C#4] | 91.73 | 0.72 |
| | EfficientLIF-Net [C#8] | 91.21 | 0.36 |
| TinyImageNet | EfficientLIF-Net [C#2] | 55.44 | 5.75 |
| | EfficientLIF-Net [C#4] | 55.06 | 2.88 |
| | EfficientLIF-Net [C#8] | 54.84 | 1.44 |
| ImageNet-100 | EfficientLIF-Net [C#2] | 78.92 | 70.44 |
| | EfficientLIF-Net [C#4] | 78.24 | 35.22 |
| | EfficientLIF-Net [C#8] | 78.12 | 17.61 |

TABLE II: Ablation on the number of groups in cross-channel EfficientLIF-Net with the ResNet19 architecture.
| Dataset | Methods | #Conv. Groups | Acc (%) |
| --- | --- | --- | --- |
| CIFAR10 | EfficientLIF-Net [C#2] | 2 | 91.42 |
| | EfficientLIF-Net [C#4] | 4 | 90.45 |
| | EfficientLIF-Net [C#8] | 8 | 87.38 |
| CIFAR100 | EfficientLIF-Net [C#2] | 2 | 69.26 |
| | EfficientLIF-Net [C#4] | 4 | 66.42 |
| | EfficientLIF-Net [C#8] | 8 | 60.20 |
| TinyImageNet | EfficientLIF-Net [C#2] | 2 | 53.65 |
| | EfficientLIF-Net [C#4] | 4 | 51.39 |
| | EfficientLIF-Net [C#8] | 8 | 42.86 |

TABLE III: Performance of combining cross-channel sharing and group convolution on the ResNet19 architecture.
Fig. 6: Analysis on Training Dynamics. Unit: accuracy (%). We investigate whether the trained weight parameters can be compatible with other architectures.
Our EfficientLIF-Net significantly reduces the LIF memory cost, resulting in less memory overhead compared to weight parameters (in both forward and backward) and activations (in backward only).
**EfficientLIF-Net with Weight Pruning.** As pruning for SNNs is popular due to its usage on edge devices [34, 36, 37, 39, 40], it is important to determine whether the advantage of EfficientLIF-Net remains in sparse SNNs. Before exploring the effectiveness of LIF sharing in sparse SNNs, we first investigate whether LIF neurons still require substantial memory there: in the high-weight-sparsity regime (\(\geq 90\%\)), many LIF neurons might not generate output spikes, in which case the memory cost of such dead neurons could be eliminated. To this end, we prune the SNN model to varied sparsity using magnitude-based pruning [58]. Interestingly, as shown in Fig. 9 **Left**, only \(\sim 3\%\) of neurons do not generate spikes (_i.e._, dead neurons) across all sparsity levels. This implies that the LIF memory cost remains problematic in sparse SNNs. Based on this observation, we prune EfficientLIF-Net and compare the memory cost and accuracy with the standard SNN baseline. Here, we prune all architectures to \(94.94\%\) weight sparsity. In Fig. 9 **Right**, the baseline architecture requires 2.9 MB for LIF neurons, which is equivalent to \(\sim 60\%\) of the memory cost of the weight parameters. With cross-layer (denoted as \(L\) in Fig. 9) and cross-channel sharing (denoted as _C#2_ in Fig. 9), we can reduce the LIF memory cost by about half compared to the baseline. Cross-layer & channel sharing (denoted as _L+C#2_ in Fig. 9) further reduces the memory cost, taking only \(\sim 23\%\) of the baseline's LIF memory. Overall, the results demonstrate that LIF memory reduction matters not only for high-resolution images but also for relatively low-resolution images such as CIFAR10, especially for pruned SNNs.
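The magnitude-based pruning used in this experiment can be reproduced with PyTorch's pruning utilities. The sketch below prunes convolution weights globally by L1 magnitude to the 94.94% sparsity used above; the two-layer placeholder model stands in for the actual ResNet19 SNN.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 64, 3), nn.Conv2d(64, 64, 3))  # placeholder
params = [(m, "weight") for m in model.modules()
          if isinstance(m, nn.Conv2d)]
# Zero out the 94.94% smallest-magnitude conv weights across the whole model.
prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                          amount=0.9494)
```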
**Hardware Evaluation.** As discussed in Section IV-D, both cross-channel and cross-layer sharing can significantly enhance hardware efficiency during deployment. From the top portion of Fig. 10, it is evident that cross-channel sharing in EfficientLIF-Net can considerably decrease the number of required LIF units. Specifically, our approach reduces the area portion of LIF units from 61.6% to 28.6% of total computation units when employing C#4 cross-channel sharing.
The bottom part of Fig. 10 indicates that cross-layer sharing can effectively minimize the number of DRAM accesses, the most energy-consuming operation during on-chip SNN inference. For single-batch scenarios, the reduction is not significant, since weight data movement dominates DRAM accesses, as outlined in [15]. However, with mini-batches, the reduction becomes more substantial: we observe 23% and 25% reductions in total DRAM accesses on CIFAR10 and TinyImageNet, respectively, with 64 mini-batches. This reduction continues to grow with larger mini-batch sizes.
### _Evaluation on Human Activity Recognition (HAR) datasets_
To further validate our method on datasets that rely heavily on temporal information, we conduct experiments using Human Activity Recognition (HAR) datasets obtained from
| Dataset | Methods | Reset Scheme | Acc (%) |
| --- | --- | --- | --- |
| CIFAR10 | EfficientLIF-Net [L] | Soft | 91.99 |
| | EfficientLIF-Net [L] | Hard | 91.66 |
| | EfficientLIF-Net [C#2] | Soft | 91.92 |
| | EfficientLIF-Net [C#2] | Hard | 91.67 |
| | EfficientLIF-Net [L+C#2] | Soft | 91.73 |
| | EfficientLIF-Net [L+C#2] | Hard | 91.65 |
| CIFAR100 | EfficientLIF-Net [L] | Soft | 70.14 |
| | EfficientLIF-Net [L] | Hard | 70.05 |
| | EfficientLIF-Net [C#2] | Soft | 70.01 |
| | EfficientLIF-Net [C#2] | Hard | 68.93 |
| | EfficientLIF-Net [L+C#2] | Soft | 69.99 |
| | EfficientLIF-Net [L+C#2] | Hard | 69.74 |

TABLE V: Ablation on the reset methods.
Fig. 8: Comparison of the memory breakdown between the baseline SNN and the EfficientLIF-Net in both forward and backward. We use ResNet19 architecture on ImageNet-100.
Fig. 7: Spike rate analysis on four public datasets.
Fig. 9: Experiments on ResNet19 EfficientLIF-Net with weight pruning methods on CIFAR10. **Left:** Most LIF neurons generate output spikes although the weight sparsity increases. Therefore, the LIF memory cost cannot be reduced by weight pruning. **Right:** Accuracy and LIF memory cost comparison across baseline and EfficientLIF-Net. The weight memory cost across all models is \(\sim 5MB\) indicated with a grey dotted line.
wearable devices. Descriptions of these datasets are provided below:
* _UCI-HAR_[59] consists of 10.3k instances collected from 30 subjects, involving six different activities: walking, walking upstairs, walking downstairs, sitting, standing, and lying. The dataset employs sensors such as a 3-axis accelerometer and a 3-axis gyroscope (both at 50Hz) from a Samsung Galaxy SII.
* _HHAR_[60] is collected from nine subjects and encompasses six daily activities: biking, sitting, standing, walking, stair ascent, and stair descent. The dataset utilizes accelerometers from eight smartphones and four smartwatches (with sampling rates ranging from 50Hz to 200Hz).
Following previous work, we split both datasets into 64% for the training set, 16% for the validation set, and 20% for the test set. We report test accuracy when the model achieves its best validation accuracy.
In Table VI, we compare our method with the baseline model, which consists of six 1D-convolutional layers, \(i.e.\), \(Conv1D(InputChannel,128)-4\times Conv1D(128,128)\)-\(Conv1D(128,\#Class)\). In addition, we provide the performance of other methods [61, 62, 63] on HHAR and UCI-HAR: Aviles _et al._ [61] use a CNN, Mukherjee _et al._ [62] use a combination of CNN and LSTM, and Wang _et al._ [63] use an LSTM. From the table, we observe the following: (1) The baseline spiking model achieves an accuracy of 97.68% on the HHAR dataset and 96.06% on the UCI-HAR dataset, which is comparable to previous methods. (2) Comparing the different configurations of EfficientLIF-Net to the baseline, we can see that EfficientLIF-Net maintains a similar level of accuracy on both datasets. These results suggest that our LIF-sharing method also works well on tasks that heavily rely on temporal information. Overall, our empirical results support the observation that gradients propagate through both temporal and spatial dimensions, effectively training the weight parameters to account for temporal information, as shown in Eq. 18, 19, and 20.
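For clarity, a sketch of the six-layer 1D-convolutional baseline described above is given below. The kernel size, padding, and the omitted spiking activations are our assumptions; in the SNN variant each convolution would be followed by a (possibly shared) LIF neuron, and outputs would be accumulated over timesteps for classification.

```python
import torch.nn as nn

def har_baseline(in_channels, num_classes):
    """Sketch of the baseline: Conv1D(InputChannel,128) -
    4 x Conv1D(128,128) - Conv1D(128,#Class)."""
    layers = [nn.Conv1d(in_channels, 128, kernel_size=3, padding=1)]
    for _ in range(4):
        layers.append(nn.Conv1d(128, 128, kernel_size=3, padding=1))
    layers.append(nn.Conv1d(128, num_classes, kernel_size=3, padding=1))
    return nn.Sequential(*layers)
```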
## VI Conclusion
In this paper, we highlight and tackle the problem of LIF memory cost in SNNs. This problem becomes severe as the image resolution increases. To address this, we propose EfficientLIF-Net where we share the membrane potential across layers and channels, which can effectively reduce memory usage. During backpropagation, our EfficientLIF-Net also enables reverse computation on the previous layer and channel. Therefore, we only need to store the membrane potential of the last layer/channel during forward. In our experiments, EfficientLIF-Net achieves similar performance and computational cost while significantly reducing memory cost compared to standard SNN baseline. We also found that the LIF memory problem exists in sparse-weight SNNs where even a small resolution dataset causes LIF memory overhead. The memory benefit of EfficientLIF-Net is shown in pruned SNNs, which implies our method is complementary to previous compression methods.
## Acknowledgements
This work was supported in part by CoCoSys, a JUMP2.0 center sponsored by DARPA and SRC, Google Research Scholar Award, the National Science Foundation CAREER Award, TII (Abu Dhabi), the DARPA AI Exploration (AIE) program, and the DoE MMICC center SEA-CROGS (Award #DE-SC0023198).
|
2307.00430 | WaveMixSR: A Resource-efficient Neural Network for Image
Super-resolution | Image super-resolution research has recently been dominated by transformer models
which need higher computational resources than CNNs due to the quadratic
complexity of self-attention. We propose a new neural network -- WaveMixSR --
for image super-resolution based on WaveMix architecture which uses a
2D-discrete wavelet transform for spatial token-mixing. Unlike
transformer-based models, WaveMixSR does not unroll the image as a sequence of
pixels/patches. It uses the inductive bias of convolutions along with the
lossless token-mixing property of wavelet transform to achieve higher
performance while requiring fewer resources and training data. We compare the
performance of our network with other state-of-the-art methods for image
super-resolution. Our experiments show that WaveMixSR achieves competitive
performance in all datasets and reaches state-of-the-art performance in the
BSD100 dataset on multiple super-resolution tasks. Our model is able to achieve
this performance using less training data and computational resources while
maintaining high parameter efficiency compared to current state-of-the-art
models. | Pranav Jeevan, Akella Srinidhi, Pasunuri Prathiba, Amit Sethi | 2023-07-01T21:25:03Z | http://arxiv.org/abs/2307.00430v1 | # WaveMixSR: A Resource-efficient Neural Network for Image Super-resolution
###### Abstract
Image super-resolution research has recently been dominated by transformer models which need higher computational resources than CNNs due to the quadratic complexity of self-attention. We propose a new neural network - WaveMixSR - for image super-resolution based on the WaveMix architecture, which uses a 2D-discrete wavelet transform for spatial token-mixing. Unlike transformer-based models, WaveMixSR does not unroll the image as a sequence of pixels/patches. It uses the inductive bias of convolutions along with the lossless token-mixing property of the wavelet transform to achieve higher performance while requiring fewer resources and training data. We compare the performance of our network with other state-of-the-art methods for image super-resolution. Our experiments show that WaveMixSR achieves competitive performance in all datasets and reaches state-of-the-art performance in the BSD100 dataset on multiple super-resolution tasks. Our model is able to achieve this performance using less training data and computational resources while maintaining high parameter efficiency compared to current state-of-the-art models.
super-resolution single image token-mixer wavelet transform
## 1 Introduction
Single image super-resolution (SR) aims to enhance the resolution of single images, which makes it an ill-posed problem. This task has various applications in fields such as medical imaging, remote sensing, digital photography, and surveillance. Image SR is a challenging task due to the lack of information to completely reconstruct the high-resolution
Figure 1: Comparison of PSNR and SSIM for \(2\times\) SR on BSD100 dataset shows WaveMixSR surpasses the previous state-of-the-art methods such as HAT [1] and SwinFIR [2].
output. It is a low-level vision task along with other single image reconstruction tasks, such as image denoising and JPEG image artifact removal.
In recent years, various deep learning techniques have shown remarkable progress in image super-resolution and have achieved state-of-the-art performance in terms of both quantitative metrics and visual quality. During the initial application of deep learning to SR, convolutional neural network (CNN) based methods were used widely. Recently, after the widespread success of attention-based transformer [3] models in natural language processing tasks, they have also been used for computer vision tasks. Vision transformer and its variants have been outperforming CNNs in most of the vision tasks, including SR. Transformer-based models such as Swin transformers have been used in both high-level and low-level vision tasks. The recently developed SwinIR [4] and Hybrid Attention Transformer (HAT) [1] have outperformed the previous convolutional models in SR. The success of transformer-based models showed that long-range spatial token-mixing by self-attention can outperform convolutional networks in super-resolution tasks.
The application of transformers in computer vision has been limited by the quadratic complexity of self-attention with respect to the input sequence length. Images, when unrolled into long sequences, impose a large computational and memory burden on GPUs. Moreover, transformers do not have an inductive bias, unlike CNNs. Therefore, they require a large number of training images to generalize well compared to convolutional models. Vision transformers also need extra architectural changes, such as the inclusion of positional information through learned or fixed positional embeddings. This demand for a large amount of data and GPU resources is not suitable for resource-constrained scenarios where data and GPU are limited, such as in green or edge computing.
On the other hand, CNNs have inductive priors, such as translational equivariance due to convolutional weight sharing and partial scale invariance due to pooling, to handle 2D images, which enables them to learn from smaller datasets with less computational expenditure. But they fail to capture long-range dependencies compared to transformers and need several layers to increase their receptive fields.
Intuitively, SR is a low-level vision task that needs more local information than global context for generating HR images from LR images. The better performance of transformer models compared to CNN can perhaps be attributed to the token-mixing ability of self-attention rather than modeling long-range interactions.
Recently, new token-mixing models have emerged, replacing the self-attention in transformers with other mechanisms such as Fourier transforms, depth-wise convolutions, wavelet transforms, and spatial multi-layer perceptrons. These models have been shown to perform on par with self-attention-based transformer models without suffering from their large data and GPU requirements. For instance, the WaveMix [5] architecture, which uses a 2-dimensional discrete wavelet transform (2D DWT) for token-mixing, has been shown to perform well in high-level vision tasks such as image classification and semantic segmentation while consuming few computational resources. WaveMix uses self-similar WaveMix blocks to process images and is found to be versatile, as the same model can be used for different tasks by just changing the final output layer. The performance of 2D DWT token-mixing for image SR has not been tested.
We use the WaveMix blocks introduced in [5] to create a new token-mixing architecture for image SR. Equipped with spatial token-mixing using discrete wavelet transform and the inductive bias of CNNs, WaveMixSR can surpass the state-of-the-art methods in SR of the BSD100 dataset as shown in Table 1. The contributions in this paper are:
* We propose a new network that employs spatial token-mixing using 2D discrete wavelet transform for single image super-resolution.
* Our model is efficient in terms of computation resource utilization and parameter count. WaveMixSR consumes less than half the parameters and resources compared to other state-of-the-art models.
* WaveMixSR achieves state-of-the-art performance in multiple SR tasks on the BSD100 dataset. It achieves this performance without the need for large training data and pre-training procedures employed in other models.
## 2 Related Works
### CNN-based methods
SRCNN [6] was one of the first deep learning models developed for single image SR to directly learn an end-to-end mapping between the low and high-resolution images. Enhanced Deep Residual Networks [7] (EDSR) was an improvement over SRCNN which produced refined results by using residual blocks and removed unnecessary modules to stabilize training. Residual Channel Attention Networks [8] (RCAN) uses a channel attention mechanism that adaptively recalibrates channel-wise features by explicitly modeling inter-dependencies between channels. It also employs a residual in residual (RIR) structure to form deep networks that have long and short skip connections for better gradient flow. Holistic Attention Network [9] (HAN) enhances the representation of the image features using a layer
attention module (LAM) and a channel-spatial attention module (CSAM). LAM adaptively emphasizes hierarchical features by considering correlations among layers, while CSAM learns the confidence at all positions of each channel to selectively capture more informative features. CNN-based models remained state-of-the-art until transformer-based models began to be used for computer vision tasks with the advent of vision transformers [10].
### Transformer-based methods
After the success of transformers in natural language processing, they have been used for computer vision tasks with the advent of vision transformers [10]. They have been outperforming CNNs in classification, segmentation, and detection tasks, especially when large datasets and compute are available. This has led to the application of transformer-based models in low-level vision tasks such as SR. Image processing transformer [11] (IPT) introduced multi-task pre-training on a vision transformer model for low-level vision tasks but suffered from a large number of parameters (116 M) and high computational costs. Encoder-decoder-based transformer [12] (EDT) tried to overcome the large computational requirements of IPT by employing multi-related-task pre-training.
SwinIR [4] used the Swin transformer for deep feature extraction and outperformed CNN-based models in multiple image restoration tasks such as SR, denoising, and compression artifact reduction. SwinIR consists of shallow and deep feature extraction modules composed of Swin transformer layers and a high-quality image reconstruction module. SwinFIR [2] improves upon SwinIR by using Fast Fourier Convolution (FFC) components which improves the efficiency in capturing global information due to its image-wide receptive field. It also employed advanced data augmentations, pre-training, and feature ensemble to improve image reconstruction.
HAT [1] combined self-attention, channel attention, and overlapping cross-attention for giving the state-of-the-art performance in SR across various benchmark datasets.
### Wavelet-based methods
Recognizing that the wavelet transform provides an _approximation_ as well as _detail_ information of an image, deep wavelet super resolution [13] (DWSR) used a deep CNN to reconstruct the wavelet coefficients of LR images to obtain the corresponding SR outputs. They showed that wavelets provide an image representation that simplifies the mapping between LR and HR images, aiding in faster image reconstruction. [14] uses a CNN to estimate the wavelet detail coefficients of the desired high-resolution (HR) image from the given low-resolution (LR) image.
Wavelet-based super-resolution net [15] (Wavelet-SRNet) was developed for human face SR. It predicted the wavelet coefficients of HR images before reconstructing HR images from them. Different loss functions were also employed to capture both global topology information and local texture details of human faces.
Multi-level wavelet CNN [16] (MWCNN) was proposed to increase receptive field size and computational efficiency by replacing pooling with wavelet transform to reduce the resolution of feature maps. MWCNN for image restoration tasks was based on the U-Net architecture and used inverse wavelet transform (IWT) to reconstruct the high-resolution (HR) feature maps.
The invertibility of wavelet transforms ensures that none of the image information or features are lost during the down-sampling operation.
## 3 Architecture
As shown in Figure 2, the network consists of two paths for generating the HR image from the LR image. Feature map expansion of the input is done primarily by the non-parametric upsampling operations. The model focuses on sharpening the upsampled images by reconstructing the missing details.
### WaveMixSR
The LR input image in RGB space is converted to YCbCr space before being sent to the model. The YCbCr color space separates the image into three channels: Y (luma), Cb (blue-difference chroma), and Cr (red-difference chroma). The Y channel represents the luminance information of the image, while the Cb and Cr channels represent the chrominance information. The WaveMixSR model has two paths - one for handling the Y channel and another for the CbCr channels of the input image. Typically, in SR techniques only the Y channel is used for the path with parametric learning, because the Y channel contains most of the image details and is less affected by color changes. This approach has been shown to improve the performance of deep learning-based SR methods [6]. We follow the same idea in WaveMixSR, where
only the Y channel is sent to the path with WaveMix blocks to focus on improving the resolution of the LR image while ignoring the color changes.
For \(2\times\) SR, the first path takes the Y-channel feature map, \(\mathbf{y}\in\mathbb{R}^{H/2\times W/2\times 1}\), of the input image and sends it through a parameter-free upsampling layer which resizes the low-resolution Y-channel image to high resolution using bilinear or bicubic interpolation. For \(3\times\) and \(4\times\) SR, we use corresponding upsampling blocks that upsample the image to HR. The output of the upsampling block, \(\mathbf{y}\in\mathbb{R}^{H\times W\times 1}\), is sent to a convolutional layer to increase the number of feature maps from \(1\) to \(C\). The output feature maps of the convolutional layer, \(\mathbf{x}_{in}\in\mathbb{R}^{H\times W\times C}\), are sent to the WaveMix blocks. Four WaveMix blocks are connected in series in our model to create high-resolution feature maps. The output \(\mathbf{x}_{out}\in\mathbb{R}^{H\times W\times C}\) from the final WaveMix block is then passed through a convolutional layer which reduces the channel dimension \(C\) and returns a single-channel output \(\mathbf{y}_{out}\in\mathbb{R}^{H\times W\times 1}\).
Figure 2: Architecture of WaveMixSR along with the details of the WaveMix block. Architecture is shown for \(2\times\) super-resolution. For \(3\times\) and \(4\times\) super-resolution, we only modify the upsample blocks in the architecture keeping the other blocks intact.
The second parallel path in WaveMixSR takes the two channels of CbCr and simply passes it through an upsampling layer where the channel resolution is increased to the final resolution required. This HR CbCr channel is finally concatenated with the Y-channel output \(\textbf{y}_{out}\) from the first path, thereby creating the 3-channel YCbCr output. This output is then converted to an RGB color space to obtain the final high-resolution output image.
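A minimal PyTorch sketch of this two-path design is given below. It is not the authors' implementation: `WaveMixBlock` is assumed to follow the block equations of Section 3.2 (a sketch of it is given at the end of that subsection), the RGB-to-YCbCr conversions are left to standard utilities, and bilinear upsampling is chosen for both paths.

```python
import torch
import torch.nn as nn

class WaveMixSRSketch(nn.Module):
    """Two-path WaveMixSR: learned Y-channel path + parameter-free CbCr path."""
    def __init__(self, C: int = 144, num_blocks: int = 4, scale: int = 2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode='bilinear', align_corners=False)
        self.head = nn.Conv2d(1, C, 3, padding=1)   # 1 -> C feature maps
        self.body = nn.Sequential(*[WaveMixBlock(C) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(C, 1, 3, padding=1)   # C -> 1 channel

    def forward(self, ycbcr_lr: torch.Tensor) -> torch.Tensor:
        y, cbcr = ycbcr_lr[:, :1], ycbcr_lr[:, 1:]          # split Y and CbCr
        y_hr = self.tail(self.body(self.head(self.up(y))))  # learned path
        cbcr_hr = self.up(cbcr)                             # upsampling only
        return torch.cat([y_hr, cbcr_hr], dim=1)            # HR YCbCr output
```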
### WaveMix block
We use the WaveMix block with a single level of 2D-discrete wavelet transform [5]. As shown in Figure 2, the design of the WaveMix block is such that it does not collapse the spatial resolution of the feature maps, unlike CNN blocks that use pooling operations [17]. And yet, it reduces the number of computations required by reducing the spatial dimensions of the feature maps using 2D-DWT, which translates to a reduction in GPU RAM, training time, and inference time. However, unlike pooling or strided convolutions, a 2D-DWT is lossless as it expands the number of channels by the same factor by which it reduces spatial resolution. Furthermore, it has additional energy compaction (sparsification) properties that are not offered by random filters or Fourier basis.
Denoting input and output tensors of the WaveMix block by \(\textbf{x}_{in}\) and \(\textbf{x}_{out}\), respectively; the four wavelet filters along with their downsampling operations at each level by \(w_{aa},w_{ad},w_{da},w_{dd}\) (\(a\) for approximation, \(d\) for detail); convolution, multi-layer perceptron (MLP), transposed convolution (upconvolution), and batch normalization operations by \(c\), \(m\), \(t\), and \(b\), respectively; and their respective trainable parameter sets by \(\xi\), \(\theta\), \(\phi\), and \(\gamma\), respectively; concatenation along the channel dimension by \(\oplus\), and point-wise addition by \(+\), the operations inside a WaveMix block can be expressed using the following equations:
\[\textbf{x}_{0}=c(\textbf{x}_{in},\xi); \textbf{x}_{in}\in\mathbb{R}^{H\times W\times C},\textbf{x}_{0} \in\mathbb{R}^{H\times W\times C/4} \tag{1}\]
\[\textbf{x}=[w_{aa}(\textbf{x}_{0})\oplus w_{ad}(\textbf{x}_{0})\oplus w_{da}( \textbf{x}_{0})\oplus w_{dd}(\textbf{x}_{0})]; \textbf{x}\in\mathbb{R}^{H/2\times W/2\times 4C/4} \tag{2}\]
\[\tilde{\textbf{x}}=b(t(m(\textbf{x},\theta),\phi),\gamma); \tilde{\textbf{x}}\in\mathbb{R}^{H\times W\times C} \tag{3}\]
\[\textbf{x}_{out}=\tilde{\textbf{x}}+\textbf{x}_{in}; \textbf{x}_{out}\in\mathbb{R}^{H\times W\times C} \tag{4}\]
The WaveMix block extracts learnable and space-invariant features using a convolutional layer, followed by spatial token-mixing and downsampling for scale-invariant feature extraction using 2D-DWT [18], followed by channel-mixing using a learnable MLP (1\(\times\)1 conv) layer, followed by restoring spatial resolution of the feature map using a transposed-convolutional layer. The use of trainable convolutions _before_ the wavelet transform allows the extraction of only those
\begin{table}
\begin{tabular}{l l c c c c c c}
\hline \hline
\multirow{3}{*}{Model} & \multirow{3}{*}{Training dataset} & \multicolumn{6}{c}{Testing metrics on BSD100} \\
\cline{3-8}
 & & \multicolumn{2}{c}{\(2\times\)} & \multicolumn{2}{c}{\(3\times\)} & \multicolumn{2}{c}{\(4\times\)} \\
\cline{3-8}
 & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\
\hline
EDSR [7] & DIV2K & 32.32 & 0.9013 & 29.25 & 0.8093 & 27.71 & 0.7420 \\
RCAN [8] & DIV2K & 32.41 & 0.9027 & 29.32 & 0.8111 & 27.77 & 0.7436 \\
SAN [1] & DIV2K & 32.42 & 0.9028 & 29.33 & 0.8112 & 27.78 & 0.7436 \\
IGNN [1] & DIV2K & 32.41 & 0.9025 & 29.31 & 0.8105 & 27.77 & 0.7434 \\
HAN [9] & DIV2K & 32.41 & 0.9027 & 29.32 & 0.8110 & 27.80 & 0.7442 \\
NLSN [1] & DIV2K & 32.43 & 0.9027 & 29.34 & 0.8117 & 27.78 & 0.7444 \\
RCAN-it [1] & DF2K & 32.48 & 0.9034 & 29.39 & 0.8125 & 27.87 & 0.7459 \\
SwinIR [4] & DF2K & 32.53 & 0.9041 & 29.46 & 0.8145 & 27.92 & 0.7489 \\
EDT [12] & DF2K & 32.52 & 0.9041 & 29.44 & 0.8142 & 27.91 & 0.7483 \\
HAT [1] & DF2K & 32.62 & 0.9053 & 29.54 & 0.8167 & 28.00 & 0.7517 \\
SwinFIR* [2] & DF2K & 32.64 & 0.9054 & 29.55 & 0.8169 & 28.03 & 0.7520 \\
HAT-L* [1] & DF2K & 32.74 & 0.9066 & **29.63** & **0.8191** & **28.09** & 0.7551 \\
WaveMixSR & DIV2K & **33.08** & **0.9322** & 28.38 & 0.8043 & 27.65 & **0.7605** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Quantitative comparison with state-of-the-art methods on the BSD100 dataset shows that WaveMixSR performs better using less training data (* indicates models that were pre-trained on ImageNet).
feature maps that are suitable for the chosen wavelet basis functions. The convolutional layer \(c\) decreases the embedding dimension \(C\) by a factor of four so that the concatenated output **x** after 2D-DWT has the same number of channels as the input \(\textbf{x}_{in}\) (Eq. 1 and Eq. 2). That is, since 2D-DWT is a lossless transform, it expands the number of channels by the same factor (using concatenation) by which it reduces the spatial resolution, by computing an approximation sub-band (low-resolution approximation) and three detail sub-bands (spatial derivatives) [19] for each input channel (Eq. 2). The use of this image-appropriate and lossless downsampling using 2D-DWT allows WaveMix to use fewer layers and parameters.
The output \(\textbf{x}\) is then passed to an MLP layer \(m\), which has two \(1\times 1\) convolutional layers with an inverse bottleneck design (multiplication factor \(>1\)) separated by a GELU non-linearity. After this, the feature map resolution is doubled using transposed-convolutions \(t\) followed by batch normalization \(b\) (Eq. 3). A residual connection is used to ease the flow of the gradient [20] (Eq. 4).
Among the different types of mother wavelets available, we used the Haar wavelet (a special case of the Daubechies wavelet [19], also known as Db1), which is frequently used due to its simplicity and faster computation. Haar wavelet is both orthogonal and symmetric in nature and has been extensively used to extract basic structural information from images [21]. For even-sized images, it reduces the dimensions exactly by a factor of \(2\), which simplifies the designing of the subsequent layers.
Since the upsampling is done using parameter-free interpolation techniques like bilinear and bicubic interpolation in Pytorch, WaveMix blocks can focus on reconstructing the HR details and sharpening of the images. The same network can be used for performing any SR task (\(2\times,3\times\) or \(4\times\)) by only modifying the upsampling blocks without any architectural changes or increasing the number of parameters.
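Putting Eqs. 1-4 together, a minimal PyTorch sketch of one WaveMix block is shown below. The `pytorch_wavelets` package is assumed for the 2D-DWT, and the kernel sizes and the MLP multiplication factor of 2 are illustrative assumptions rather than the authors' exact hyperparameters.

```python
import torch
import torch.nn as nn
from pytorch_wavelets import DWTForward  # 2D discrete wavelet transform

class WaveMixBlock(nn.Module):
    """One WaveMix block following Eqs. 1-4 (C must be divisible by 4)."""
    def __init__(self, C: int, mult: int = 2):
        super().__init__()
        self.reduce = nn.Conv2d(C, C // 4, 1)        # Eq. 1: C -> C/4 channels
        self.dwt = DWTForward(J=1, wave='haar')      # Eq. 2: one-level Haar DWT
        self.mlp = nn.Sequential(                    # Eq. 3: inverse-bottleneck MLP
            nn.Conv2d(C, C * mult, 1), nn.GELU(), nn.Conv2d(C * mult, C, 1))
        self.upconv = nn.ConvTranspose2d(C, C, 4, stride=2, padding=1)
        self.bn = nn.BatchNorm2d(C)

    def forward(self, x_in: torch.Tensor) -> torch.Tensor:
        x0 = self.reduce(x_in)
        yl, yh = self.dwt(x0)                        # yl: approximation sub-band
        d = yh[0]                                    # (B, C/4, 3, H/2, W/2) details
        x = torch.cat([yl, d[:, :, 0], d[:, :, 1], d[:, :, 2]], dim=1)
        x = self.bn(self.upconv(self.mlp(x)))        # Eq. 3: restore H x W
        return x + x_in                              # Eq. 4: residual connection
```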
## 4 Experiments and Results
### Datasets
We used the DIV2K dataset [22] for training WaveMixSR. We did not employ any pre-training on larger datasets such as DF2K [22] or ImageNet [23], to compare the performance in training data-constrained settings. The performance of WaveMixSR was tested on four benchmark datasets - BSD100 [24], Urban100 [25], Set5 [26], and Set14 [27].
### Implementation Details
All experiments were done with a single 80 GB Nvidia A100 GPU. We used AdamW optimizer (\(\alpha=0.001,\beta_{1}=0.9,\beta_{2}=0.999,\epsilon=10^{-8}\)) with a weight decay of 0.01 during initial epochs and then used SGD with a learning rate of \(0.001\) and \(\text{momentum}=0.9\) during the final 50 epochs [29; 30]. A dropout of 0.3 is used in our experiments. A batch size of 1 was used when the full-resolution images were passed to the model and a batch size of 432 was used when images were passed as \(64\times 64\) resolution patches.
The low-resolution images were generated from the HR images by using bicubic down-sampling in Pytorch. We used the full-resolution HR image as the target and generated the input LR image using down-sampling for each of the SR
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Model & \#Params. & \#Multi-Adds. \\
\hline
SwinIR [4] & 11.8 M & 49.6 G \\
HAT [28] & 20.8 M & 103.7 G \\
**WaveMixSR** & **1.7 M** & **25.8 G** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Model complexity comparison of WaveMixSR with other state-of-the-art methods such as SwinIR [4] and HAT [28] on \(4\times\) SR of a \(64\times 64\) input patch.
\begin{table}
\begin{tabular}{l l c c c c c c}
\hline \hline
\multirow{2}{*}{Scale} & \multirow{2}{*}{Training dataset} & \multicolumn{2}{c}{Set5} & \multicolumn{2}{c}{Set14} & \multicolumn{2}{c}{Urban100} \\
\cline{3-8}
 & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\
\hline
\(\times 2\) & DIV2K & 35.80 & 0.9543 & 31.27 & 0.9044 & 29.14 & 0.9078 \\
\(\times 3\) & DIV2K & 31.92 & 0.9145 & 28.77 & 0.8413 & 25.82 & 0.8193 \\
\(\times 4\) & DIV2K & 29.37 & 0.8627 & 26.25 & 0.7511 & 23.57 & 0.7300 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Quantitative results of WaveMixSR on other benchmark SR datasets.
tasks. No data augmentations were used while training the WaveMixSR models. Huber loss was used to optimize the parameters. We used automatic mixed precision in PyTorch during training. For the quantitative results, PSNR and SSIM (calculated on the Y channel) are reported.
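For reference, a short sketch of the Y-channel PSNR computation is given below; this assumes the standard definition, and any benchmark-specific border cropping is omitted.

```python
import torch

def psnr_y(sr_y: torch.Tensor, hr_y: torch.Tensor, max_val: float = 255.0) -> float:
    """PSNR between super-resolved and ground-truth Y (luma) channels."""
    mse = torch.mean((sr_y.float() - hr_y.float()) ** 2)
    return (10 * torch.log10(max_val ** 2 / mse)).item()
```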
An embedding dimension of 144 was used in the WaveMix blocks. Four WaveMix blocks were connected in series in the Y-channel path. The convolution layers before and after the WaveMix blocks, which were used to vary channel dimensions, employed \(3\times 3\) kernels with stride and padding set to 1 to maintain the feature resolution.
Figure 3: Visual results of \(2\times\) SR on the BSD100 dataset. Each column from the left shows a patch from the HR image (shown as a small image near the corner), the same patch extracted from the LR image, and a patch taken from the model output, respectively. The filename of the image is given below the HR image, and the PSNR/SSIM of the model output is reported below the model output. The values displayed are computed for the whole image and not just the patch.
### Results
Table 1 shows the quantitative performance comparison of WaveMixSR with other models such as EDSR [7], RCAN, SAN, IGNN, HAN, NLSN, RCAN-it, EDT, SwinIR [4], SwinFIR and HAT on the BSD100 dataset. We can see that WaveMixSR outperforms all the other models on the \(2\times\) SR task: its PSNR is 0.34 dB higher than that of the prior state-of-the-art HAT-L. In terms of the SSIM metric, it also outperforms HAT-L by 0.0256 for \(2\times\) SR and by 0.0054 for \(4\times\) SR on BSD100.
Furthermore, WaveMixSR achieves these performance gains without needing ImageNet pre-training like HAT-L and SwinFIR, or training on the much larger DF2K dataset. BSD100 is a classical dataset composed of a large variety of images, ranging from natural scenes to object-specific ones such as plants, people, and scenery. Unlike datasets such as Set5 and Set14, which contain only a few images (5 and 14, respectively), BSD100 has 100 images, which gives a more reliable measure of model capability on common images. Urban100 contains only images of urban settings with buildings, which mostly have a periodic lattice structure with windows and straight lines. BSD100 images have more variation in shape, since the dataset contains a larger number of objects, scenery, animals, and people. All images in the BSD100 dataset are of \(480\times 320\) resolution. The performance of WaveMixSR on BSD100 demonstrates our model's ability to generate natural HR images.
The computational and parametric complexity of our method was compared with the other state-of-the-art transformer-based methods such as SwinIR and HAT, as shown in Table 2. The number of multiply-add operations is counted at an input size of \(64\times 64\). We see that WaveMixSR is highly parameter-efficient compared to other models, due to the presence of parameter-free token-mixing using 2D-DWT. The self-attention, channel attention, and cross-attention in transformer-based methods introduce roughly \(7-12\times\) more parameters compared to WaveMixSR (Table 2). WaveMixSR also saves on compute, as it uses half the multi-add operations of SwinIR and one-fourth those of HAT.
We provide the visual results of WaveMixSR in Figure 3. In "img_067", we can see that WaveMixSR was able to regenerate the star emblem on the side of the airplane and sharpen the image. On "img_063" and "img_035" we can see the ability of our model to regenerate facial features from LR images. Both eyes and eyebrows were reconstructed without blurring in "img_063". The wrinkles and texture of the face and around the eyelids, which were completely absent in the LR image, were reconstructed successfully by our model. The blurry individual in "img_017" was fully reconstructed and sharpened along with the cliff rocks. WaveMixSR is also able to reconstruct the stripes on the bird in "img_093" and recover lattice content such as windows on the skyscraper in "img_095".
We expect further improvement in the performance of our model when trained on the larger DF2K dataset, or with ImageNet pre-training as used in the previous state-of-the-art models. The use of adversarial training [31] can also be tested for further improvements.
### Ablation Studies
We observed that using bilinear and bicubic interpolation for upsampling the images in both the Y-channel and CbCr-channel paths of WaveMixSR gave similar SR performance.
We experimented with various loss functions and their linear combinations to identify the most suitable loss function for SR. As shown in Table 4, we found that the Huber loss, which behaves like the L2 loss when the error is small and like the L1 loss when the error is large, is most effective in optimizing the model parameters.
We also experimented with a single-path model where all three channels (YCbCr) are sent to the WaveMix blocks, without a separate path for the CbCr channels. The SR results showed that sending the CbCr channels through the WaveMix blocks does not improve performance. Experiments where the image was processed by WaveMix blocks in RGB space instead of YCbCr space also lowered the PSNR values by \(\sim 3\) dB.
We varied the number of layers, the embedding dimension of the WaveMix blocks, the levels of wavelet decomposition, and the dropout. Increasing the number of layers initially improved reconstruction quality, but performance stagnated after four layers. The number of channels in the WaveMix blocks (embedding dimension) followed a similar pattern beyond 144. We found that we do not need more than one level of 2D DWT for the image super-resolution task, as the receptive field expansion with just one level of the Haar wavelet is enough for low-level vision tasks. Adding more levels did not improve the performance of the model. We also experimented with different values of dropout inside the WaveMix blocks and found that a dropout of 0.3 gave optimum performance.
## 5 Conclusion
In this paper, we propose WaveMixSR, a wavelet-based token-mixing model for image super-resolution. Our model uses 2D-DWT as a token-mixing operation for high-resolution image reconstruction. Experiments show that WaveMixSR performs well with natural images and achieves state-of-the-art results on the BSD100 dataset. It consumes only a fraction of the computational resources and is highly parameter-efficient compared to other state-of-the-art models. Our model was able to outperform other models without resorting to pre-training on large datasets or adversarial training [31]. Our work shows that token-mixers can be employed in low-level vision tasks such as image reconstruction and can outperform transformer-based models using fewer resources.
|
2305.12895 | DEGREE: Decomposition Based Explanation For Graph Neural Networks | Graph Neural Networks (GNNs) are gaining extensive attention for their
application in graph data. However, the black-box nature of GNNs prevents users
from understanding and trusting the models, thus hampering their applicability.
Whereas explaining GNNs remains a challenge, most existing methods fall into
approximation based and perturbation based approaches which suffer from
faithfulness problems and unnatural artifacts, respectively. To tackle these
problems, we propose DEGREE to provide a faithful explanation for GNN
predictions. By decomposing the information generation and aggregation
mechanism of GNNs, DEGREE allows tracking the contributions of specific
components of the input graph to the final prediction. Based on this, we
further design a subgraph level interpretation algorithm to reveal complex
interactions between graph nodes that are overlooked by previous methods. The
efficiency of our algorithm can be further improved by utilizing GNN
characteristics. Finally, we conduct quantitative and qualitative experiments
on synthetic and real-world datasets to demonstrate the effectiveness of DEGREE
on node classification and graph classification tasks. | Qizhang Feng, Ninghao Liu, Fan Yang, Ruixiang Tang, Mengnan Du, Xia Hu | 2023-05-22T10:29:52Z | http://arxiv.org/abs/2305.12895v1 | # DEGREE: Decomposition Based Explanation
###### Abstract
Graph Neural Networks (GNNs) are gaining extensive attention for their application to graph data. However, the black-box nature of GNNs prevents users from understanding and trusting the models, thus hampering their applicability. Whereas explaining GNNs remains a challenge, most existing methods fall into approximation based and perturbation based approaches, which suffer from faithfulness problems and unnatural artifacts, respectively. To tackle these problems, we propose DEGREE (Decomposition based Explanation for GRaph nEural nEtworks) to provide a faithful explanation for GNN predictions. By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction. Based on this, we further design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods. The efficiency of our algorithm can be further improved by utilizing GNN characteristics. Finally, we conduct quantitative and qualitative experiments on synthetic and real-world datasets to demonstrate the effectiveness of DEGREE on node classification and graph classification tasks.
## 1 Introduction
Graph Neural Networks (GNNs) play an important role in modeling data with complex relational information (Zhou et al., 2018), which is crucial in applications such as social networking (Fan et al., 2019), advertising recommendation (Liu et al., 2019), drug generation (Liu et al., 2020), and agent interaction (Casas et al., 2019). However, GNN suffers from its black-box nature and lacks a faithful explanation of its predictions.
Recently, several approaches have been proposed to explain GNNs. Some of them leverage gradient or surrogate models to approximate the local model around the target instance (Huang et al., 2020; Baldassarre and Azizpour, 2019; Pope et al., 2019). Some other methods borrow the idea from perturbation based explanation (Ying et al., 2019; Luo et al., 2020; Lucic et al., 2021), under the assumption that removing the vital information from the input would significantly reduce output confidence. However, approximation based methods do not guarantee the fidelity of the explanation obtained, as Rudin (2019) states that a surrogate that mimics the original model possibly employs distinct features. On the other hand, perturbation based approaches may trigger the adversarial nature of deep models. Chang et al. (2018) reported this phenomenon, where masking some parts of the input image introduces unnatural artifacts. Additionally, additive feature attribution methods (Vu and Thai, 2020; Lundberg and Lee, 2017) such as gradient based methods and GNNExplainer only provide a single heatmap or subgraph as explanation. The nodes in a graph are usually semantically distinct, and we need a fine-grained explanation of the relationships between them. For example, in organic chemistry, the same functional group combined with different structures can exhibit very different properties.
To solve the above problems, we propose DEGREE (Decomposition based Explanation for GRaph nEural nEtworks), which measures the contribution of components of the input graph to the GNN prediction. Specifically, we first summarize the intuition behind the Context Decomposition (CD)
algorithm (Murdoch et al., 2018) and propose that the information flow in GNN's message propagation mechanism is decomposable. Then we design the decomposition schemes for the most commonly used layers and operations in GNNs, so as to isolate the information flow from distinct node groups. Furthermore, we explore the subgraph-level explanation via an aggregation algorithm that utilizes DEGREE and the structural information of the input graph to construct a series of subgraph sets as the explanation. DEGREE guarantees explanation fidelity by directly analyzing GNN feed-forward propagation, instead of relying on input perturbation or the use of alternative models. DEGREE is non-additive and can therefore uncover non-linear relationships between nodes. We quantitatively and qualitatively evaluate the DEGREE on both synthetic and real-world datasets to validate the effectiveness of our method. The contributions of this work are summarized as follows:
* We propose a new explanation method (DEGREE) for GNNs, from the perspective of decomposition. By elucidating the feed-forward propagation mechanism within GNN, DEGREE allows capturing the contribution of individual components of the input graph to the final prediction.
* We propose an aggregation algorithm that provides important subgraphs as explanation in order to mine the complex interactions between graph components. We combine the property of the message propagation mechanism to further reduce the computation.
* We evaluate DEGREE on both synthetic and real-world datasets. The quantitative experiments show that our method could provide faithful explanations. The qualitative experiments indicate that our method may capture the interaction between graph components.
## 2 Related Work
Despite the great success in various applications, the black-box nature of deep models has long been criticized. Explainable Artificial Intelligence (XAI) tries to bridge the gap by understanding the internal mechanism of deep models (Du et al., 2019). Meanwhile, the need to tackle non-Euclidean data, such as geometric information, social networks, has given rise to the development of GNNs. Similar to the tasks on image or text data, GNNs focus on node classification (Henaff et al., 2015), graph classification (Xu et al., 2018; Zhang et al., 2018), and link prediction (Zhang and Chen, 2018; Cai and Ji, 2020). Message passing mechanism allows the information to flow from one node to another along edges, and empowers GNNs with convolutional capabilities for graph data.
While the explainability in image and text domains is widely studied (Shrikumar et al., 2017; Sundararajan et al., 2017; Simonyan et al., 2013), the explainability of GNN is on the rise. First, some recent work adapts the interpretation methods used for traditional CNNs to GNNs (Baldassarre and Azizpour, 2019; Pope et al., 2019). They employ gradient values to investigate the contribution of node features to the final prediction. However, these methods ignore the topological information, which is a crucial property of graph data. Second, some methods trace the model prediction back to the input space in a backpropagation manner layer by layer (Schwarzenberg et al., 2019; Schnake et al., 2020). Third, some methods define a perturbation-based interpretation whereby they perturb node, edge, or node features and identify the components that affect the prediction most. Specifically, GNNExplainer and PGExplainer (Ying et al., 2019) maximize the mutual information between perturbed input and original input graph to identify the important features. Causal Screening (Wang et al., 2021) searches for the important subgraph by monitoring the mutual information from a cause-effect standpoint. CF-GNNExplainer (Lucic et al., 2021) proposes to generate counterfactual explanations by finding the minimal number of edges to be removed such that the prediction changes. In addition, XGNN (Yuan et al., 2020) builds a model-level explanation for GNNs by generating a prototype graph that can maximize the prediction. Moreover, due to the discrete and topological nature of graph data, XGNN defines graph generation as a reinforcement learning task instead of gradient ascent optimization.
Many previous explanation methods for GNNs suffer from adversarial triggering issues, faithfulness issues, and additive assumptions. To this end, we propose a decomposition based explanation for GNNs (DEGREE) to remedy these problems. DEGREE makes it possible to track the contribution of components of the input graph to the final prediction by decomposing a trained GNN. Thus, DEGREE guarantees the integrity of the input and eliminates the adversarial triggering issue of perturbation-based approaches. Since no surrogate models are used, DEGREE guarantees its faithfulness. Meanwhile, by integrating the decomposition into the normal layers, DEGREE does not require any additional training process.
## 3 DEGREE: Decomposition Based Explanation for GNNs
In this section, we introduce the details of the proposed explanation method. First, we introduce the notations and problem definition. Then, we discuss the general idea of decomposition based explanation. Finally, we develop concrete decomposition schemes for different GNNs layers.
### Problem Definition
We first introduce the notations used in this work. Given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of nodes and \(\mathcal{E}\) is the set of edges between nodes. The adjacency matrix of \(\mathcal{G}\) is denoted as \(\mathbf{A}\in\mathbb{R}^{N\times N}\), where \(N=|\mathcal{V}|\) is the number of nodes, so \(\mathcal{V}=\{v_{1},v_{2},...,v_{N}\}\). The nodes are associated with features, and the feature matrix is denoted as \(\mathbf{X}\in\mathbb{R}^{N\times F}\), where \(F\) is the feature dimension.
In this work, we focus on explaining GNN-based classification models. Let \(f\) denote the target GNN model. \(f\) computes the likelihood of a node or graph belonging to the target class, where \(f:\mathcal{G}\mapsto\mathbb{R}^{|\mathcal{V}|}\) or \(f:\mathcal{G}\mapsto\mathbb{R}\), for node or graph classification respectively.
The _goal_ of explanation is to find the most important subgraph in \(\mathcal{G}\) given \(f(\mathcal{G})\), which requires measuring the contribution of all possible subgraphs and finding the ones with high contribution scores. However, there are two challenges to be addressed. (1) Given any subgraph of interest, how to estimate its contribution without breaking up the input graph? (2) The number of all subgraphs in \(\mathcal{G}\) is usually very large, so how to choose candidate subgraphs for improving explanation efficiency? We tackle the first challenge in the following part of Sec 3 and solve the second one in Sec 4.
### Decomposition Based Explanation
In general, a prediction model \(f\) contains multiple layers \(L_{t},t\in\{1,\dots,T\}\):
\[f(\mathbf{X})=L_{T}\circ L_{T-1}\circ\dots\circ L_{2}\circ L_{1}(\mathbf{X}). \tag{1}\]
Let \(\mathbf{X}[t]\) denote the input to \(L_{t}\), so \(\mathbf{X}[t+1]=L_{t}(\mathbf{X}[t])\) and \(\mathbf{X}[1]=\mathbf{X}\). The symbol \(\circ\) denotes function composition. Here \(\mathbf{X}[t]\in\mathbb{R}^{N\times F_{t}}\) is the embedding matrix at the \(t\)-th layer, where \(F_{t}\) is the latent dimension. The embedding vector of the \(i\)-th node at the \(t\)-th layer is denoted as \(\mathbf{X}_{i}[t]\).
The core idea of decomposition based explanation is that, given a target node group (or subgraph) of interest, we estimate its contribution score to the model prediction merely through feed-forward propagation. We call the information propagated from the target group the _target_ portion, and the rest of the information the _background_ portion. It is worth noting that a node being in the target group does not necessarily mean it is important; it only means we are interested in its importance score. In the following, we use \(\gamma\) and \(\beta\) to denote the target and background portion, respectively. Let \(\mathbf{m}\in\{0,1\}^{N}\), where \(\mathbf{m}_{i}=1\) means \(v_{i}\) belongs to the target group and otherwise \(\mathbf{m}_{i}=0\).
Figure 1: Illustration of the DEGREE for decomposing GCN. Node features or latent embeddings contain _target_ portion (orange hemisphere) and an _background_ portion (blue hemisphere). (a)-(c) show the workflow of the GCN, exhibiting only the messages aggregation for node A. (d) demonstrates message aggregation after decomposition. (e) demonstrates the decomposed message flow.
Then, the decomposition is initialized from the layer of node features, where the target portion and background portion of the input feature matrix are: \(\mathbf{X}^{\gamma}=diag(\mathbf{m})\mathbf{X}\) and \(\mathbf{X}^{\beta}=(\mathbf{I}-diag(\mathbf{m}))\mathbf{X}\), respectively. In a neural network, information from different parts of input are merged in the feed-forward process into latent representations, which poses challenges for explanation. Suppose the target and background portion in \(\mathbf{X}[t]\) are known from prior layer, we could explain the model if we can still distinguish the information flows of the two portions inside \(L_{t}\). That is, at layer \(L_{t}\), suppose its input can be decomposed as \(\mathbf{X}[t]=\mathbf{X}^{\gamma}[t]+\mathbf{X}^{\beta}[t]\), the following relations need to hold for explanation:
\[L_{t}^{D}(\mathbf{X}^{\gamma}[t],\mathbf{X}^{\beta}[t])=\Big(\underbrace{\Gamma(\mathbf{X}^{\gamma}[t],\mathbf{X}^{\beta}[t])}_{\mathbf{X}^{\gamma}[t+1]},\ \underbrace{B(\mathbf{X}^{\gamma}[t],\mathbf{X}^{\beta}[t])}_{\mathbf{X}^{\beta}[t+1]}\Big) \tag{2}\]
\[L_{t}(\mathbf{X}[t])=\mathbf{X}[t+1]=\mathbf{X}^{\gamma}[t+1]+\mathbf{X}^{ \beta}[t+1], \tag{3}\]
where \(L_{t}^{D}(\cdot,\cdot)\) denotes the decomposed version of layer \(L_{t}\). \(\Gamma(\cdot,\cdot)\) and \(B(\cdot,\cdot)\) correspond to the contributions of the target and the background portion to the output of layer \(L_{t}\). \(\mathbf{X}^{\gamma}[t+1]\) and \(\mathbf{X}^{\beta}[t+1]\) denote the target and background portion of \(\mathbf{X}[t+1]\) as the input to the next layer. The decomposition above goes from the input, through all intermediate layers, to the final prediction. If a target node group or subgraph is important, then it should contribute most of the prediction, meaning that \(\Gamma(\mathbf{X}^{\gamma}[T],\mathbf{X}^{\beta}[T])\approx f(\mathbf{X})\).
### Intuitions Behind Decomposition Based Explanation For GNN
The intuition behind decomposition based explanation could be summarized as two rules: (1) the target and background portion at a higher layer mainly comes from the target and background portion at the lower layer respectively; (2) ideally there should be little interaction between the target portion and the background portion. Please note that the partition is not dimension-wise, meaning that each latent dimension may contain information from both target and background portions.
Figure 1 briefly illustrates the working principle of GNNs: the model computes a neural message for each node pair and aggregates messages for each node from its neighbors. A major step in decomposing GNNs is that the target and background portion of a node are aggregated from the target and background portions of its neighbours, respectively. This can be easily illustrated by the distributive nature of the GNN information aggregation mechanism:
\[\mathbf{X}[t+1]=\mathbf{A}\mathbf{X}[t]=\mathbf{A}\left(\mathbf{X}^{\gamma} [t]+\mathbf{X}^{\beta}[t]\right)=\underbrace{\mathbf{A}\mathbf{X}^{\gamma}[t ]}_{\mathbf{X}^{\gamma}[t+1]}+\underbrace{\mathbf{A}\mathbf{X}^{\beta}[t]}_{ \mathbf{X}^{\beta}[t+1]}. \tag{4}\]
Nevertheless, the above equation is only a conceptual illustration. A real GNN model could consist of various layers, such as graph convolution layers, fully connected layers, activation layers and pooling layers. Several challenges still need to be tackled to develop an effective explanation method. First, how to design the decomposition scheme for different types of layers? Second, how to efficiently find out the important nodes and subgraphs, by choosing the appropriate target/background group given all possible node combinations?
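Before turning to concrete layers, the distributive step in Eq. 4 can be checked numerically in a few lines of PyTorch; the sketch below verifies that aggregating the two portions separately recovers the original embeddings.

```python
import torch

N, F = 5, 8
A = torch.rand(N, N)                      # any (normalized) adjacency operator
X = torch.rand(N, F)                      # node features
m = torch.tensor([1., 0., 1., 0., 0.])    # target-group mask (nodes 0 and 2)
X_gamma = torch.diag(m) @ X               # target portion
X_beta = torch.diag(1 - m) @ X            # background portion
assert torch.allclose(A @ X, A @ X_gamma + A @ X_beta, atol=1e-5)
```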
### Decomposing GNN Layers
In this work, we consider the decomposition scheme for two commonly used GNN architectures: GCN (Kipf and Welling, 2016) and GAT (Velickovic et al., 2017).
#### 3.4.1 Decomposing GCNs
The GCN architecture consists of graph convolution, fully connected layers, ReLU and maxpooling.
**Graph Convolution Layer:** The graph convolution operation pass messages between nodes:
\[\mathbf{X}[t+1]=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{ \mathbf{D}}^{-\frac{1}{2}}\mathbf{X}[t]\mathbf{W}+\mathbf{b}, \tag{5}\]
where \(\mathbf{W}\) and \(\mathbf{b}\) denote the trainable weights and bias. Here \(\mathbf{b}\) is optional. \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) denotes the adjacency matrix with self loop. The matrix \(\tilde{\mathbf{D}}_{i,i}=\sum_{j}\tilde{\mathbf{A}}_{i,j}\) is the diagonal degree matrix of \(\tilde{\mathbf{A}}\). The corresponding decomposition can be designed as follows:
\[\mathbf{\gamma}[t]=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{X}^{\gamma}[t]\mathbf{W}\,\ \mathbf{\beta}[t]=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D }}^{-\frac{1}{2}}\mathbf{X}^{\beta}[t]\mathbf{W}, \tag{6}\] \[\mathbf{X}^{\gamma}[t+1]=\mathbf{\gamma}[t]+\mathbf{b}\cdot\frac{ \left|\mathbf{\gamma}[t]\right|}{\left|\mathbf{\gamma}[t]\right|+\left|\beta[t]\right| }\,\ \mathbf{X}^{\beta}[t+1]=\mathbf{\beta}[t]+\mathbf{b}\cdot\frac{\left|\mathbf{ \beta}[t]\right|}{\left|\mathbf{\gamma}[t]\right|+\left|\mathbf{\beta}[t]\right|}, \tag{7}\]
where \(\mathbf{X}^{\gamma}[t]\) and \(\mathbf{X}^{\beta}[t]\) are the target and background portion of \(\mathbf{X}[t]\), respectively. The derivation of \(\mathbf{\gamma}[t]\) and \(\mathbf{\beta}[t]\) is intuitive since graph convolution is a linear operation. Motivated by (Singh et al., 2018), \(\mathbf{\gamma}[t]\) and \(\mathbf{\beta}[t]\) have to compete for their share of \(\mathbf{b}\) as in Eq. 7. \(\left|\mathbf{\gamma}[t]\right|\in\mathbb{R}^{F_{t+1}}\) measures the dimension-wise magnitude of \(\mathbf{X}^{\gamma}[t]\) after the linear mapping (\(\left|\mathbf{\beta}[t]\right|\) is defined similarly).
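A minimal sketch of the decomposed graph convolution (Eqs. 6-7) follows. The paper only specifies \(|\cdot|\) as a dimension-wise magnitude, so the L1 norm over the node axis used below is our assumption.

```python
import torch

def decomposed_graph_conv(X_gamma, X_beta, A_hat, W, b):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}; returns the two output portions."""
    gamma = A_hat @ X_gamma @ W                       # Eq. 6, target stream
    beta = A_hat @ X_beta @ W                         # Eq. 6, background stream
    g = gamma.abs().sum(dim=0)                        # |gamma|, dimension-wise
    h = beta.abs().sum(dim=0)                         # |beta|, dimension-wise
    share = g / (g + h + 1e-12)                       # target's share of the bias
    return gamma + b * share, beta + b * (1 - share)  # Eq. 7
```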
**Fully Connected Layer:** Fully connected layers are also prevalent in GNN models; a typical one is shown below:
\[\mathbf{X}[t+1]=\mathbf{X}[t]\Theta+\mathbf{b}, \tag{8}\]
where \(\Theta\) and \(\mathbf{b}\) denote trainable weights and bias. Structure-wise, it is very similar to the GCN. The decomposition can be designed as:
\[\mathbf{X}^{\gamma}[t+1]=\mathbf{X}^{\gamma}[t]\Theta+\mathbf{b}\cdot\frac{ \left|\mathbf{X}^{\gamma}[t]\Theta\right|}{\left|\mathbf{X}^{\gamma}[t]\Theta \right|+\left|\mathbf{X}^{\beta}[t]\Theta\right|},\mathbf{X}^{\beta}[t+1]= \mathbf{X}^{\beta}[t]\Theta+\mathbf{b}\cdot\frac{\left|\mathbf{X}^{\beta}[t] \Theta\right|}{\left|\mathbf{X}^{\gamma}[t]\Theta\right|+\left|\mathbf{X}^{ \beta}[t]\Theta\right|}. \tag{9}\]
**ReLU Activation:** For the activation operator ReLU, we use the telescoping sum decomposition from Murdoch and Szlam (2017). We update the target term first and then update the background term by subtracting this from total activation:
\[\mathbf{X}^{\gamma}[t+1]=\mathrm{ReLU}\left(\mathbf{X}^{\gamma}[t]\right),\ \mathbf{X}^{\beta}[t+1]=\mathrm{ReLU}\left(\mathbf{X}^{\gamma}[t]+\mathbf{X}^{ \beta}[t]\right)-\mathrm{ReLU}\left(\mathbf{X}^{\gamma}[t]\right). \tag{10}\]
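A two-line sketch of this telescoping rule, where the background term absorbs the nonlinear interaction remainder:

```python
import torch

def decomposed_relu(x_gamma: torch.Tensor, x_beta: torch.Tensor):
    g = torch.relu(x_gamma)                       # target keeps ReLU(x_gamma)
    return g, torch.relu(x_gamma + x_beta) - g    # background gets the rest
```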
**Maxpooling:** We track the node indices selected by pooling in both target and background portion.
#### 3.4.2 Decomposing GATs
The graph attention layer in GAT is similar to Eq. 5, but uses the attention coefficients \(\alpha_{i,j}\) to aggregate the information (an alternative way to understand Eq. 5 is that \(\alpha_{i,j}=(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{ \mathbf{D}}^{-\frac{1}{2}})_{i,j}\)):
\[\alpha_{i,j}=\frac{\exp\left(\mathrm{LeakyReLU}\left(\left[\mathbf{X}_{i}[t] \mathbf{W}\|\mathbf{X}_{j}[t]\mathbf{W}\right]\mathbf{a}\right)\right)}{\sum_{ k\in\mathcal{N}_{i}\cup\{i\}}\exp\left(\mathrm{LeakyReLU}\left(\left[ \mathbf{X}_{i}[t]\mathbf{W}\|\mathbf{X}_{k}[t]\mathbf{W}\right]\mathbf{a} \right)\right)}, \tag{11}\]
where \(\|\) represents the concatenation operation. \(\mathbf{W}\) and \(a\) are parameters. \(\mathbf{X}_{i}[t]\) denotes the embedding of node \(i\) at layer \(L_{t}\). \(\mathcal{N}_{i}\) denotes the neighbors of node \(i\).
Therefore, a graph attention layer can be seen as consisting of four smaller layers: linear mapping, concatenation, LeakyReLU activation, and softmax operation. Decomposing a linear mapping is as trivial as decomposing an FC layer. To decompose the concatenation operator:
\[\mathbf{X}_{i}[t]\|\mathbf{X}_{j}[t]=\mathbf{X}_{i}^{\gamma}[t]\|\mathbf{X}_{j }^{\gamma}[t]+\mathbf{X}_{i}^{\beta}[t]\|\mathbf{X}_{j}^{\beta}[t]. \tag{12}\]
For LeakyReLU, the idea of decomposition is the same as ReLU. For the softmax operation, we split the coefficients proportionally to the exponential value of the target and the background term of the input:
\[\mathbf{X}^{\gamma}[t+1]=softmax\left(\mathbf{X}[t]\right)\cdot\frac{\exp\left(\left|\mathbf{X}^{\gamma}[t]\right|\right)}{\exp\left(\left|\mathbf{X}^{\gamma}[t]\right|\right)+\exp\left(\left|\mathbf{X}^{\beta}[t]\right|\right)},\quad\mathbf{X}^{\beta}[t+1]=softmax\left(\mathbf{X}[t]\right)\cdot\frac{\exp\left(\left|\mathbf{X}^{\beta}[t]\right|\right)}{\exp\left(\left|\mathbf{X}^{\gamma}[t]\right|\right)+\exp\left(\left|\mathbf{X}^{\beta}[t]\right|\right)}. \tag{13}\]
Here we employ the same motivation that was used to split the bias term in Eq. 7, and let \(\left|\mathbf{X}^{\gamma}[t]\right|\) and \(\left|\mathbf{X}^{\beta}[t]\right|\) compete for the original value. The details of decomposing the attention coefficients can be found in Appendix B.
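A sketch of the proportional softmax split in Eq. 13, mirroring the bias-splitting rule of Eq. 7 (the element-wise absolute value is our reading of \(|\cdot|\) here):

```python
import torch

def decomposed_softmax(x_gamma: torch.Tensor, x_beta: torch.Tensor, dim: int = -1):
    full = torch.softmax(x_gamma + x_beta, dim=dim)   # softmax of the full input
    eg = torch.exp(x_gamma.abs())                     # exp(|target|)
    eb = torch.exp(x_beta.abs())                      # exp(|background|)
    return full * eg / (eg + eb), full * eb / (eg + eb)
```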
## 4 Subgraph-Level Explanation via Agglomeration
Through decomposition, we could compute the contribution score of any given node groups. However, this is not enough for explaining GNNs. Our goal of explanation is to find the most important subgraph structure, but it is usually impossible to exhaustively compute and compare the scores of all possible subgraphs. In this section, we design an agglomeration algorithm to tackle the challenge.
### Contextual Contribution Score
We first introduce a new scoring function to be used in our algorithm. Different from the _absolute contribution_ scores provided by decomposition, in many scenarios, we are more interested in the _relative contribution_ of the target compared to its contexts. Let \(\mathcal{V}^{\gamma}\subset\mathcal{V}\) be the target node group, and \(f^{D}(\cdot)\) be the contribution score calculated from decomposition. The relative contribution of \(\mathcal{V}^{\gamma}\) averaged over different contexts is calculated as:
\[\phi(\mathcal{V}^{\gamma})\triangleq\mathbb{E}_{\mathcal{C}\sim RW\left( \mathcal{N}_{L}(\mathcal{V}^{\gamma})\right)}\left[f^{D}(\mathcal{V}^{\gamma} \cup\mathcal{C})-f^{D}(\mathcal{C})\right], \tag{14}\]
where \(\mathcal{C}\) is the context around \(\mathcal{V}^{\gamma}\), and \(\mathcal{N}_{L}\left(\mathcal{V}^{\gamma}\right)\) contains the neighboring nodes of \(\mathcal{V}^{\gamma}\) within \(L\) hops. Here we use a random walk process \(RW()\) to sample \(\mathcal{C}\) within the neighborhood around \(\mathcal{V}^{\gamma}\). The reason for sampling within the neighborhood is based on the information aggregation mechanism, whereby a node only collects information from neighbors within a certain number of hops, determined by the depth of the GNN.
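A Monte-Carlo sketch of Eq. 14 is given below; `f_D` is assumed to return the decomposition-based contribution score of a node set, and `sample_context` is a hypothetical random-walk sampler restricted to the \(L\)-hop neighborhood (both names are ours, not the authors').

```python
import random

def sample_context(neighborhood: set, adj: dict, walk_len: int = 8) -> set:
    """Hypothetical sampler: random walk kept inside the L-hop neighborhood."""
    v = random.choice(sorted(neighborhood))
    ctx = {v}
    for _ in range(walk_len):
        options = sorted(adj[v] & neighborhood) or sorted(neighborhood)
        v = random.choice(options)
        ctx.add(v)
    return ctx

def contextual_contribution(f_D, target: set, neighborhood: set, adj: dict,
                            num_samples: int = 20) -> float:
    """Estimate phi(target) of Eq. 14 by averaging over sampled contexts."""
    total = 0.0
    for _ in range(num_samples):
        C = sample_context(neighborhood, adj)
        total += f_D(target | C) - f_D(C)   # relative gain of adding the target
    return total / num_samples
```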
### Subgraphs Construction via Agglomeration
Our agglomeration algorithm initializes from individual nodes and terminates when the whole graph structure is included. Specifically, the interpretation process constructs a series of intermediate subgraph sets \(\mathcal{E}=\left\{\mathcal{S}_{1},...,\mathcal{S}_{I}\right\}\), where \(\mathcal{S}_{i}=\left\{\mathcal{B}_{1},...,\mathcal{B}_{M_{i}}\right\}\) contains \(M_{i}\) subgraphs. At each step, the algorithm searches for the candidate node or node group \(v\) that most significantly affects the contribution of subgraph \(\mathcal{B}_{m},m\in\left\{1,...,M_{i}\right\}\) according to the ranking score \(r(v)\):
\[s(v)\triangleq\phi\left(\left\{v\right\}\cup\mathcal{B}_{m}\right)-\phi\left( \mathcal{B}_{m}\right),r(v)\triangleq\left|s(v)-\mathbb{E}_{v^{\prime}}\left[ s(v^{\prime})\right]\right|,\ s.t.\ v,v^{\prime}\in\mathcal{N}(\mathcal{B}_{m}), \tag{15}\]
where \(\mathcal{N}\left(\mathcal{B}_{m}\right)\) is the set of neighbor nodes or node groups of \(\mathcal{B}_{m}\). Here \(s(v)\) measures the influence of \(v\) on \(\mathcal{B}_{m}\), while \(r(v)\) further revises the value by considering the relative influence of \(v\) compared to other candidates \(v^{\prime}\). At the beginning of our algorithm, \(\mathcal{B}_{m}=\emptyset\). A node \(v\) is selected if \(r(v)\geq q\cdot\max_{v^{\prime}}r(v^{\prime})\), and we set \(q=0.6\) in experiments. The selected nodes are merged into subgraphs to form \(\mathcal{S}_{i+1}\). Small subgraphs will be merged into larger ones, so we have \(M_{i}\leq M_{j},i\geq j\). The algorithm executes the above steps repeatedly and terminates when all nodes are included (i.e., \(M_{i}=1\)), or when a certain pre-defined step budget is used up. Further details of the algorithm can be found in Section C of the Appendix.
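One expansion step of this procedure can be sketched as follows, with `phi` standing for the contextual contribution score of Eq. 14; the bookkeeping for multiple simultaneous subgraphs and their merging into \(\mathcal{S}_{i+1}\) is omitted.

```python
def agglomeration_step(subgraph: set, neighbors: set, phi, q: float = 0.6) -> set:
    """Absorb all candidates within a factor q of the strongest one (Eq. 15)."""
    base = phi(subgraph)
    s = {v: phi(subgraph | {v}) - base for v in neighbors}   # influence s(v)
    mean_s = sum(s.values()) / len(s)
    r = {v: abs(sv - mean_s) for v, sv in s.items()}         # ranking score r(v)
    threshold = q * max(r.values())
    return subgraph | {v for v, rv in r.items() if rv >= threshold}
```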
## 5 Experiments
### Experimental Datasets
Following the setting in previous work (Ying et al., 2019), we adopt both synthetic datasets and real-world datasets. The statistics of all datasets are given in Section A of the Appendix.
#### 5.1.1 Synthetic Datasets
* **BA-Shapes.** The base graph is a Barabási-Albert (BA) graph. The motif is a five-node "house" structure. 80 motifs are randomly attached to the nodes of the base graph. Nodes are classified into four classes according to their structural roles (top, middle, or bottom of a house, or base-graph node).
* **BA-Community.** It is the union of two BA-Shapes graphs with community-dependent node features. Nodes are classified into eight classes according to their structural roles and community memberships.
* **Tree-Cycles.** The Tree-Cycles dataset germinates from an eight-level balanced binary tree base graph. The motif is a six-node cycle. 80 motifs are randomly added to the nodes of the base graph. The nodes are classified into two classes, i.e., base-nodes and motif-nodes.
* **Tree-Grids.** It is constructed in the same way as the Tree-Cycles dataset. The Tree-Grid dataset has the same base graph while replacing the cycle motif with a 3-by-3 grid motif.
#### 5.1.2 Real-world Datasets
* **MUTAG.** It is a dataset with 4,337 molecule graphs. Each graph is labeled according to its mutagenic effect on a bacterium. As discussed in (Debnath et al., 1991), molecules with the chemical group \(\mathrm{NH}_{2}\) or \(\mathrm{NO}_{2}\) and carbon rings are known to be mutagenic. Since non-mutagenic molecules have no explicit motifs, only mutagenic ones are presented during the analysis.
* **Graph-SST2.** It is a dataset of 70,042 sentiment graphs, converted through the Biaffine parser (Liu et al., 2021). Every graph is labeled according to its sentiment, either positive or negative. The nodes denote words, and edges denote their relationships. The node features are initialized as the pre-trained BERT word embeddings (Devlin et al., 2019).
### Experimental Setup
#### 5.2.1 Evaluation Metrics
The interpretation problem is formalized as a binary classification problem distinguishing between important and unimportant structures (nodes or edges, depending on the nature of the ground truth). A good explanation should assign high scores to important structures and low scores to unimportant ones. For the synthetic datasets, we consider the nodes within the motif to be important and the rest to be unimportant. In the MUTAG dataset, the "N-H" and "N-O" edges are important, and the rest are unimportant. We conduct quantitative experiments on the synthetic datasets and the MUTAG dataset, and qualitative experiments on the MUTAG dataset and the Graph-SST2 dataset. We adopt the Area Under Curve (AUC) to evaluate the performance quantitatively.
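As a minimal illustration of this protocol, explanation quality can be scored as below, treating ground-truth motif membership as binary labels and explanation scores as predictions; the data here are made up for illustration.

```python
from sklearn.metrics import roc_auc_score

# 1 = edge inside a ground-truth motif (e.g., an "N-O" bond in MUTAG), 0 = not
edge_labels = [1, 1, 0, 0, 1, 0]
# importance scores an explainer assigned to the same edges
edge_scores = [0.9, 0.7, 0.2, 0.4, 0.8, 0.1]
print(roc_auc_score(edge_labels, edge_scores))  # AUC near 1 = faithful
```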
#### 5.2.2 Baseline Methods and Implementation Details
Baseline Methods.We compare with four baseline methods: GRAD (Ying et al., 2019), GAT (Velickovic et al., 2017), GNNExplainer (Ying et al., 2019) and PGExplainer (Luo et al., 2020). (1) GRAD computes the gradients of the GNN output with respect to the adjacency matrix or node features. (2) GAT averages the attention coefficients across all graph attention layers as edge importance. (3) GNNExplainer optimizes a soft mask over edges or node features by maximizing the mutual information. (4) PGExplainer learns an MLP (Multi-Layer Perceptron) model to generate the mask using the reparameterization trick (Jang et al., 2017).
Construction of Target Models.We use all synthetic datasets together with the MUTAG dataset for the quantitative evaluation experiments. We train a GCN and a GAT model as the models to be explained for all datasets, following the setup of previous work. Meanwhile, we construct DEGREE(GCN) and DEGREE(GAT) as the decomposed versions for our method. We set the number of GNN layers to 3 for all datasets, except for the Tree-Grid dataset where it is 4, since the 3-hop neighborhoods of some target nodes contain only in-motif nodes (i.e., no negative samples). For the qualitative evaluation experiment, we use the MUTAG dataset and the Graph-SST2 dataset. For all model training, we use the Adam optimizer. All datasets are divided into train/validation/test sets.
Explainer Setting.For all baseline methods, we keep the default hyper-parameters. For baselines that require training additional modules (e.g., PGExplainer), we also split the data accordingly. We use all nodes in the motif for evaluation. For explainers that only provide node explanations, we average the scores of an edge's two endpoint vertices as its edge explanation. Details of the explainer settings can be found in Section A of the Appendix.
### Quantitative Evaluation
In this section, we introduce experimental results on both the synthetic datasets and the MUTAG dataset. For node classification, the computation graph only contains nodes within \(l\) hops of the target node, where \(l\) is the number of model layers; nodes outside the computation graph cannot influence the final prediction. Table 1 shows the explanation AUC and time efficiency (for PGExplainer, the training time is shown outside the parentheses). We have the following key findings. First, DEGREE achieves SOTA performance in most scenarios, showing its advantage in faithfulness over the baseline methods. Second, DEGREE achieves similarly high performance on both GCN and GAT models, which demonstrates the adaptability of our approach. Third, the improvement in AUC on BA-Community (\(\sim\)9%) and MUTAG (\(\sim\)5%) is more noticeable; these two datasets differ from the others in that their node features are not constant. This shows that our explanation method can properly handle node features as they propagate through the graph structure. In terms of efficiency, DEGREE is implemented by decomposing the built-in forward propagation function, so there is no training process. The time cost is highly correlated with the complexity of the target model and the input size. We report further quantitative experiments in Appendix D.
### Qualitative Evaluation
In this section, we use the Graph-SST2 and MUTAG datasets to visualize explanations and demonstrate the effectiveness of our subgraph agglomeration algorithm from Section 4.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Task & \multicolumn{4}{c|}{Node Classification} & Graph Classification \\ \hline Dataset & BA-Shapes & BA-Community & Tree-Cycles & Tree-Grid & MUTAG \\ \hline GRAD & 0.882 & 0.750 & 0.905 & 0.612 & 0.787 \\ \hline GAT & 0.815 & 0.739 & 0.824 & 0.667 & 0.763 \\ \hline GNNExplainer & 0.832 & 0.750 & 0.862 & 0.842 & 0.742 \\ \hline PGExplainer & 0.963 & 0.894 & **0.960** & 0.907 & 0.836 \\ \hline DEGREE(GCN) & **0.991**\(\pm\)0.005 & **0.984**\(\pm\)0.005 & 0.958\(\pm\)0.004 & **0.925**\(\pm\)0.040 & **0.875**\(\pm\)0.028 \\ \hline DEGREE(GAT) & 0.990\(\pm\)0.008 & 0.982\(\pm\)0.010 & 0.919\(\pm\)0.027 & 0.935\(\pm\)0.031 & 0.863\(\pm\)0.042 \\ \hline \multicolumn{6}{c}{Time Efficiency (s)} \\ \hline GNNExplainer & 0.65 & 0.78 & 0.69 & 0.72 & 0.43 \\ \hline PGExplainer & 116.72(0.014) & 35.71(0.024) & 117.96(0.09) & 251.37(0.011) & 503.52(0.012) \\ \hline DEGREE(GCN) & 0.44 & 1.02 & 0.25 & 0.37 & 0.83 \\ \hline DEGREE(GAT) & 1.98 & 2.44 & 0.96 & 1.03 & 0.79 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative Experiment Results.
Figure 2: The subgraph agglomeration results on MUTAG dataset. The first row shows a correct prediction. The second and the third row report two typical examples of errors. Red is mutagenic, blue is non-mutagenic, gray is not selected. The colored edges link the selected nodes. The process goes from left to right. The graph on the far left in each row displays the score for individual nodes.
In the first example, we show three visualizations from the MUTAG dataset in Figure 2. The first row represents a correctly predicted instance. Our model successfully identifies the "NO2" motif as a moderately positive symbol for mutagenicity. The "H" or the carbon ring alone is considered a negative sign for mutagenicity; once the "NO2" and the ring join, they become a strong positive symbol for mutagenicity. This phenomenon is consistent with the knowledge that carbon rings and "NO2" groups tend to be mutagenic (Debnath et al., 1991). We also examine instances with wrong predictions and show two representative examples. In the second row of Fig. 2, the GCN model precisely identifies the "NH2" motif together with the ring motif as a strong mutagenic symbol, but a disconnected fragment shows a strong non-mutagenic effect, ultimately leading to an incorrect prediction. The third row shows another typical failure pattern: the model catches the "NH2" and part of the carbon ring as a mutagenic symbol, but the "CH3" on the bottom right shows a strong non-mutagenic effect, and the model erroneously learns a negative interaction between them.
In the second example, we show two visualizations for the Graph-SST2 dataset in Figure 3. The sentence in the first row is labeled negative, yet the model's prediction is incorrect. Our algorithm explains the decision: the GNN model regards the first half of the sentence ("Maybe it's asking too much") as negative and the second half ("going to inspire me", "want a little more than this") as positive. However, the model cannot tell the subjunctive tone behind the word "if", and consequently yields a positive but incorrect prediction. The sentence in the second row is negative, and the prediction is correct. Our algorithm precisely identifies the positive part ("Though Ford and Neeson capably hold our interest") and the negative part ("but its just not a thrilling movie"). Moreover, it reveals that the GCN model can correctly learn the transition relationship between these two components.
We observe from the above examples that our method can detect non-linear interactions between subgraphs throughout the agglomeration process. This can help diagnose incorrect predictions and enhance the credibility of the model. More visualizations and an efficiency study are provided in Appendix E.
## 6 Conclusions
In this work, we present DEGREE, which explains a GNN by decomposing its feedforward propagation process. After summarizing the fundamental rules for designing decomposition-based explanations, we propose concrete decomposition schemes for the commonly used layers in GNNs. We also design an algorithm to provide subgraph-level explanations via agglomeration, which efficiently employs the topological information in graphs. Experimental results show that DEGREE outperforms baselines in terms of faithfulness and can capture meaningful structures in graph data.
Figure 3: The subgraph agglomeration results on the Graph-SST2 dataset. The first row shows an incorrect prediction, the second row shows the correct one. Red is negative, blue is positive. |
2308.12882 | LCANets++: Robust Audio Classification using Multi-layer Neural Networks
with Lateral Competition | Audio classification aims at recognizing audio signals, including speech
commands or sound events. However, current audio classifiers are susceptible to
perturbations and adversarial attacks. In addition, real-world audio
classification tasks often suffer from limited labeled data. To help bridge
these gaps, previous work developed neuro-inspired convolutional neural
networks (CNNs) with sparse coding via the Locally Competitive Algorithm (LCA)
in the first layer (i.e., LCANets) for computer vision. LCANets learn in a
combination of supervised and unsupervised learning, reducing dependency on
labeled samples. Motivated by the fact that auditory cortex is also sparse, we
extend LCANets to audio recognition tasks and introduce LCANets++, which are
CNNs that perform sparse coding in multiple layers via LCA. We demonstrate that
LCANets++ are more robust than standard CNNs and LCANets against perturbations,
e.g., background noise, as well as black-box and white-box attacks, e.g.,
evasion and fast gradient sign (FGSM) attacks. | Sayanton V. Dibbo, Juston S. Moore, Garrett T. Kenyon, Michael A. Teti | 2023-08-23T17:42:00Z | http://arxiv.org/abs/2308.12882v2 | # LCANets++: Robust Audio Classification Using Multi-Layer Neural Networks with Lateral Competition
###### Abstract
Audio classification aims at recognizing audio signals, including speech commands or sound events. However, current audio classifiers are susceptible to perturbations and adversarial attacks. In addition, real-world audio classification tasks often suffer from limited labeled data. To help bridge these gaps, previous work developed neuro-inspired convolutional neural networks (CNNs) with sparse coding via the Locally Competitive Algorithm (LCA) in the first layer (i.e., LCANets) for computer vision. LCANets learn in a combination of supervised and unsupervised learning, reducing dependency on labeled samples. Motivated by the fact that auditory cortex is also sparse, we extend LCANets to audio recognition tasks and introduce LCANets++, which are CNNs that perform sparse coding in multiple layers via LCA. We demonstrate that LCANets++ are more robust than standard CNNs and LCANets against perturbations, e.g., _background noise_, as well as black-box and white-box attacks, e.g., _evasion_ and _fast gradient sign (FGSM)_ attacks.
Sayanton V. Dibbo\({}^{\lx@sectionsign\dagger}\)+, Juston S. Moore\({}^{\lx@sectionsign}\), Garrett T. Kenyon\({}^{\lx@sectionsign}\), Michael A. Teti\({}^{\lx@sectionsign}\)
\({}^{\lx@sectionsign}\) Los Alamos National Laboratory, Los Alamos, NM, USA, \({}^{\dagger}\)Dartmouth College, Hanover, NH, USA. Index terms: Audio Classification, Robustness, Neural Networks, Adversarial Machine Learning
Footnote †: 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
## 1 Introduction
Audio signal classification for the purposes of sound recognition (SR) or sound event detection (SE) has become an active area of research [1, 2]. This includes using convolutional neural network (CNN) models to recognize human speech commands, e.g., 'yes', 'stop', etc., or to classify sound events like 'baby cries' and 'barking'. However, standard CNNs are notoriously susceptible to perturbations and adversarial attacks [3, 4, 5, 6]. Standard audio classification models also depend heavily on large labeled datasets for good performance [7, 8], but large labeled datasets can be scarce for many common tasks, such as speaker identification. Generating augmented samples for audio data is one proposed approach to mitigate this challenge, but data augmentation can be time-consuming and expensive [9, 10]. Therefore, it is crucial to develop audio classifiers that can learn robust features with limited labeled samples.
Recent studies have shown that CNNs that are more similar to the primary visual cortex are more robust than standard CNNs [11]. Based on this, previous work developed CNNs in which the first layer performed sparse coding via the Locally Competitive Algorithm (LCA) [12, 13, 14], which is a biologically plausible model of the primate primary visual cortex [15]. These CNNs, which we refer to as LCANets, were shown to be more robust than standard CNNs on standard CV tasks. However, there are two issues with this approach that we address here. First, sparse coding models were designed to model the visual cortex, so it is unclear how they will impact the performance of CNNs on audio classification tasks. Second, these LCANets were robust to natural corruptions, but they were susceptible to white-box adversarial attacks [13] unless the exact attack was known beforehand [14].
Motivated by this, we introduce multi-layer LCANets, which perform sparse coding in multiple CNN layers. We refer to these multi-layer LCANets as LCANets++ and train them on audio classification tasks. To test the robustness of LCANets++ relative to LCANets and standard CNNs, we first conduct experiments with different audio perturbations, e.g., _background noise_. In addition, we show that our proposed LCANets++ are more robust than state-of-the-art (SOTA) models (e.g., ResNet18, standard CNNs) and LCANets against white-box attacks, i.e., the _fast gradient sign method (FGSM)_[16] and _projected gradient descent (PGD)_[17] attacks, as well as black-box attacks, i.e., the _evasion attack_.
## 2 Proposed Method
### LCA Layer
As presented in Fig. 1, the LCA layer is the basic building block of the LCA frontend and our proposed LCANets++. The LCA layer converts the input \(\mathcal{X}\) into a coding \(\mathcal{C}\), i.e., a representation of the input \(\mathcal{X}\), using the fewest possible active neurons (i.e., features). The goal of the reconstruction minimization problem is to find the sparse code \(\mathcal{C}\) that is as close as possible to the original input \(\mathcal{X}\):
\[\mathcal{L}_{re}=\min_{\mathcal{C}}\frac{1}{2}\left\|\mathcal{X}-\mathcal{C}\circledast\Phi\right\|_{2}^{2}+\lambda\left\|\mathcal{C}\right\|_{1} \tag{1}\]

where \(\mathcal{L}_{re}\) denotes the reconstruction loss, \(\mathcal{X}\) is the original input, \(\mathcal{C}\) is the sparse code, \(\circledast\) is transpose convolution, \(\Phi\) is the dictionary learned in the last iteration, and \(\lambda\) is the trade-off (i.e., regularization) constant. LCA layers perform lateral competition among neurons, and a neuron's membrane potential follows the ordinary differential equation [13]:
\[\dot{\mathcal{M}}(t)=\frac{1}{\gamma}[\mathcal{D}(t)-\mathcal{M}(t)-\mathcal{C}(t)*\mathcal{S}+\mathcal{C}(t)] \tag{2}\]
where \(\gamma\) is the time constant, \(\mathcal{D}(t)\) is the neuron's input drive obtained by convolving the input with the dictionary, i.e., \(\mathcal{X}*\Phi\); \(\mathcal{M}(t)\) is the neuron's membrane potential, \(\mathcal{S}=\Phi*\Phi\) is the pairwise feature similarity, and \(\mathcal{C}(t)\) is the neuron's firing rate, obtained by applying a soft-threshold activation to the membrane potential \(\mathcal{M}(t)\). Coordinate ascent is used to learn the dictionary \(\Phi\): given an input batch, LCA solves for \(\mathcal{C}\), and \(\Phi\) is then updated with stochastic gradient descent (SGD).
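A minimal sketch of the dynamics in Eqs. (1)-(2) is given below, assuming a fully-connected dictionary and explicit Euler integration; the hyper-parameters and tensor shapes are illustrative, not those of our implementation.

```python
import torch

def soft_threshold(m, lam):
    """Firing rate C(t): soft threshold applied to membrane potentials."""
    return torch.sign(m) * torch.clamp(m.abs() - lam, min=0.0)

def lca_sparse_code(x, Phi, lam=1.0, gamma=100.0, n_steps=200, dt=1.0):
    """Sparse code C for inputs x (batch, d) under dictionary Phi (d, k),
    found by integrating the ODE of Eq. (2) with explicit Euler steps."""
    drive = x @ Phi                # D(t): input drive, inputs paired with Phi
    S = Phi.t() @ Phi              # pairwise feature similarity
    M = torch.zeros_like(drive)    # membrane potentials M(t)
    for _ in range(n_steps):
        C = soft_threshold(M, lam)
        M = M + (dt / gamma) * (drive - M - C @ S + C)   # Eq. (2)
    return soft_threshold(M, lam)
```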
### LCA Frontend
The LCA frontend is the unsupervised pre-training part of an LCANet, as shown in Fig. 1a. It consists of the raw audio waveform converted to MFCCs, which serve as the input signal \(\mathcal{X}\), and the LCA layer that computes the sparse representation \(\mathcal{C}\) of the input \(\mathcal{X}\), which can then be fed to conventional CNN layers for the classification task.
### LCANets
LCANets for audio classification consist of the LCA frontend followed by CNN layers. The LCA frontend learns in an unsupervised fashion and then passes the computed sparse code \(\mathcal{C}\) to the CNN layers, which finally perform the classification task. One major difference is that the sparse code \(\mathcal{C}\) does not need to be reconstructed back into the original input \(\mathcal{X}\) before being fed to the CNN layers, as other reconstruction-based models usually do. This makes LCANets more effective against perturbations while reducing the dependency on labeled audio samples.
### LCANets++
We present the overview of our proposed LCANets++ in Fig. 1b. The basic building blocks of our proposed LCANets++ are LCA layers. In this architecture, multiple LCA layers are inserted that learn in an unsupervised fashion: the convolutional layers in a SOTA CNN backbone are replaced by LCA layers, performing sparse coding in each layer. Similar to [18], in order to reduce over-sparsity, a dense layer (i.e., a batch-normalization layer) is mounted between two consecutive sparse (LCA) layers (Fig. 1b) in our LCANets++.
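This composition can be sketched as follows, with LCA layers standing in for convolutional layers and a dense batch-normalization layer between consecutive sparse layers; the layer sizes and the simplified fully-connected `LCALayer` (reusing the dynamics sketched in Section 2.1) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LCALayer(nn.Module):
    """Sparse-coding layer: learns a dictionary Phi and emits the code C
    produced by the LCA dynamics of Eq. (2) (fully-connected simplification)."""
    def __init__(self, d_in, d_out, lam=1.0):
        super().__init__()
        self.Phi = nn.Parameter(torch.randn(d_in, d_out) / d_in ** 0.5)
        self.lam = lam

    def forward(self, x, n_steps=50, gamma=100.0):
        soft = lambda m: torch.sign(m) * torch.clamp(m.abs() - self.lam, min=0.0)
        drive, S = x @ self.Phi, self.Phi.t() @ self.Phi
        M = torch.zeros_like(drive)
        for _ in range(n_steps):
            C = soft(M)
            M = M + (1.0 / gamma) * (drive - M - C @ S + C)
        return soft(M)

class LCANetPP(nn.Module):
    """Two LCA layers with a dense batch-norm layer in between (cf. Fig. 1b)."""
    def __init__(self, n_feats=40, n_dict=128, n_classes=3):
        super().__init__()
        self.lca1 = LCALayer(n_feats, n_dict)      # sparse layer 1
        self.bn = nn.BatchNorm1d(n_dict)           # dense layer curbs over-sparsity
        self.lca2 = LCALayer(n_dict, n_dict)       # sparse layer 2
        self.head = nn.Linear(n_dict, n_classes)   # supervised classifier

    def forward(self, x):                          # x: (batch, n_feats) MFCCs
        return self.head(self.lca2(self.bn(self.lca1(x))))
```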
## 3 Experiments
In this section, details of the experimental setup, including dataset, pre-processing, and models, are described.
### Dataset and Pre-processing
We experiment with the Google Speech Commands v2 [19] dataset. This dataset contains audio waveforms of 35 classes of human speech commands such as "yes," "no," "left," and "right." We pre-process the raw waveforms of three classes, i.e., "yes," "no," and "stop," to obtain Mel-frequency cepstral coefficient (MFCC) [8] features, and train all models on the MFCCs of the waveforms.
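A possible pre-processing sketch using torchaudio is shown below; the MFCC hyper-parameters and the file name are illustrative assumptions, not our exact configuration.

```python
import torchaudio

SAMPLE_RATE = 16000          # Speech Commands clips are 1 s at 16 kHz
to_mfcc = torchaudio.transforms.MFCC(
    sample_rate=SAMPLE_RATE,
    n_mfcc=40,               # assumed number of coefficients
    melkwargs={"n_fft": 400, "hop_length": 160, "n_mels": 64},
)

waveform, sr = torchaudio.load("yes_example.wav")   # hypothetical clip
features = to_mfcc(waveform)                        # (channel, n_mfcc, time)
```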
### Models
We experiment with regular CNN models with 2 convolutional layers. In our CNN-based LCANets++, we replace both convolutional layers with LCA layers. We also compare our LCANets++ with a larger SOTA model, i.e., ResNet18. In the ResNet18 model, we replace alternate convolutional layers in the first block with LCA layers to obtain the ResNet18_LCA++ model. For the experiments on the performance of LCANets++ against _white-box_ or _black-box_ attacks, we set the regularization constant \(\lambda=1.00\) for better sparse representations and hence improved robustness against perturbations or attacks.
Figure 1: An overview of (a.) LCA frontend and (b.) pipeline of our proposed LCANets++, utilizing sparse coding via multiple LCA layers in the state-of-the-art (SOTA) CNN backbone, enabling lower misclassification on perturbed test sets or attacks.
### Experimental Setup
We run all experiments on 8 nodes of NVIDIA A100-PCIE-40GB GPUs with 64-128 cores on a cluster. We use the PyTorch framework to develop the LCA class and the LCANets++ implementations. We use a train/test split of 70%/30% for all models in this work. For the _background noise_ experiment, we train models for 50 epochs; for the rest of the experiments, models are trained for 20 epochs. We use a learning rate of 0.0001 and the SGD optimizer with momentum 0.9. To add background noise, we impose background noise on the raw waveforms of all test-set audio clips, tuning the SNR [dB] values to obtain different perturbed test sets. Similarly, we perturb the MFCCs with different \(\epsilon\) values.
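The background-noise perturbation can be sketched as scaling a noise clip to a target SNR before mixing; this is a minimal illustration of the procedure, not our exact pipeline.

```python
import torch

def add_background_noise(clean, noise, snr_db):
    """Mix `noise` into `clean` (1-D tensors of equal length) at a target
    SNR in dB, where SNR = 10 * log10(P_signal / P_noise)."""
    p_signal = clean.pow(2).mean()
    p_noise = noise.pow(2).mean()
    # scale the noise so the resulting power ratio matches snr_db
    scale = torch.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

noisy = add_background_noise(torch.randn(16000), torch.randn(16000), snr_db=15)
```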
## 4 Results and Analysis
In this section, we illustrate the key results of our experiments on different perturbations and adversarial attacks.
### Input Perturbations
We test the robustness of standard CNNs without the LCA layer(s), LCANets, and our proposed LCANets++ to perturbations. We experiment with two different cases of input perturbations: i) _background noise_ on the raw audio clips and ii) _Gaussian noise_ on MFCCs, to compare robustness against both perturbation scenarios.
#### 4.1.1 Background Noise
In Fig. 2a, we present the performance of the regular CNN, LCANet, and our proposed LCANet++, tested on different test sets perturbed with _background noise_ at varying SNR [dB] values. Observe that the regular CNN's performance drops drastically as more perturbation is applied to the original waveforms (i.e., lower SNRs), whereas LCANet's performance degrades more slowly with increasing perturbation, and our proposed LCANet++ shows the most robustness compared to LCANet and the regular CNN, as presented in Fig. 2a. This is attributed to the fact that LCA layers learn in an unsupervised fashion, reducing the number of neurons activated through lateral competition. These fewer activated neurons represent the most relevant input features, which are less impacted by slight perturbations.
We also test the robustness of LCANets++ on larger models, i.e., the ResNet18 model with 18 layers. As presented in Fig. 2b, we observe that ResNet18 with multi-layer LCA, i.e., ResNet18_LCANet++, outperforms regular ResNet18 and ResNet18 with LCA in the first layer, i.e., ResNet18_LCANet. From Table 1, we find that for the ResNet18 architecture, LCANets++ only slightly improve the robustness on perturbed test sets over regular ResNet18 without LCA layers, as opposed to the significantly higher robustness LCANets++ exhibit on the regular CNN model. The larger model, with more layers and parameters, makes ResNet18 inherently more robust than regular CNNs, so LCANets++ provide only a slight boost for ResNet18 compared to regular CNNs.
#### 4.1.2 Gaussian Noise
We impose _Gaussian noise_ on the MFCCs with varying \(\epsilon\) values. As presented in Table 2, with increasing \(\epsilon\) (more perturbation), the performance of the regular CNN model goes down. The performance of LCANet and LCANet++ also decreases slightly, but the models with LCA layers still show more robustness than the model without LCA layers, i.e., the regular CNN. This shows that our LCANets++ are more robust not only against perturbations on raw waveforms, but also against perturbations in the feature space, i.e., on the MFCCs.
\begin{table}
\begin{tabular}{l c c c c c} \hline Model & \(SNR=15\) dB & \(20\) dB & \(24\) dB & \(25\) dB & \(\infty\) \\ \hline CNN & 0.692 & 0.788 & 0.793 & 0.858 & 0.920 \\ LCANet & 0.760 & 0.840 & 0.876 & 0.904 & 0.940 \\
**LCANet++** & **0.768** & **0.847** & **0.903** & **0.914** & **0.962** \\ \hline \hline ResNet18 & 0.847 & 0.920 & 0.942 & 0.955 & 0.970 \\ ResNet18\_LCA & 0.850 & 0.922 & 0.944 & 0.958 & 0.966 \\
**ResNet18\_LCA++** & **0.853** & **0.928** & **0.945** & **0.959** & **0.971** \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparisons against perturbations with Background Noise on waveforms
\begin{table}
\begin{tabular}{l c c c c c c} \hline Model & \(\epsilon=0\) & 0.01 & 0.02 & 0.03 & 0.04 & 0.05 \\ \hline CNN & 0.866 & 0.864 & 0.863 & 0.863 & 0.858 & 0.856 \\ LCANet & 0.939 & 0.938 & 0.935 & 0.925 & 0.909 & 0.883 \\
**LCANet++** & **0.950** & **0.943** & **0.939** & **0.927** & **0.914** & **0.900** \\ \hline \end{tabular}
\end{table}
Table 2: Performance comparisons against perturbations with Gaussian Noise on MFCCs
Figure 2: Comparisons of our LCANets++ and other SOTA models against perturbations with _background noise_.
### Adversarial Attacks
We experiment with adversarial attacks having different capabilities. For experimental purposes, we consider both the _white-box_ and _black-box_ adversarial attacks.
#### 4.2.1 White-box Attacks
In _white-box_ attacks, an adversary has greater capabilities, such as access to the model architecture, including model parameters, weights, and gradients. We consider two different types of _white-box_ attacks, i.e., _FGSM_[16] and _PGD_[17]. In both attacks, the adversary utilizes the gradients to perturb the MFCCs of the test sets so that they are misclassified during inference.
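A minimal FGSM sketch on MFCC inputs, following [16], is given below; the model and the cross-entropy loss are placeholders. PGD [17] iterates this step with a projection back into the \(\epsilon\)-ball.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, mfcc, label, eps):
    """One-step FGSM [16]: perturb the MFCCs along the sign of the gradient
    of the loss with respect to the input."""
    mfcc = mfcc.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(mfcc), label)
    loss.backward()
    return (mfcc + eps * mfcc.grad.sign()).detach()
```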
We present the performance of the regular CNN model, LCANets, and our proposed LCANets++ against the FGSM attack in Fig. 3a. We find that the regular CNN is not very robust: its performance goes down as the perturbation (\(\epsilon\)) grows. We observe that single-layer LCA networks, i.e., LCANets, are not robust against the _white-box_ FGSM attack, which is consistent with the findings in [13] for CV tasks. However, our proposed multi-layer LCANets++ outperform the CNN model and LCANets on audio classification against the FGSM attack. In Fig. 3a, we observe that the performance of LCANets++ decreases comparatively slowly as the attack becomes stronger with higher perturbations (\(\epsilon\)). We also experiment with another _white-box_ attack, i.e., the PGD attack, where LCANets++ consistently show more robustness than SOTA models and LCANets, as shown in Table 3.
#### 4.2.2 Black-box Attacks
We experiment with the _black-box_ evasion attack, where the adversary has no access to the model gradients. In this attack, an adversary only has query access to the model and can obtain predictions from it. In our setup, the adversary makes queries to the original target model, gets its predictions, and uses the input queries and predictions to train a surrogate model. The surrogate model generates perturbed samples with varying perturbations (\(\epsilon\)), and we test the performance of the original models on these perturbed test sets. We present the performance of the regular CNN, LCANet, and our proposed LCANet++ against the _black-box_ evasion attack in Fig. 3b. We observe that, under the _black-box_ evasion attack, LCANet shows more robustness than the CNN, and LCANet++ outperforms all models on the perturbed test sets (i.e., \(\epsilon>0\)). Note that models are trained for 20 epochs with three audio classes (i.e., limited samples), which might explain the significant performance gap between the regular CNN and the LCA-based models on the unperturbed test set (\(\epsilon=0\)) in Table 4.
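The surrogate-training step of this setup can be sketched as below; the surrogate architecture, the optimizer, and the query batch are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fit_surrogate(target_model, surrogate, queries, epochs=5, lr=1e-3):
    """Black-box setup: train a surrogate on (query, prediction) pairs
    obtained through query access only; attacks are then crafted on it."""
    opt = torch.optim.SGD(surrogate.parameters(), lr=lr)
    with torch.no_grad():
        labels = target_model(queries).argmax(dim=1)  # query-only access
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(surrogate(queries), labels)
        loss.backward()
        opt.step()
    return surrogate
```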
## 5 Conclusions
In this work, we developed CNNs with sparse coding in multiple layers, referred to as LCANets++. We showed that LCANets++ can be easily implemented on top of regular CNNs and larger models like ResNet18. Our empirical analysis shows that LCANets++ can be used in audio classifiers to increase robustness to noise and adversarial attacks relative to LCANets and standard CNNs. In addition, we observe how the unsupervised training with LCA and the number of LCA layers impact clean and robust test accuracy. Overall, our work sheds light on future directions in designing privacy-preserving robust audio classifiers.
## 6 Acknowledgements
We gratefully acknowledge support from the Advanced Scientific Computing Research (ASCR) program office in the Department of Energy's (DOE) Office of Science, award #77902, as well as the Center for Nonlinear Studies and the Cyber Summer School at Los Alamos National Laboratory.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & \(\epsilon=0\) & 0.01 & 0.02 & 0.03 & 0.04 & 0.05 \\ \hline CNN & 0.866 & 0.865 & 0.865 & 0.864 & 0.862 & 0.860 \\ LCANet & 0.939 & 0.939 & 0.935 & 0.926 & 0.908 & 0.880 \\
**LCANet++** & **0.950** & **0.948** & **0.939** & **0.928** & **0.915** & **0.905** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparisons against _black-box_ (Evasion) attack
Figure 3: Comparisons of LCANets++ and SOTA models on \(L_{\infty}\) norm _white-box_ attacks.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Attack & Model & \(\epsilon=0\) & 0.01 & 0.016 & 0.02 & 0.03 \\ \hline \multirow{3}{*}{FGSM} & CNN & 0.866 & 0.439 & 0.196 & 0.108 & 0.017 \\ & LCANet & 0.939 & 0.261 & 0.123 & 0.092 & 0.062 \\ & **LCANet++** & **0.950** & **0.679** & **0.418** & **0.417** & **0.414** \\ \hline \hline \multirow{3}{*}{PGD} & CNN & 0.866 & 0.382 & 0.147 & 0.073 & 0.025 \\ & LCANet & 0.939 & 0.028 & 0.005 & 0.005 & 0.005 \\ & **LCANet++** & **0.950** & **0.588** & **0.585** & **0.579** & **0.567** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparisons against _white-box_ attacks |
2302.00205 | Gradient Descent in Neural Networks as Sequential Learning in RKBS | The study of Neural Tangent Kernels (NTKs) has provided much needed insight
into convergence and generalization properties of neural networks in the
over-parametrized (wide) limit by approximating the network using a first-order
Taylor expansion with respect to its weights in the neighborhood of their
initialization values. This allows neural network training to be analyzed from
the perspective of reproducing kernel Hilbert spaces (RKHS), which is
informative in the over-parametrized regime, but a poor approximation for
narrower networks as the weights change more during training. Our goal is to
extend beyond the limits of NTK toward a more general theory. We construct an
exact power-series representation of the neural network in a finite
neighborhood of the initial weights as an inner product of two feature maps,
respectively from data and weight-step space, to feature space, allowing neural
network training to be analyzed from the perspective of reproducing kernel {\em
Banach} space (RKBS). We prove that, regardless of width, the training sequence
produced by gradient descent can be exactly replicated by regularized
sequential learning in RKBS. Using this, we present novel bound on uniform
convergence where the iterations count and learning rate play a central role,
giving new theoretical insight into neural network training. | Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh | 2023-02-01T03:18:07Z | http://arxiv.org/abs/2302.00205v1 | # Gradient Descent in Neural Networks as Sequential Learning in RKBS
###### Abstract
The study of Neural Tangent Kernels (NTKs) has provided much needed insight into convergence and generalization properties of neural networks in the over-parametrized (wide) limit by approximating the network using a first-order Taylor expansion with respect to its weights in the neighborhood of their initialization values. This allows neural network training to be analyzed from the perspective of reproducing kernel Hilbert spaces (RKHS), which is informative in the over-parametrized regime, but a poor approximation for narrower networks as the weights change more during training. Our goal is to extend beyond the limits of NTK toward a more general theory. We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights as an inner product of two feature maps, respectively from data and weight-step space, to feature space, allowing neural network training to be analyzed from the perspective of reproducing kernel _Banach_ space (RKBS). We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning in RKBS. Using this, we present novel bound on uniform convergence where the iterations count and learning rate play a central role, giving new theoretical insight into neural network training.
## 1 Introduction
The remarkable progress made in neural networks in recent decades has led to an explosion in their adoption in a wide swathe of applications. However this widespread success has also left unanswered questions, the most obvious of which is why non-convex, massively over-parameterised networks are able to perform so much better than predicted by traditional machine learning theory.
Neural tangent kernels represent an attempt to answer this question. As per (Jacot et al., 2018; Arora et al., 2019b), during training, the evolution of an over-parameterised neural network follows the kernel gradient of the functional cost with respect to a neural tangent kernel (NTK). It was shown that, for a sufficiently wide network with random weight initialisation, the NTK is effectively fixed, and results from machine learning in reproducing kernel Hilbert space (RKHS) can thus be brought to bear on the problem. This has led to a plethora of results analysing the convergence (Du et al., 2019; Allen-Zhu et al., 2019; Du et al., 2019; Zou et al., 2020; Zou and Gu, 2019) and generalisation (Arora et al., 2019b, a; Cao and Gu, 2019) properties of neural networks.
Despite their successes, NTK models are not without problems. As noted in (Bai and Lee, 2019), the expressive power of the linear approximation used by NTK is limited to that of the corresponding randomised feature space or RKHS, as evidenced by the observed gap between NTK predictions and actual performance. To break out of this regime, (Bai and Lee, 2019) proposed using a second or higher-order approximation of the network. Moreover it is natural to ask how well a linear approximation of the behaviour, constructed on the assumption of small weight-steps, will scale to larger weight steps in narrower networks.
To overcome these difficulties we replace the Taylor approximation used in NTK with an exact power series representation of the neural network in a finite neighbourhood around the initial weights. We demonstrate that this leads to a representation as an inner product between two feature maps, from data and weight-step space respectively. This structure underlies the construction of reproducing kernel Banach spaces (RKBS, (Lin et al., 2022)), allowing us to go on to show an equivalence between back-propagation and sequential learning in RKBS, which is similar to NTK but without the constraints of linearity, and to derive new bounds on uniform convergence for networks of arbitrary width.
## 2 Related Work
There has been a significant amount of work looking at uniform convergence behaviour of networks of different types using variety of assumptions during training (Neyshabur et al., 2015, 2018, 2019, 2017; Harvey et al., 2017; Bartlett et al., 2017; Golowich et al., 2018; Arora et al., 2018; Allen-Zhu et al., 2018; Draxler et al., 2018; Li and Liang, 2018; Nagarajan and Kolter, 2019a, b; Zhou et al., 2019).
The study of the connection between kernel methods and neural networks has a long history. (Neal, 1996) demonstrated that, in the infinite-width limit, iid randomly initialised single-layer networks converge to draws from a Gaussian process. This was extended to multi-layered neural networks in (Lee et al., 2018; Matthews et al., 2018) by assuming random weights up to (but not including) the output layer. Other works deriving approximate kernels by assuming random weights include (Rahimi and Benjamin, 2009; Bach, 2014, 2017; Daniely et al., 2016; Daniely, 2017).
Neural tangent kernels (Jacot et al., 2018; Arora et al., 2019b) are a more recent development. The basis of NTK is to approximate the behaviour of a neural network (for a given input \(\mathbf{x}\)) as the weights and biases vary about some initial values using a first-order Taylor approximation. This approximation is linear in the change in weights, and the coefficients of this approximation are functions of \(\mathbf{x}\) and may therefore be treated as a feature map, making the model amenable to the kernel trick and subsequent analysis in terms of RKHS theory. This approach may be generalised to higher order approximations (Bai and Lee, 2019), but the size of change in the weights that can be approximated remains limited except in the over-parametrised limit, where the variation of the weights becomes small.
Arc-cosine kernels (Cho and Saul, 2009) work on a similar premise. For activation functions of the form \(\tau(\xi)=(\xi)_{+}^{p}\), \(p\in\mathbb{N}\), in the infinite-width limit, arc-cosine kernels capture the feature map of the network. Depth is achieved by composition of kernels. However once again this approach is restricted to networks of infinite width, whereas our approach works for arbitrary networks.
Finally there has been some very recent work (Bartolucci et al., 2021; Sanders, 2020; Parhi and Nowak, 2021; Unser, 2021) in a similar vein to the current work, seeking to connect neural networks to RKBS theory. However these works consider only 1 and 2 layer networks (we consider networks of arbitrary depth), and no equivalence is established between the weight-steps found by back-propagation and those found by regularised learning in RKBS.
## 3 Notations
Let \(\mathbb{N}=\{0,1,2,\ldots\}\), \(\mathbb{N}_{n}=\{0,1,\ldots,n-1\}\). Vectors and matrices are denoted \(\mathbf{a}\) and \(\mathbf{A}\), respectively, with elements \(a_{i}\), \(A_{i,i^{\prime}}\), and columns \(\mathbf{A}_{:i}\), indexed by \(i,i^{\prime}\in\mathbb{N}\). We define:
\[\begin{array}{l}\|\mathbf{x}\|_{p}=(\sum_{i}|x_{i}|^{p})^{1/p},\|\mathbf{A}\| _{p,q}=\|[\|\mathbf{A}_{:i}\|_{p}]\|_{q}\\ \|\mathbf{x}\|_{\infty}=\max_{i}\{|x_{i}|\},\|\mathbf{x}\|_{-\infty}=\min_{i}\{ |x_{i}|\}\end{array}\]
\(\forall p,q\in[-\infty,0)\cup(0,\infty]\), which are norms if \(p,q\in[1,\infty]\). The Frobenius norm and inner product are \(\|\cdot\|_{F}=\|\cdot\|_{2,2}\) and \(\langle\mathbf{A},\mathbf{B}\rangle_{F}=\mathrm{Tr}(\mathbf{A}^{\mathrm{T}}\mathbf{B})\). The Kronecker and Hadamard products are \(\mathbf{a}\otimes\mathbf{b}\), \(\mathbf{a}\odot\mathbf{b}\), and the corresponding powers are \(\mathbf{a}^{\otimes q}=\mathbf{a}\otimes\mathbf{a}\otimes\cdots\otimes\mathbf{a}\) (\(q\) factors) and \(\mathbf{a}^{\odot q}=\mathbf{a}\odot\mathbf{a}\odot\cdots\odot\mathbf{a}\). The elementwise absolute value and sign are \(|\mathbf{a}|\), \(\mathrm{sgn}(\mathbf{a})\). For vectors \(\mathbf{a}\), \(\mathbf{b}\) we let \(\boldsymbol{\varrho}(\mathbf{a},\mathbf{b})=[\,a_{0}(\mathbf{b}^{\otimes 1})^{\mathrm{T}},a_{1}(\mathbf{b}^{\otimes 2})^{\mathrm{T}},\ldots\,]^{\mathrm{T}}\). \(\mathrm{diag}(\mathbf{A})\) is a vector containing the diagonal elements of \(\mathbf{A}\), and conversely \(\mathrm{diag}(\mathbf{a})\) is a diagonal matrix with diagonal elements from \(\mathbf{a}\).
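As a quick numerical illustration of the mixed norms defined above (a minimal NumPy sketch; the example matrix is arbitrary):

```python
import numpy as np

def norm_pq(A, p, q):
    """||A||_{p,q}: the l_q norm of the vector of column l_p norms."""
    return np.linalg.norm(np.linalg.norm(A, ord=p, axis=0), ord=q)

A = np.arange(6.0).reshape(2, 3)
# ||.||_{2,2} coincides with the Frobenius norm
print(norm_pq(A, 2, 2), np.linalg.norm(A, "fro"))
```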
We study fully connected \(D\)-layer neural networks \(\mathbf{f}:(\mathbb{X}\subset\mathbb{R}^{n})\rightarrow(\mathbb{Y}\subset \mathbb{R}^{m})\) with layer widths \(H^{[j]}\) (and \(H^{[-1]}=n\)) trained on a training set \(\{(\mathbf{x}^{(k)},\mathbf{y}^{(k)})\in\mathbb{X}\times\mathbb{Y}:k\in \mathbb{N}_{N}\}\). We use index range conventions \(k\in\mathbb{N}_{N}\), \(j\in\mathbb{N}_{D}\) and \(i_{j+1}\in\mathbb{N}_{H^{[j]}}\), and for clarity we write:
the superscript \(\{k\}\) for quantities relating to training vector \(k\), and the superscript \([j]\) for quantities relating to layer \(j\) (e.g. \(\tilde{\mathbf{x}}^{\{k\}[j]}\), \(\gamma^{\{k\}[j]}\)).
## 4 Reproducing Kernel Hilbert and Banach Spaces

Recall that a reproducing kernel Hilbert space (RKHS) \(\mathcal{H}\) on a set \(\mathbb{X}\) is a Hilbert space of functions \(f:\mathbb{X}\rightarrow\mathbb{R}\) in which all point evaluations are continuous, so that there exists a kernel \(K\) such that:
\[f\left(\mathbf{x}\right)=\left\langle f\left(\cdot\right),K\left(\mathbf{x}, \cdot\right)\right\rangle_{\mathcal{H}}\quad\forall f\in\mathcal{H}\]
Subsequently \(K(\mathbf{x},\mathbf{x}^{\prime})=\left\langle K(\mathbf{x},\cdot),K(\mathbf{x}^{\prime},\cdot)\right\rangle\) and, by the Moore-Aronszajn theorem, \(K\) is uniquely defined by \(\mathcal{H}\) and vice-versa. \(K\) is called the reproducing kernel, and the corresponding RKHS is denoted \(\mathcal{H}_{K}\). RKHS based approaches have gained popularity as they are well suited to many aspects of machine learning (Steinwart and Christmann, 2008; Shawe-Taylor and Cristianini, 2004). The inner product structure enables the kernel trick, and the kernel is readily understood as a similarity measure. Furthermore the structure of RKHSs has led to a rich framework of complexity analysis and generalisation bounds (Steinwart and Christmann, 2008; Shawe-Taylor and Cristianini, 2004). More recently neural tangent kernels were introduced (Jacot et al., 2018), allowing RKHS theory to be applied to neural network training in the over-parametrised regime.
In an effort to introduce a richer set of geometrical structures into RKHS theory, reproducing kernel Banach spaces (RKBSs) generalise RKHSs by starting with a Banach space of functions (Der and Lee, 2007; Zhang et al., 2009; Song et al., 2013; Xu and Ye, 2014; Lin et al., 2022) etc. Precisely:
**Definition 1** (Reproducing kernel Banach space (RKBS - (Lin et al., 2022))).: A reproducing kernel Banach space \(\mathcal{B}\) on a set \(\mathbb{X}\) is a Banach space of functions \(f:\mathbb{X}\rightarrow\mathbb{Y}\) such that every point evaluation \(\delta_{\mathbf{x}}:\mathcal{B}\rightarrow\mathbb{Y}\), \(\mathbf{x}\in\mathbb{X}\), on \(\mathcal{B}\) is continuous (so \(\forall\mathbf{x}\in\mathbb{X}\) \(\exists C_{\mathbf{x}}\in\mathbb{R}_{+}\) such that \(|\delta_{\mathbf{x}}(f)|=|f(\mathbf{x})|\leq C_{\mathbf{x}}\|f\|_{\mathcal{B}}\) \(\forall f\in\mathcal{B}\)).
There are several distinct approaches to RKBS construction. In the present context however we find the approach of (Lin et al., 2022, Theorem 2.1) most convenient. Given the components outlined in Figure 1, and assuming that \(\mathbf{\Phi}_{\mathcal{O}}(\mathbb{X})\) is dense in \(\mathcal{X}_{\mathcal{O}}\) and that \(\mathbf{\Psi}_{\mathcal{O}}(\mathbb{W}_{\mathcal{O}})\) is dense in \(\mathcal{W}_{\mathcal{O}}\), we define the reproducing kernel Banach space \(\mathcal{B}_{\mathcal{O}}\) on \(\mathbb{X}\) as:
\[\begin{array}{l}\mathcal{B}_{\mathcal{O}}=\left\{\left.\left\langle\mathbf{ \Phi}_{\mathcal{O}}\left(\cdot\right),\mathbf{\Omega}\right\rangle_{\mathcal{ X}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}}\right|\mathbf{\Omega}\in \mathcal{W}_{\mathcal{O}}\right\}\\ \text{where: }\left\|\left\langle\mathbf{\Phi}_{\mathcal{O}}\left(\cdot\right), \mathbf{\Omega}\right\rangle_{\mathcal{X}_{\mathcal{O}}\times\mathcal{W}_{ \mathcal{O}}}\right\|_{\mathcal{B}_{\mathcal{O}}}=\left\|\mathbf{\Omega} \right\|_{\mathcal{W}_{\mathcal{O}}}\end{array} \tag{1}\]
with reproducing Banach kernel:
\[K_{\mathcal{O}}\left(\mathbf{x},\mathbf{W}_{\Delta}\right)=\left\langle \mathbf{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right),\mathbf{\Psi}_{\mathcal{O }}\left(\mathbf{W}_{\Delta}\right)\right\rangle_{\mathcal{X}_{\mathcal{O}} \times\mathcal{W}_{\mathcal{O}}} \tag{2}\]
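In finite dimensions, Eq. (2) is just the pairing of two feature maps via the bilinear form \(\langle\mathbf{\Xi},\mathbf{\Omega}\rangle=\mathrm{diag}(\mathbf{\Xi}^{\mathrm{T}}\mathbf{\Omega})\). The toy sketch below uses random finite-dimensional stand-ins for \(\mathbf{\Phi}_{\mathcal{O}}\) and \(\mathbf{\Psi}_{\mathcal{O}}\); the true maps constructed in Section 6 are infinite-dimensional.

```python
import numpy as np

rng = np.random.default_rng(0)
F_DIM, M = 16, 2                      # feature dimension and output dimension

# hypothetical linear feature maps X -> R^{F x m} and W -> R^{F x m}
A = rng.normal(size=(F_DIM, M, 3))    # acts on inputs x in R^3
B = rng.normal(size=(F_DIM, M, 5))    # acts on weight-steps w in R^5

Phi = lambda x: A @ x                 # (F, m) matrix of features of x
Psi = lambda w: B @ w                 # (F, m) matrix of features of w

def K(x, w_delta):
    """Banach kernel of Eq. (2): <Phi(x), Psi(w)> = diag(Phi^T Psi)."""
    return np.diag(Phi(x).T @ Psi(w_delta))

print(K(rng.normal(size=3), rng.normal(size=5)))   # m-dimensional output
```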
## 5 Setup and Assumptions
We assume a fully-connected, \(D\)-layer feedforward neural network \(\mathbf{f}:(\mathbb{X}\subseteq\mathbb{R}^{n})\rightarrow(\mathbb{Y}\subseteq \mathbb{R}^{m})\) with layers of widths \(H^{[0]},H^{[1]},\ldots,H^{[D-1]}\), where \(H^{[D-1]}=m\) and we define \(H^{[-1]}=n\). We assume layer \(j\in\mathbb{N}_{D}\) (\(j\in\mathbb{N}_{D}\) throughout) contains only neurons with activation function \(\tau^{[j]}:\mathbb{R}\rightarrow\mathbb{R}\). The network is defined recursively \(\forall j\in\mathbb{N}_{D}\):
\[\begin{array}{l}\mathbf{f}\left(\mathbf{x}\right)=\mathbf{x}^{[D]}\in\mathbb{R}^{H^{[D-1]}}\\ \mathbf{x}^{[j+1]}=\tau^{[j]}(\tilde{\mathbf{x}}^{[j]})\in\mathbb{R}^{H^{[j]}}\\ \tilde{\mathbf{x}}^{[j]}=\frac{1}{\sqrt{H^{[j-1]}}}\mathbf{W}^{[j]\mathrm{T}}\mathbf{x}^{[j]}+\alpha^{[j]}\mathbf{b}^{[j]}\in\mathbb{R}^{H^{[j]}}\\ \mathbf{x}^{[0]}=\mathbf{x}\in\mathbb{X}\subset\mathbb{R}^{H^{[-1]}}\ (H^{[-1]}=n)\end{array} \tag{3}\]
where \(\mathbf{W}^{[j]}\in\mathbb{R}^{H^{[j-1]}\times H^{[j]}}\) and \(\mathbf{b}^{[j]}\in\mathbb{R}^{H^{[j]}}\) are weights and biases, which we summarise as \(\mathbf{W}\in\mathbb{W}\), and \(\alpha^{[j]}\in\mathbb{R}_{+}\) are fixed. The set of functions of this form is denoted \(\mathcal{F}\).
We assume the goal of training is to take a training set and find weights and biases to minimise the empirical risk:
\[\begin{array}{l}\mathbf{W}^{*}=\operatorname*{argmin}_{\begin{subarray}{c} \mathbf{W}\in\mathbb{W}\\ R_{E}\left(\mathbf{W},\mathbb{D}\right)=\sum_{k}E\left(\mathbf{x}^{\{k\}}, \mathbf{y}^{\{k\}},\mathbf{f}_{\mathbf{W}}\left(\mathbf{x}^{\{k\}}\right) \right)\end{subarray}}\end{array} \tag{4}\]
where \(\mathbf{f}_{\mathbf{W}}\) is a network of the form (3) with weights and biases \(\mathbf{W}\), \(\mathbb{D}=\{(\mathbf{x}^{\{k\}},\mathbf{y}^{\{k\}})\in\mathbb{X}\times \mathbb{Y}:k\in\mathbb{N}_{N}\}\) is a training set (\(k\in\mathbb{N}_{N}\) throughout), and \(E:\mathbb{X}\times\mathbb{Y}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) is an error function defining the purpose of the network.
We make the following technical assumptions:
1. Input space: \(\mathbb{X}=[-1,1]^{n}\).
2. Error function: \(E:\mathbb{X}\times\mathbb{Y}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) is \(\mathcal{C}^{1}\) and \(L_{E}\)-Lipschitz in its third argument.
3. Activation functions: for all \(j\in\mathbb{N}_{D}\), \(\tau^{[j]}:\mathbb{R}\rightarrow[-1,1]\) is bounded, \(\mathcal{C}^{\infty}\), and has a power-series representation with region of convergence (ROC) at least \(\rho^{[j]}\in\mathbb{R}_{+}\) around all \(z\in\mathbb{R}\).
4. Weight non-triviality: for all \(j\in\mathbb{N}_{D}\), \(\mathbf{W}^{[j]}\neq\mathbf{0}\) at all times during training.1 Footnote 1: Networks that do not meet this requirement have a constant output independent of input \(\mathbf{x}\). We do not consider this a restrictive assumption as it is highly unlikely that a randomly initialised network trained with a typical training set will ever reach this state.
5. Weight initialisation: we assume LeCun initialisation, so for all \(j\in\mathbb{N}_{D}\), \(W^{[j]}_{i_{j},i_{j+1}},b^{[j]}_{i_{j+1}}\sim\mathcal{N}(0,1)\).
6. Training: we assume training is gradient descent (back-propagation) with learning rate \(\eta\in\mathbb{R}_{+}\).
### Back-Propagation Training
As stated above, we assume the network is trained using back-propagation (gradient descent) [11]. This is an iterative approach. An iteration starts with initial weights and biases \(\mathbf{W}_{\mathcal{O}}\in\mathbb{W}\). A weight-step:
\[\mathbf{W}_{\Delta}^{\boxplus}=-\eta\;\frac{\partial}{\partial\mathbf{W}} \sum_{k}E\left(\mathbf{x}^{\{k\}},\mathbf{y}^{\{k\}},\mathbf{f}_{\mathbf{W}} \left(\mathbf{x}^{\{k\}}\right)\right)\big{|}_{\mathbf{W}=\mathbf{W}_{ \mathcal{O}}}\]
is calculated, and weights and biases are updated as \(\mathbf{W}=\mathbf{W}_{\mathcal{O}}+\mathbf{W}_{\Delta}^{\boxplus}\). Our notational convention for activations before an iteration, and the subsequent change due to a weight step, are given in figure 2. The weight-step is [11] (see appendix B for a derivation):
\[\begin{array}{l}\mathbf{W}_{\Delta:i_{j+1}}^{[j]\boxplus}=-\frac{\eta}{\sqrt{H^{[D-1]}H^{[D-2]}\dots H^{[j+1]}}}\sum_{k}\gamma_{\mathcal{O}i_{j+1}}^{\{k\}[j]}\frac{\mathbf{x}_{\mathcal{O}}^{\{k\}[j]}}{\sqrt{H^{[j]}}}\\ b_{\Delta i_{j+1}}^{[j]\boxplus}=-\frac{\eta}{\sqrt{H^{[D-1]}H^{[D-2]}\dots H^{[j+1]}}}\sum_{k}\gamma_{\mathcal{O}i_{j+1}}^{\{k\}[j]}\alpha^{[j]}\end{array} \tag{5}\]
for all \(j\in\mathbb{N}_{D},i_{j+1}\) where, recursively \(\forall j\in\mathbb{N}_{D-1}\):
\[\begin{array}{l}\gamma_{\mathcal{O}i_{j}}^{\{k\}[j-1]}=\sum_{i_{j+1}}\gamma_{\mathcal{O}i_{j+1}}^{\{k\}[j]}W_{\mathcal{O}i_{j},i_{j+1}}^{[j]}\tau^{[j-1](1)}\left(\tilde{x}_{\mathcal{O}i_{j}}^{\{k\}[j-1]}\right)\\ \gamma_{\mathcal{O}i_{D}}^{\{k\}[D-1]}=\nabla_{i_{D}}E\left(\mathbf{x}^{\{k\}},\mathbf{y}^{\{k\}},\mathbf{f}_{\mathcal{O}}\left(\mathbf{x}^{\{k\}}\right)\right)\tau^{[D-1](1)}\left(\tilde{x}_{\mathcal{O}i_{D}}^{\{k\}[D-1]}\right)\end{array}\]
Note that the change in bias \(\mathbf{b}_{\Delta}^{[j]}\) is proportional to \(\alpha^{[j]}\).
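For concreteness, the back-propagation recursion just given can be sketched for a small network as below; the tanh activation, the squared-error loss, and the placement of the \(1/\sqrt{H}\) normalisation inside the recursion are illustrative simplifications of the exact bookkeeping in Eq. (5).

```python
import numpy as np

def backprop_weight_step(Ws, bs, alphas, x, y, eta=0.1):
    """One gradient-descent weight-step for the network of Eq. (3), with
    tanh activations and E = 0.5 * ||f(x) - y||^2 (illustrative choices)."""
    xs, zs = [x], []                              # forward pass (Eq. 3)
    for W, b, a in zip(Ws, bs, alphas):
        z = W.T @ xs[-1] / np.sqrt(W.shape[0]) + a * b
        zs.append(z)
        xs.append(np.tanh(z))
    # backward recursion for gamma, starting at the output layer
    gamma = (xs[-1] - y) * (1 - np.tanh(zs[-1]) ** 2)
    steps = []
    for j in reversed(range(len(Ws))):
        dW = -eta * np.outer(xs[j], gamma) / np.sqrt(Ws[j].shape[0])
        db = -eta * alphas[j] * gamma
        steps.append((dW, db))
        if j > 0:
            gamma = (Ws[j] @ gamma / np.sqrt(Ws[j].shape[0])) \
                    * (1 - np.tanh(zs[j - 1]) ** 2)
    return steps[::-1]

# tiny example: a 2 -> 3 -> 1 network
Ws = [np.random.randn(2, 3), np.random.randn(3, 1)]
bs = [np.random.randn(3), np.random.randn(1)]
steps = backprop_weight_step(Ws, bs, [1.0, 1.0], np.ones(2), np.zeros(1))
```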
## 6 Analysis of a Single Iteration
In the first phase of our analysis we consider the change in neural network behaviour resulting from a small change in weights and biases (a weight-step). The overall training sequence is readily extrapolated from this as per section 7. Our first goal is to rewrite the neural network after a training iteration as:
\[\mathbf{f}\left(\mathbf{x}\right)=\mathbf{f}_{\mathcal{O}}\left(\mathbf{x} \right)+\mathbf{f}_{\Delta}\left(\mathbf{x}\right) \tag{6}\]
where \(\mathbf{f}_{\mathcal{O}}=\mathbf{f}_{\mathbf{W}_{\mathcal{O}}}:(\mathbb{X} \subset\mathbb{R}^{n})\rightarrow(\mathbb{Y}\subset\mathbb{R}^{m})\) is the neural network before the iteration and \(\mathbf{f}_{\Delta}:(\mathbb{X}\subset\mathbb{R}^{n})\rightarrow\mathbb{R}^{m}\) is the change in network behaviour due to the change \(\mathbf{W}=\mathbf{W}_{\mathcal{O}}\rightarrow\mathbf{W}=\mathbf{W}_{ \mathcal{O}}+\mathbf{W}_{\Delta}\) in weights and biases for this iteration, as detailed in Figure 2, so that:
\[\begin{array}{l}\mathbf{f}_{\Delta}\left(\mathbf{x}\right)=\left\langle\mathbf{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right),\mathbf{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}\right)\right\rangle_{\mathcal{X}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}}\\ \text{where:}\quad\mathbf{\Phi}_{\mathcal{O}}:\mathbb{X}\rightarrow\mathcal{X}_{\mathcal{O}}=\mathrm{span}\left(\mathbf{\Phi}_{\mathcal{O}}\left(\mathbb{X}\right)\right)\subset\mathbb{R}^{\infty\times m}\\ \mathbf{\Psi}_{\mathcal{O}}:\mathbb{W}_{\mathcal{O}}\rightarrow\mathcal{W}_{\mathcal{O}}=\mathrm{span}\left(\mathbf{\Psi}_{\mathcal{O}}\left(\mathbb{W}_{\mathcal{O}}\right)\right)\subset\mathbb{R}^{\infty\times m}\end{array} \tag{7}\]
are feature maps determined entirely by the structure of the network (number and width of layers, activation functions) and the initial weights and biases \(\mathbf{W}_{\mathcal{O}}\); and \(\left\langle\mathbf{\Xi},\mathbf{\Omega}\right\rangle_{\mathcal{X}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}}=\mathrm{diag}\left(\mathbf{\Xi}^{\mathrm{T}}\mathbf{\Omega}\right)\) is a bilinear form. Subsequently our second goal is to derive kernels and norms from these feature maps to allow us to study their convergence properties, and we finish by proving an equivalence between the weight-step \(\mathbf{W}_{\Delta}^{\boxplus}\) due to a single step of back-propagation and the analogous weight-step that minimises the (RKBS) regularised risk.
### Contribution 1: Feature-Map Expansion
In this section we derive appropriate feature maps to express the change in neural network behaviour for a finite weight-step. Our approach is simple in principle but technical, so details are reserved for appendix B. Roughly speaking however, we begin by noting that, for a smooth activation function \(\tau^{[j]}:\mathbb{R}\rightarrow\mathbb{R}\), \(z\in\mathbb{R}\) and finite-dimensional vectors \(\mathbf{c},\mathbf{c}^{\prime}\) whose inner product lies within the radius of convergence \(\rho^{[j]}\) (so that \(|\left\langle\mathbf{c},\mathbf{c}^{\prime}\right\rangle|<\rho^{[j]}\)), the power-series representation of \(\tau^{[j]}\) about \(z\) can be written \(\tau^{[j]}(z+\left\langle\mathbf{c},\mathbf{c}^{\prime}\right\rangle)=\tau^{[j]}(z)+\left\langle\boldsymbol{\varrho}(\mathbf{g}^{[j]}(z),\mathbf{c}),\boldsymbol{\varrho}(\mathbf{1}_{\infty},\mathbf{c}^{\prime})\right\rangle\), where:
\[\begin{array}{l}\boldsymbol{\varrho}\left(\mathbf{a},\mathbf{d}\right)=\left[ \begin{array}{c}a_{0}\mathbf{d}^{\otimes 1\mathrm{T}}\;a_{1}\mathbf{d}^{\otimes 2\mathrm{T}}\;a_{2} \mathbf{d}^{\otimes 3\mathrm{T}}\;\dots\end{array}\right]^{\mathrm{T}}\\ \mathbf{g}^{[j]}\left(z\right)=\left[\begin{array}{c}\frac{1}{1!}\tau^{[j](1 )}\left(z\right)\;\frac{1}{2!}\tau^{[j](2)}\left(z\right)\;\frac{1}{3!}\tau^{[ j](3)}\left(z\right)\;\dots\end{array}\right]^{\mathrm{T}}\end{array}\]
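Because \(\langle\mathbf{c}^{\otimes l},\mathbf{c}^{\prime\otimes l}\rangle=\langle\mathbf{c},\mathbf{c}^{\prime}\rangle^{l}\), the inner product of the two \(\boldsymbol{\varrho}\)-vectors collapses to an ordinary scalar Taylor series, which is easy to verify numerically. The sketch below does this for \(\tau=\tanh\); the truncation order, evaluation point and test vectors are arbitrary choices of ours.

```python
import numpy as np
from math import factorial

def tanh_derivs(z, L):
    """tau^(l)(z) for tanh, l = 0..L, via polynomials in t = tanh(z)
    (chain rule with dt/dz = 1 - t^2)."""
    from numpy.polynomial import polynomial as P
    polys = [np.array([0.0, 1.0])]             # tau^(0) as a polynomial in t
    for _ in range(L):
        polys.append(P.polymul(P.polyder(polys[-1]), np.array([1.0, 0.0, -1.0])))
    t = np.tanh(z)
    return np.array([P.polyval(t, p) for p in polys])

z, L = 0.3, 30
c = np.array([0.2, -0.1, 0.05])
cp = np.array([0.1, 0.3, -0.2])                # |<c,c'>| must stay below pi/2
ip = c @ cp
d = tanh_derivs(z, L)
lhs = np.tanh(z + ip)
rhs = d[0] + sum(d[l] / factorial(l) * ip ** l for l in range(1, L + 1))
print(lhs, rhs)                                # agree to truncation error
```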
Figure 2: Definition of terms for neural network before and after an iteration.
Given an input \(\mathbf{x}\), starting at layer \(0\) and working forward, and with reference to Figure 2, we can write the change \(\mathbf{x}_{\Delta}^{[1]}\) in the output of layer \(0\) due to the weight-step \(\mathbf{W}_{\Delta}\) as:
\[\begin{split}\mathbf{x}_{\Delta}^{[1]}&=\left[\left\langle\boldsymbol{\Phi}_{\mathcal{O};i_{1}}^{[0]}\left(\mathbf{x}\right),\boldsymbol{\Psi}_{\mathcal{O};i_{1}}^{[0]}\left(\mathbf{W}_{\Delta}\right)\right\rangle\right]_{i_{1}}\\ \text{where: }\boldsymbol{\Phi}_{\mathcal{O};i_{1}}^{[0]}\left(\mathbf{x}\right)&=\boldsymbol{\varrho}\left(\mathbf{g}^{[0]}\left(\tilde{x}_{\mathcal{O}i_{1}}^{[0]}\right),\mu_{i_{1}}^{[0]}\left[\begin{array}{c}\frac{1}{\sqrt{2}}\alpha^{[0]}\\ \frac{1}{\sqrt{2}}\mathbf{x}\end{array}\right]\right)\\ \boldsymbol{\Psi}_{\mathcal{O};i_{1}}^{[0]}\left(\mathbf{W}_{\Delta}\right)&=\boldsymbol{\varrho}\left(\mathbf{1}_{\infty},\frac{1}{\mu_{i_{1}}^{[0]}}\left[\begin{array}{c}\sqrt{2}b_{\Delta i_{1}}^{[0]}\\ \sqrt{2}\mathbf{W}_{\Delta:,i_{1}}^{[0]}\end{array}\right]\right)\end{split} \tag{8}\]
where we note that both feature maps have a finite radius of convergence. The feature maps are parameterised by the scale factors \(\mu_{i_{1}}^{[0]}\in\mathbb{R}_{+}\), whose role is mainly technical, insofar as they will allow us to show equivalence between RKBS regularised risk minimisation and back-propagation.2 Their exact value (beyond existence) is unimportant here.
Footnote 2: See appendix for more discussion.
The process is repeated for subsequent layers (see appendix B for details). After working through all layers:
\[\mathbf{f}_{\Delta}\left(\mathbf{x}\right)=\left\langle\boldsymbol{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right),\boldsymbol{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}\right)\right\rangle_{\mathcal{X}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}} \tag{9}\]
where \(\boldsymbol{\Phi}_{\mathcal{O}}=\boldsymbol{\Phi}_{\mathcal{O}}^{[D-1]}\), \(\boldsymbol{\Psi}_{\mathcal{O}}=\boldsymbol{\Psi}_{\mathcal{O}}^{[D-1]}\), and \(\forall j\in\mathbb{N}_{D}\backslash\{0\}\):
(10)
recursively \(\forall j\in\mathbb{N}_{D}\backslash\{0\}\). These are parameterised by scale factors \(\mu_{i_{j+1}}^{[j]}\in\mathbb{R}_{+}\) and shadow weights \(\omega_{i_{j}}^{[j]},\widetilde{\omega}_{i_{j},i_{j+1}}^{[j]}\in\mathbb{R}_{+}\), which play a role in the equivalence between RKBS regularised risk minimisation and back-propagation (otherwise their exact values are unimportant).
### Contribution 2: Induced Kernels and Norms
In the previous section we established that, as a result of a single weight-step \(\mathbf{W}_{\Delta}\), we can write:
\[\begin{split}\mathbf{f}\left(\mathbf{x}\right)&=\mathbf{f}_{\mathcal{O}}\left(\mathbf{x}\right)+\mathbf{f}_{\Delta}\left(\mathbf{x}\right)\\ \mathbf{f}_{\Delta}\left(\mathbf{x}\right)&=\left\langle\boldsymbol{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right),\boldsymbol{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}\right)\right\rangle_{\mathcal{X}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}}\end{split}\]
where \(\mathbf{f}_{\mathcal{O}}:\mathbb{X}\rightarrow\mathbb{Y}\) is the neural network pre-iteration and \(\boldsymbol{\Phi}_{\mathcal{O}}\), \(\boldsymbol{\Psi}_{\mathcal{O}}\) are the feature maps (8-10). Using these, we induce kernels on \(\mathbb{X}\), \(\mathbb{W}_{\mathcal{O}}\) using the kernel trick:
\[\begin{split}\mathbf{K}_{\mathcal{X}_{\mathcal{O}}}\left(\mathbf{x},\mathbf{x}^{\prime}\right)&=\left\langle\boldsymbol{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right),\boldsymbol{\Phi}_{\mathcal{O}}\left(\mathbf{x}^{\prime}\right)\right\rangle_{\mathcal{X}_{\mathcal{O}}\times\mathcal{X}_{\mathcal{O}}}\\&=\boldsymbol{\Phi}_{\mathcal{O}}^{\mathrm{T}}\left(\mathbf{x}\right)\boldsymbol{\Phi}_{\mathcal{O}}\left(\mathbf{x}^{\prime}\right)\\ \mathbf{K}_{\mathcal{W}_{\mathcal{O}}}\left(\mathbf{W}_{\Delta},\mathbf{W}_{\Delta}^{\prime}\right)&=\left\langle\boldsymbol{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}\right),\boldsymbol{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}^{\prime}\right)\right\rangle_{\mathcal{W}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}}\\&=\boldsymbol{\Psi}_{\mathcal{O}}^{\mathrm{T}}\left(\mathbf{W}_{\Delta}\right)\boldsymbol{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}^{\prime}\right)\end{split}\]
We call these kernels _neural neighbourhood kernels_ (NNK) as they describe the similarity structure in the finite neighbourhood of \(\mathbf{W}_{\mathcal{O}}\) (c.f. the NTK, which is the behaviour tangent to, or in the infinitesimal neighbourhood of, \(\mathbf{W}_{\mathcal{O}}\)). These matrix-valued kernels are symmetric and positive definite by construction, and could potentially be used (transferred) in support vector machines (SVMs) or similar kernel-based methods, measuring similarity on \(\mathbb{X}\) and \(\mathbb{W}_{\mathcal{O}}\), respectively. Similarly, we induce a Banach kernel:
\[\begin{split}\mathbf{K}_{\mathcal{O}}\left(\mathbf{x},\mathbf{W}_{\Delta}^{\prime}\right)&=\left\langle\boldsymbol{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right),\boldsymbol{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}^{\prime}\right)\right\rangle_{\mathcal{X}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}}\\&=\operatorname{diag}\left(\mathbf{f}_{\Delta}\left(\mathbf{x}\right)\right)\end{split} \tag{11}\]
which is trivially the change in the network output \(\mathbf{f}_{\Delta}(\mathbf{x})\) under weight-step \(\mathbf{W}_{\Delta}\) for input vector \(\mathbf{x}\), diagonalised.
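As a small illustration of the kernel trick applied to these feature maps, the sketch below (reusing tanh_derivs from the previous block) evaluates a truncated single-neuron kernel \(K(\mathbf{x},\mathbf{x}^{\prime})=\langle\boldsymbol{\Phi}^{[0]}(\mathbf{x}),\boldsymbol{\Phi}^{[0]}(\mathbf{x}^{\prime})\rangle\); it assumes our reading of (8) with \(\mathbf{c}(\mathbf{x})=\mu[\alpha/\sqrt{2};\mathbf{x}/\sqrt{2}]\), and the weights, \(\mu\) and \(\alpha\) values are placeholders.

```python
import numpy as np
from math import factorial

def nnk_layer0(x, xp, w0, b0, alpha=1.0, mu=0.5, L=25):
    """Truncated layer-0 kernel for a single neuron: the Kronecker powers in
    rho collapse to powers of <mu c(x), mu c(x')>."""
    z, zp = w0 @ x + alpha * b0, w0 @ xp + alpha * b0     # pre-activations
    fact = np.array([factorial(l) for l in range(1, L + 1)], dtype=float)
    g = tanh_derivs(z, L)[1:] / fact
    gp = tanh_derivs(zp, L)[1:] / fact
    ip = mu ** 2 * (alpha ** 2 / 2.0 + x @ xp / 2.0)      # must stay < pi/2
    return np.sum(g * gp * ip ** np.arange(1, L + 1))
```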
The precise form of the neural neighbourhood kernels is rather complicated (the derivation is straightforward but the resulting recursive equations are very long). The NNK is the (un-approximated) analog of the NTK. However, while neural networks evolve - to first order - in the RKHS defined by the NTK, the same is not true of the NNK. Indeed, it is not difficult to see that _RKHS regularisation using the NNK will always result in a weight vector that is not the image of a weight-step under \(\boldsymbol{\Psi}_{\mathcal{O}}\) (i.e. RKHS theory is insufficient, and we need RKBS theory to proceed)_. Thus, as the NNK is not the main focus of our paper, we will not reproduce it in the body of the paper - the interested reader can find it in appendix C. Using these induced kernels we may obtain expressions for the norms of the images of \(\mathbb{X}\) and \(\mathbb{W}_{\mathcal{O}}\) in feature space:
\[\left\|\boldsymbol{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right) \right\|_{\mathcal{X}_{\mathcal{O}}}^{2} =\left\|\boldsymbol{\Phi}_{\mathcal{O}}^{[D-1]}\left(\mathbf{x} \right)\right\|_{F}^{2} \tag{12}\] \[=\mathrm{Tr}\left(\mathbf{K}_{\mathcal{X}_{\mathcal{O}}}\left( \mathbf{x},\mathbf{x}\right)\right)\] \[\left\|\boldsymbol{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta} \right)\right\|_{\mathcal{W}_{\mathcal{O}}}^{2} =\left\|\boldsymbol{\Psi}_{\mathcal{O}}^{[D-1]}\left(\mathbf{W}_{ \Delta}\right)\right\|_{F}^{2}\] \[=\mathrm{Tr}\left(\mathbf{K}_{\mathcal{W}_{\mathcal{O}}}\left( \mathbf{W}_{\Delta},\mathbf{W}_{\Delta}\right)\right)\]
Once again, the precise form of these expressions is complicated (the derivation is straightforward but the answer is very long - details can be found in appendix C). The importance of these norms lies in deriving conditions on convergence of the feature maps. Defining the helper function:
\[\overline{\sigma}^{[j]}\left(\zeta\right)=\max_{z\in\mathbb{R}_{+}\cup\{0\}}\left\{\sum_{l=1}^{\infty}\left(\frac{1}{l!}\right)^{2}\left(\tau^{[j](l)}\left(z\right)\right)^{2}\zeta^{l}\right\} \tag{13}\]
and the constants:
\[s^{[j]2}=\left\{\begin{array}{ll}\frac{1}{2}\alpha^{[j]2}+\frac{1}{2}\frac{n}{H^{[0]}}&\text{if }j=0\\ \frac{1}{2}\alpha^{[j]2}+\frac{H^{[j-1]}}{H^{[j]}}&\text{otherwise}\end{array}\right. \tag{14}\]
which is, loosely speaking, a surrogate for the expansion/contraction (fanout) in width from layer \(j\) to layer \(j-1\) (the size of the weight-step at layer \(j\) is measured by the quantities \(t_{\Delta}^{[j]}\) appearing below); in appendix C.6 we derive the following Lemmas (see corresponding proofs of Theorems 5 and 6 in the appendix):
**Lemma 1**.: _Let \(\epsilon_{\phi}^{[j]}\in(0,1)\)\(\forall j\in\mathbb{N}_{D}\). For a given neural network and initial weights \(\mathbf{W}_{\mathcal{O}}\) define \(\frac{\phi^{[j]}}{H^{[j]}}=\overline{\sigma}^{[j]}((1-\epsilon_{\phi}^{[j]}) \sqrt{\rho^{[j]}})\)\(\forall j\in\mathbb{N}_{D}\). If the scale factors satisfy:_
\[\mu_{ij+1}^{[j]2} \leq\]
\(\forall j\in\mathbb{N}_{D},i_{j+1}\) _then \(\left\|\boldsymbol{\Phi}_{\mathcal{O}}^{[j]}(\mathbf{x})\right\|_{F}^{2}\leq \phi^{[j]}\)\(\forall\mathbf{x}\in\mathbb{X}\)._
**Lemma 2**.: _Let \(\epsilon_{\psi}^{[j]}\in(0,1)\)\(\forall j\in\mathbb{N}_{D}\). For a given neural network and initial weights \(\mathbf{W}_{\mathcal{O}}\) and weight-step \(\mathbf{W}_{\Delta}\), if:_
\[\begin{array}{l}t_{\Delta i_{1}}^{[0]2}\leq\left(1-\epsilon_{\psi}^{[0]}\right)\mu_{i_{1}}^{[0]2}\\ t_{\Delta i_{j+1}}^{[j]2}+\sum_{i_{j}}\frac{\left\|\boldsymbol{\Psi}_{\mathcal{O}}^{[j-1]}\left(\mathbf{W}_{\Delta}\right)\right\|_{F}^{2}}{\omega_{i_{j}}^{[j]2}}\left(\widetilde{\omega}_{i_{j},i_{j+1}}^{[j]2}+W_{\Delta i_{j},i_{j+1}}^{[j]2}\right)\leq\left(1-\epsilon_{\psi}^{[j]}\right)\mu_{i_{j+1}}^{[j]2}\end{array}\]
\(\forall j\in\mathbb{N}_{D},i_{j+1}\) _then \(\left\|\boldsymbol{\Psi}_{\mathcal{O}}^{[j]}(\mathbf{W}_{\Delta})\right\|_{F}^{2}\leq H^{[j]}\frac{1-\epsilon_{\psi}^{[j]}}{\epsilon_{\psi}^{[j]}}\)._
Lemma 1 allows us to place bounds on the scale factors to ensure that the feature map \(\boldsymbol{\Phi}_{\mathcal{O}}^{[j]}:\mathbb{X}\rightarrow\mathcal{X}_{\mathcal{O}}\) is finite (well defined) for all \(\mathbf{x}\in\mathbb{X}\). Lemma 2 is similar, but rather than bounding the scale factors it takes these as given (by Lemma 1) and places bounds on the size of the weight-step \(\mathbf{W}_{\Delta}\) for which the feature map \(\boldsymbol{\Psi}_{\mathcal{O}}^{[j]}:\mathbb{W}_{\mathcal{O}}\rightarrow\mathcal{W}_{\mathcal{O}}\) is finite (well defined). Taken together, therefore, they give some bound on the size of weight-step that can be modelled for a given neural network structure and initial weights \(\mathbf{W}_{\mathcal{O}}\). However they say nothing directly about the shadow weights. In the next section we use these Lemmas to establish a link between the weight-step generated by gradient descent and learning in RKBS, as well as clarifying the size of weight-step which we can model using our construct.
### Contribution 3: Equivalence of Gradient Descent and regularised Risk Minimisation in RKBS
In our previous contributions we showed that the change \(\mathbf{f}_{\Delta}\) in neural network behaviour can be represented in the form (9) with feature maps (8,10), derived kernels from these feature maps, and gave bounds on the size of the weight-step for which this representation is valid. In this section we use these results to establish a link between gradient descent learning in neural networks and regularised risk minimisation in reproducing kernel Banach space (RKBS).
To begin, it is not difficult to see that the feature maps \(\mathbf{\Phi}_{\mathcal{O}}:\mathbb{X}\rightarrow\mathcal{X}_{\mathcal{O}}\) and \(\mathbf{\Psi}_{\mathcal{O}}:\mathbb{W}_{\mathcal{O}}\rightarrow\mathcal{W}_{\mathcal{O}}\) define an RKBS \(\mathcal{B}_{\mathcal{O}}\) imbued with \(\|\cdot\|_{\mathcal{B}_{\mathcal{O}}}=\|\cdot\|_{F}\) using (1) with the reproducing Banach kernel (11) deriving from (2):3
Footnote 3: In the appendix we prove that these maps satisfy the relevant density requirements.
\[\mathbf{K}_{\mathcal{O}}\left(\mathbf{x},\mathbf{W}_{\Delta}^{ \prime}\right) =\left\langle\mathbf{\Phi}_{\mathcal{O}}\left(\mathbf{x}\right), \mathbf{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}^{\prime}\right)\right\rangle _{\mathcal{X}_{\mathcal{O}}\times\mathcal{W}_{\mathcal{O}}}\] \[=\operatorname{diag}\left(\mathbf{f}_{\Delta}\left(\mathbf{x} \right)\right)\]
in terms of which the change \(\mathbf{f}_{\Delta}^{\boxplus}\) in the network's behaviour due to a back-propagation iteration may be written:
\[\mathbf{f}_{\Delta}^{\boxplus}\left(\cdot\right)=\mathbf{K}_{\mathcal{O}}\left(\cdot,\mathbf{W}_{\Delta}^{\boxplus}\right)\mathbf{1}\]
For a given neural network with initial weights \(\mathbf{W}_{\mathcal{O}}\), we assume that the weight-step is chosen using gradient descent.4 An alternative approach might be to select a weight-step to minimise the regularised risk in RKBS, specifically:
Footnote 4: Note that the training set \(\mathbb{D}\) here is for this iteration only and may be a random subset of a larger training set.
\[\begin{array}{l}\mathbf{W}_{\Delta}^{\bullet}=\underset{\mathbf{W}_{\Delta}\in\mathbb{W}_{\mathcal{O}}}{\operatorname{argmin}}\,R_{\lambda}\left(\mathbf{W}_{\Delta}\right)\\ R_{\lambda}\left(\mathbf{W}_{\Delta}\right)=\lambda\left\|\mathbf{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}\right)\right\|_{\mathcal{W}_{\mathcal{O}}}+R_{E}\left(\mathbf{W}_{\Delta}+\mathbf{W}_{\mathcal{O}},\mathbb{D}\right)\end{array} \tag{15}\]
where we call \(\lambda\) the trade-off coefficient. Larger trade-off coefficients favour smaller weight-steps, and vice-versa. The advantage of this form over the back-propagation derived weight-step is that we can directly apply complexity bounds etc. from RKBS theory, and then extend to the complete training process. This motivates us to ask:
_For a given neural network with initial weights and biases \(\mathbf{W}_{\mathcal{O}}\), let \(\mathbf{W}_{\Delta}^{\boxplus}\) be the back-propagation weight-step (gradient descent with learning rate \(\eta\)) defined by (5), and let \(\mathbf{W}_{\Delta}^{\bullet}\) be a weight-step solving the regularised risk minimisation problem (15). Given the gradient-descent derived weight-step \(\mathbf{W}_{\Delta}^{\boxplus}\), can we select scale factors, shadow weights and trade-off parameter \(\lambda\) (as a function of \(\mathbf{W}_{\Delta}^{\boxplus}\)) that would guarantee that \(\mathbf{W}_{\Delta}^{\bullet}=\mathbf{W}_{\Delta}^{\boxplus}\)?_
If the answer is yes (which we demonstrate) then we can gain understanding of back-propagation by analysing (15). Now, the solution to (15) must satisfy first-order optimality conditions (assuming differentiability for simplicity), so:
\[\tfrac{\partial}{\partial\mathbf{W}_{\Delta}}\left\|\mathbf{\Psi}_{\mathcal{O }}\left(\mathbf{W}_{\Delta}^{\bullet}\right)\right\|_{\mathcal{W}_{\mathcal{O }}}=\tfrac{-1}{\lambda}\tfrac{\partial}{\partial\mathbf{W}_{\Delta}}R_{E} \left(\mathbf{W}_{\Delta}^{\bullet}+\mathbf{W}_{\mathcal{O}},\mathbb{D}\right)\]
Note that _if_ the gradient of the regularisation term satisfies:
\[\tfrac{\partial}{\partial\mathbf{W}_{\Delta}}\left\|\mathbf{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}^{\boxplus}\right)\right\|_{\mathcal{W}_{\mathcal{O}}}=\nu\mathbf{W}_{\Delta}^{\boxplus}\]
for some \(\nu\in\mathbb{R}_{+}\), and \(\frac{1}{\lambda}=\eta\nu\), then \(\mathbf{W}_{\Delta}^{\bullet}=\mathbf{W}_{\Delta}^{\boxplus}\). Thus the question of whether there exist scaling factors, shadow weights and \(\lambda\) such that the regularised risk minimisation weight-step corresponds to the gradient-descent weight-step for a specified learning rate \(\eta\) can be answered in the affirmative by proving the existence of canonical scalings, which we define as follows:
**Definition 2** (Canonical Scaling).: For a given neural network, initial weights \(\mathbf{W}_{\mathcal{O}}\) and weight step \(\mathbf{W}_{\Delta}^{\boxplus}\) generated by back-propagation, we define a _canonical scaling_ to be a set of scaling factors and shadow weights for which \(\frac{\partial}{\partial\mathbf{W}_{\Delta}}\left\|\mathbf{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}^{\boxplus}\right)\right\|_{\mathcal{W}_{\mathcal{O}}}=\nu\mathbf{W}_{\Delta}^{\boxplus}\) for \(\nu\in\mathbb{R}_{+}\), and \(\left\|\mathbf{\Psi}_{\mathcal{O}}(\mathbf{W}_{\Delta}^{\boxplus})\right\|_{\mathcal{W}_{\mathcal{O}}},\left\|\mathbf{\Phi}_{\mathcal{O}}(\mathbf{x})\right\|_{\mathcal{X}_{\mathcal{O}}}<\infty\)\(\forall\mathbf{x}\in\mathbb{X}\).
To prove the existence of a canonical scaling, and thus the key connection between gradient descent (back-propagation) and learning in RKBS, we use (14) and define:
\[t_{\Delta i_{j+1}}^{[j]\boxplus 2}=\left\{\begin{array}{ll}2b_{\Delta i_{1}}^{[0]\boxplus 2}+2\left\|\mathbf{W}_{\Delta i_{1}}^{[0]\boxplus}\right\|_{2}^{2}&\text{if }j=0\\ 2b_{\Delta i_{j+1}}^{[j]\boxplus 2}+2\left\|\mathbf{W}_{\Delta i_{j+1}}^{[j]\boxplus}\right\|_{2}^{2}&\text{otherwise}\end{array}\right.\]
\(\forall j\in\mathbb{N}_{D},i_{j+1}\), in the appendix we prove the following key result based on Lemmas 1 and 2:
**Theorem 1**.: _Let \(\epsilon,\chi\in(0,1)\). For a given neural network with initial weights \(\mathbf{W}_{\mathcal{O}}\), let \(\mathbf{W}_{\Delta}^{\#}\) be the weight-step for this derived from back-propagation, assuming wlog that \(\alpha^{[j]}\in\mathbb{R}_{+}\) is chosen such that \(\forall j\in\mathbb{N}_{D-1}\):5_
Footnote 5: Note that \(b_{\Delta i_{j+1}}^{[j]\boxplus}\) is proportional to \(\alpha^{[j]}\), so we can always increase \(t_{\Delta i_{j+1}}^{[j]\boxplus 2}\) to ensure the condition holds by adjusting \(\alpha^{[j]}\).
\[\left\|\mathbf{W}_{\Delta}^{[j+1]\boxplus}\right\|_{F}^{2}=\chi\left\|\mathbf{t}_{\Delta}^{[j]\boxplus}\right\|_{\infty}^{2}\]
_Let \(\epsilon_{\psi}=1-\frac{1}{1-\chi}\frac{s^{[D-1]2}}{(1-\epsilon)\sqrt{\rho^{[D-1]}}}\left\|\mathbf{t}_{\Delta}^{[D-1]\boxplus}\right\|_{\infty}^{2}\) and:6_
Footnote 6: The positivity of \(\epsilon_{\psi}\) is due to the constraints on \(\left\|\mathbf{t}_{\Delta}^{[D-1]\boxplus}\right\|_{\infty}^{2}\).
\[\epsilon_{\phi}^{[j]}=1-\frac{1}{\sqrt{\rho^{[j]}}}\overline{\sigma}^{[j]-1} \left(\frac{\frac{1}{1-\chi}\frac{\chi^{D-1-j_{\phi}}\left(1-\frac{[j+1]}{D} \sqrt{\rho^{[j+1]}}\right)}{\left\|\chi^{D+1}\|\mathbf{t}_{\Delta}^{[D-1] \mathbbm{n}}\right\|_{\infty}^{2}}-\frac{\chi^{D-1-j_{\phi}}}{1-\chi^{D-1-j_{ \phi}}}\left|\mathcal{F}\right|^{j+12}}{\frac{\max\left\{\frac{1}{\rho^{[j+1] }}\left[\left|\mathbf{W}_{\Delta}^{[j+1]}\right|_{\infty}^{2}\right]^{2}}{ \left\|\mathbf{W}_{\Delta}^{[j+1]}\right\|_{\infty}^{2}\right\|_{\infty}^{2}} \left\|\mathbf{W}_{\Delta}^{[j+1]}\right\|_{\infty}^{2}+\frac{H^{[j]}}{H^{[j+1 ]}}}\right)\]
\(\forall j\in\mathbb{N}_{D-1}\)_, where \(\epsilon_{\phi}^{[D-1]}=\epsilon\). If the weight-step satisfies:_
\[\left\|\mathbf{t}_{\Delta}^{[j]\boxplus}\right\|_{\infty}^{2}<B^{[j]}=\left\{\begin{array}{ll}\frac{(1-\chi)^{2}\left(1-\chi^{D-j-1}\epsilon_{\psi}\right)}{\left(\frac{s^{[j]2}}{(1-\epsilon_{\phi}^{[j]})\sqrt{\rho^{[j]}}}\right)}&\text{if }j<D-1\\ \frac{(1-\chi)^{2}}{\left(\frac{s^{[D-1]2}}{(1-\epsilon)\sqrt{\rho^{[D-1]}}}\right)}&\text{otherwise}\end{array}\right.\]
\(\forall j\in\mathbb{N}_{D}\) _then there exists a canonical scaling:_
\[\begin{array}{l}\left.\frac{\partial}{\partial\mathbf{W}_{\Delta}}\left\|\mathbf{\Psi}_{\mathcal{O}}\left(\mathbf{W}_{\Delta}\right)\right\|_{\mathcal{W}_{\mathcal{O}}}\right|_{\mathbf{W}_{\Delta}=\mathbf{W}_{\Delta}^{\boxplus}}=\nu\mathbf{W}_{\Delta}^{\boxplus}\\ \nu=\frac{4}{\left\|\mathbf{t}_{\Delta}^{[D-1]\boxplus}\right\|_{\infty}^{2}}\frac{(1-\epsilon_{\psi})(1-\chi)}{(\epsilon_{\psi}-\chi)^{2}}<\infty\end{array}\]
_where \(\|\mathbf{\Phi}_{\mathcal{O}}(\mathbf{x})\|_{F}^{2}\leq H^{[D-1]}\overline{ \sigma}^{[D-1]}((1-\epsilon)\sqrt{\rho^{[D-1]}})\)\(\forall\mathbf{x}\in\mathbb{X}\) and \(\|\mathbf{\Psi}_{\mathcal{O}}(\mathbf{W}_{\Delta})\|_{F}^{2}\leq H^{[D-1]} \frac{1-\epsilon_{\psi}}{\epsilon_{\psi}}\)._
This is proven as corollary 10 in the appendix. This theorem tells us that, for any set of initial weights \(\mathbf{W}_{\mathcal{O}}\), for a sufficiently small weight-step \(\mathbf{W}_{\Delta}^{\boxplus}\) generated by back-propagation, there exists a canonical scaling - i.e. a set of scaling factors and shadow weights such that the gradient-descent weight-step is exactly equivalent to the step generated by regularised RKBS learning using an appropriate trade-off parameter \(\lambda\). Note that:
* The maximum step-size \(B^{[j]}\) in layer \(j\) (up to near-identity scaling terms) is determined by the inverse fanout \(1/s^{[j]}\) (which scales roughly as \(H^{[j]}/H^{[j-1]}\)) and the scaled radius of convergence \((1-\epsilon_{\phi}^{[j]})\sqrt{\rho^{[j]}}\) (which scales roughly inversely with the weight-step in subsequent layers). This bound will tend to get smaller as we move from the output layer back toward the input, but so too will the weight-steps in many cases due to the problem of vanishing gradients.
* The trade-off coefficient (degree of regularisation) required by this canonical scaling is: \[\lambda=\frac{1}{\eta\nu}=\frac{B^{[D-1]}}{4\eta}\left(1-\frac{\left\|\mathbf{t}_{\Delta}^{[D-1]\boxplus}\right\|_{\infty}^{2}}{B^{[D-1]}}\right)^{2}\] as shown in Figure 3. Note that (a) the degree of regularisation required to generate an equivalent RKBS weight-step is inversely proportional to the learning rate used to generate the original back-propagation weight-step and (b) larger gradient-descent weight-steps are equivalent to less regularised RKBS weight-steps, as might be expected.
## 7 Application - Rademacher Complexity
Having established an exact representation of neural networks in finite neighbourhoods of weights and biases, established the link with RKBS theory and demonstrated that a gradient descent step is equivalent to a regularised step in RKBS by an appropriate, a-posteriori selection of scale factors and shadow weights (canonical scaling), we now consider an application of this framework to uniform convergence analysis using Rademacher complexity. The Rademacher complexity of a set \(\mathcal{G}\) of real-valued functions is a measure of its capacity. Assuming training vectors \(\mathbf{x}_{i}\sim\nu\) and Rademacher random variables \(\epsilon_{i}\in\{-1,1\}\), the Rademacher complexity of \(\mathcal{G}\) is [10]:
\[\mathcal{R}_{N}\left(\mathcal{G}\right)=\mathbb{E}_{\omega,\epsilon}\left[ \sup_{f\in\mathcal{G}}\left|\frac{1}{N}{\sum_{i}}\epsilon_{i}f\left(\mathbf{x}_ {i}\right)\right|\right]\]
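When \(\mathcal{G}\) is a finite class evaluated on a fixed sample, the expectation over \(\epsilon\) is easy to estimate by direct Monte-Carlo sampling; the sketch below is such an estimator, with the function class and sample as hypothetical inputs.

```python
import numpy as np

def rademacher_mc(G_values, n_draws=2000, seed=0):
    """Monte-Carlo estimate of R_N(G).  G_values has shape (|G|, N), holding
    f(x_i) for every f in the (finite) class G and every sample point x_i."""
    rng = np.random.default_rng(seed)
    N = G_values.shape[1]
    eps = rng.choice([-1.0, 1.0], size=(n_draws, N))      # Rademacher draws
    # sup over f of |(1/N) sum_i eps_i f(x_i)|, averaged over the draws
    return np.abs(eps @ G_values.T / N).max(axis=1).mean()
```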
This may be used in uniform convergence analysis to bound how quickly the empirical risk converges to the expected risk, typically of the form:
\[\left|\overline{R}\left(g\right)-R_{E}\left(g\right)\right|\leq c\mathcal{R} _{N}\left(\mathcal{G}\right)+\mbox{excess risk}\]
The following theorem demonstrates how our framework may be used to bound Rademacher complexity for a scalar-output neural network:
**Theorem 2**.: _Let \(\epsilon,\chi\in(0,1)\) and for a given neural network with initial weights \(\mathbf{W}_{\mathcal{O}}\), and let \(\mathbf{W}_{\Delta}^{\boxplus}\) be the weight-step for this derived from back-propagation satisfying the conditions set out in corollary 10. Then \(\widehat{f}_{\Delta}^{\boxplus}\in\mathcal{F}^{\bullet}\), where the Rademacher complexity of \(\mathcal{F}^{\bullet}\) is bounded as:_
\[\frac{\mathcal{R}_{N}(\mathcal{F}^{\bullet})}{H^{[D-1]}}\leq\sqrt{\frac{\overline{\sigma}^{[D-1]}\left(\left(1-\epsilon\right)\sqrt{\rho^{[D-1]}}\right)}{N}\frac{\left\|\mathbf{t}_{\Delta}^{[D-1]\boxplus}\right\|_{\infty}^{2}}{1-\frac{1}{1-\chi}\frac{s^{[D-1]2}}{(1-\epsilon)\sqrt{\rho^{[D-1]}}}\left\|\mathbf{t}_{\Delta}^{[D-1]\boxplus}\right\|_{\infty}^{2}}}\]
The proof of this theorem is a straightforward application of theorem 1 (see proof of theorem 11 in the appendix for details). Apart from the usual \(\frac{1}{\sqrt{N}}\) scaling, this theorem is very different from typical bounds on Rademacher complexity, as it directly bounds the complexity using the size of the weight-step for a single iteration of gradient descent. Note:
* By the properties of Rademacher complexity (Bartlett and Mendelson, 2002, Theorem 12) the complexity of the trained neural network after \(T\) iterations may be bounded by a simple summation of the bounds on each step. This supports the practice of using early-stopping to prevent overfitting in neural networks.7 Footnote 7: We suspect that a more careful analysis accounting for the overlap between the RKBSs for each step may give sublinear dependence on \(T\), but this is beyond the scope of the present work.
* The size of the back-propagation weight-step scales proportionally with the learning rate, so for sufficiently small learning rates we find: \[\frac{\mathcal{R}_{N}(\mathcal{F}^{\bullet})}{H^{[D-1]}}\lessapprox\sqrt{\frac{\overline{\sigma}^{[D-1]}\left(\left(1-\epsilon\right)\sqrt{\rho^{[D-1]}}\right)}{N}}\left\|\mathbf{t}_{\Delta}^{[D-1]\boxplus}\right\|_{\infty}\] However as the learning rate increases the bound will increase at an accelerating rate.8 This supports the view that lower learning rates act as a form of regularisation in neural network training. Footnote 8: Eventually the bound will become meaningless as the step-size exceeds the size we can model using an RKBS.
To finish we consider a concrete example. Assume a 2-layer network with a scalar output (\(D=2\), \(m=H^{[1]}=1\)) with activation functions \(\tau^{[0]}=\tau^{[1]}=\tanh\) (so \(\rho^{[0]}=\rho^{[1]}=\frac{\pi}{2}\) and \(\tau^{[0](1)}(\zeta),\tau^{[1](1)}(\zeta)\in[-1,1]\)). The form of \(\sigma_{z,z^{\prime}}^{[j]}\) is non-trivial (see appendix F for details), but it is not difficult to see that \(\overline{\sigma}^{[j]}=\sigma_{0,0}^{[j]}\), so, using the power-series about 0:
\[\overline{\sigma}^{[0]}\left(\zeta\right)=\overline{\sigma}^{[1]}\left(\zeta \right)=\sum_{m=1}^{\infty}\left(\frac{2^{2m}\left(2^{2m}-1\right)B_{2m}}{(2m )!}\right)^{2}\zeta^{2m-1}\]
and moreover \(s^{[0]2}=\alpha^{[0]2}/2+n/2H^{[0]}\) and \(s^{[1]2}=\alpha^{[1]2}/2+H^{[0]}\). Using our assumptions and the back-propagation equations, it is not difficult to see that:
\[\left\|\mathbf{t}_{\Delta}^{[0]\boxplus}\right\|_{\infty}^{2} \leq 2\eta^{2}N^{2}L_{k}^{2}\left\|\mathbf{W}_{\mathcal{O}}^{[1]} \right\|_{F}^{2}\left(1+\alpha^{[0]2}\right)\] \[\left\|\mathbf{t}_{\Delta}^{[1]\boxplus}\right\|_{\infty}^{2} \leq 2\eta^{2}N^{2}L_{k}^{2}\left(1+\alpha^{[1]2}\right)\]
and \(B^{[1]}=\frac{(1-\chi)^{2}(1-\epsilon)\sqrt{\frac{\pi}{2}}}{\frac{1}{2}\alpha^{[1]2}+H^{[0]}}\). Assuming a learning rate:
\[\eta=s\frac{1-\chi}{NL_{k}\sqrt{2\left(1+\alpha^{[1]2}\right)}}\sqrt{\frac{(1-\epsilon)\sqrt{\frac{\pi}{2}}}{\frac{1}{2}\alpha^{[1]2}+H^{[0]}}}\]
where \(0<s\ll 1\) - which follows the practice of scaling the learning rate inversely with the batch size and the network width - we obtain the following Rademacher complexity bound for the neural network trained for \(T\) iterations:
\[\frac{\mathcal{R}_{N}(\mathcal{F}^{\bullet})}{H^{[D-1]}}\lessapprox sT\sqrt{2\frac{\overline{\sigma}^{[D-1]}\left(\left(1-\epsilon\right)\sqrt{\frac{\pi}{2}}\right)}{N}\frac{(1-\chi)^{2}\left(1-\epsilon\right)\sqrt{\frac{\pi}{2}}}{\frac{1}{2}\alpha^{[1]2}+H^{[0]}}}\]
which we note scales inversely with both the number of training vectors \(N\) _and_ the width of the network \(H^{[0]}\).
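The quantities appearing here are simple to evaluate numerically; as a sketch, the code below computes \(\overline{\sigma}\) for tanh from the Bernoulli-number series above and assembles the final bound for illustrative values of \(s\), \(T\), \(\chi\), \(\epsilon\), \(\alpha^{[1]}\), \(H^{[0]}\) and \(N\) chosen by us.

```python
import numpy as np
from scipy.special import bernoulli, factorial

def sigma_bar_tanh(zeta, M=40):
    """Truncated sigma-bar for tanh via its Bernoulli-number power series."""
    B = bernoulli(2 * M)                                  # B_0 .. B_{2M}
    m = np.arange(1, M + 1)
    c = 4.0 ** m * (4.0 ** m - 1.0) * B[2 * m] / factorial(2 * m)
    return float(np.sum(c ** 2 * zeta ** (2 * m - 1)))

# Illustrative (assumed) hyperparameters, not values from the paper:
s, T, chi, eps, alpha1, H0, N = 0.01, 1000, 0.5, 0.1, 1.0, 256, 10_000
rho = np.pi / 2
bound = s * T * np.sqrt(
    2 * sigma_bar_tanh((1 - eps) * np.sqrt(rho)) / N
    * (1 - chi) ** 2 * (1 - eps) * np.sqrt(rho) / (0.5 * alpha1 ** 2 + H0)
)
print(bound)   # Rademacher bound per output unit, R_N / H^{[D-1]}
```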
## 8 Conclusions and Future Directions
In this paper we have established a connection between neural network training using gradient descent and regularised learning in reproducing kernel Banach space. We have introduced an exact representation of the behaviour of neural networks as the weights and biases are varied in a finite neighbourhood of some initial weights and biases in terms of an inner product of two feature maps, one from data space to feature space, the other from weight-step space to feature space. Using this, we showed that the change in neural network behaviour due to a single iteration of back-propagation lies in a reproducing kernel Banach space, and moreover that the weight-step found by back-propagation can be exactly replicated through regularised risk minimisation in RKBS. Subsequently we presented an upper bound on the Rademacher complexity of neural networks applicable to both the over- and under-parametrised regimes, and discussed how this bound depends on learning rate, dataset size, network width and the number of training iterations used.
With regard to future work we foresee a number of useful directions. First, the analysis should be extended to non-smooth activation functions such as ReLU, presumably by modifying the feature map using a representation other than a power series expansion. Second, the precise influence of the learning rate needs to be further explicated, along with other details of Theorem 1. With regard to the neural neighbourhood kernels themselves, it would be helpful if these could be reduced to closed form to allow them to be used in practice.9 Finally, more work is needed to understand the impact of the depth of the network on this theory.
Footnote 9: A Taylor approximation of the NNKs is readily obtained, but it is difficult to say how accurate this may be without further analysis.
|
2303.15521 | Reducing the cost of neural network potential generation for reactive
molecular systems | Although machine-learning potentials have recently had substantial impact on
molecular simulations, the construction of a robust training set can still
become a limiting factor, especially due to the requirement of a reference ab
initio simulation that covers all the relevant geometries of the system.
Recognizing that this can be prohibitive for certain systems, we develop the
method of transition tube sampling that mitigates the computational cost of
training set and model generation. In this approach, we generate classical or
quantum thermal geometries around a transition path describing a conformational
change or a chemical reaction using only a sparse set of local normal mode
expansions along this path and select from these geometries by an active
learning protocol. This yields a training set with geometries that characterize
the whole transition without the need for a costly reference trajectory. The
performance of the method is evaluated on different molecular systems with the
complexity of the potential energy landscape increasing from a single minimum
to a double proton-transfer reaction with high barriers. Our results show that
the method leads to training sets that give rise to models applicable in
classical and path integral simulations alike that are on par with those based
directly on ab initio calculations while providing the computational speed-up
we have come to expect from machine-learning potentials. | Krystof Brezina, Hubert Beck, Ondrej Marsalek | 2023-03-27T18:03:23Z | http://arxiv.org/abs/2303.15521v1 | # Reducing the cost of neural network potential generation for reactive molecular systems
###### Abstract
Although machine-learning potentials have recently had substantial impact on molecular simulations, the construction of a robust training set can still become a limiting factor, especially due to the requirement of a reference _ab initio_ simulation that covers all the relevant geometries of the system. Recognizing that this can be prohibitive for certain systems, we develop the method of transition tube sampling that mitigates the computational cost of training set and model generation. In this approach, we generate classical or quantum thermal geometries around a transition path describing a conformational change or a chemical reaction using only a sparse set of local normal mode expansions along this path and select from these geometries by an active learning protocol. This yields a training set with geometries that characterize the whole transition without the need for a costly reference trajectory. The performance of the method is evaluated on different molecular systems with the complexity of the potential energy landscape increasing from a single minimum to a double proton-transfer reaction with high barriers. Our results show that the method leads to training sets that give rise to models applicable in classical and path integral simulations alike that are on par with those based directly on _ab initio_ calculations while providing the computational speed-up we have come to expect from machine-learning potentials.
## I Introduction
Owing to the detailed atomistic insight into the structure and dynamics of molecular systems and materials, the relevance of computer simulations of molecular dynamics (MD) in current research is undeniable. MD simulations represent a valuable analytic and predictive tool in multiple fields of both basic and applied research including physical chemistry, material science, or drug design.[1; 2; 3; 4; 5] They also provide a way to explain and corroborate experimental data that might be difficult to interpret otherwise. For many systems of interest, MD simulations can be routinely performed under the Born-Oppenheimer approximation in the electronic ground state, with the nuclei being treated either classically or quantum-mechanically within the imaginary-time path integral formalism. This makes the choice of the potential energy surface (PES) a key decision that determines the accuracy of the resulting simulation. Out of the available options, _ab initio_ molecular dynamics[6] (AIMD) is a state-of-the-art methodology that relies on a full, on-the-fly quantum electronic structure calculation [7; 8] at every step of the simulation to evaluate the potential energy and forces. This is most commonly performed at the level of density functional theory[9; 10; 8] (DFT), which provides correlated electronic energies at a computational cost accessible in practice, but for smaller systems, the use of correlated wave function methods is feasible as well.[11; 12; 13] In any case, the computational cost of AIMD simulations is typically large -- especially so for advanced hybrid DFT functionals in the condensed phase -- and can easily become prohibitive in the light of the ever-growing demand for larger time and length scales of the relevant simulations.
This issue can be mitigated by the recent development of the so-called machine learning potentials (MLPs).[14; 15] These use various machine learning approaches[16; 17; 18; 19; 20] to faithfully approximate the desired _ab initio_ PES by training on a reference data set consisting of a relatively modest number of _ab initio_ geometries and their corresponding energies and, optionally, forces. As such, they indeed combine the best of the two worlds: they are able to maintain the accuracy of the parent _ab initio_ method, but they also circumvent the need for explicit electronic structure calculations at each step of the MD simulation. Thus, they evaluate the potential energy and forces at a significantly reduced computational cost.[21] One particular flavor of MLPs of major practical importance is represented by neural network potentials (NNPs), which rely on artificial neural networks combined with a set of appropriate atomic descriptors to accurately represent the molecular geometry-to-energy relationship, including all its symmetries.[16; 22] NNPs have repeatedly proved their worth in modeling a plethora of various molecular
systems ranging from liquids and solutions to interfaces and solids.[20; 23; 24; 25; 26] Our recent work focusing on NNPs[27; 26] shows that rather than using a single NNP to represent the PES, it is advantageous to build a model as a committee[28] of NNPs (C-NNP) that comprises a small number of NNPs, each trained individually to a subset of the main training set. The advantage is twofold: first, the energy prediction obtained as the committee average is known to be a better approximation of the _ab initio_ energy than the estimates of the individual members.[29; 30] Second, the committee disagreement,[31] represented by the standard deviation of the individual member estimates of energies or forces, serves as a valuable indicator of prediction reliability and can be used to monitor and optionally ensure the stability of the simulation.[27] Crucially, this disagreement can be used as the key ingredient of the active learning process called query by committee[32] (QbC) that systematically builds the training set in a data-driven way.[31; 27; 33]
An accurate and stable NNP can only be obtained on a foundation of robust, high-quality training data. This is typically based on a reference AIMD trajectory, from which geometries are selected for the training set, together with the corresponding energies and forces. However, the trajectory is highly correlated in time and thus most of the expensive AIMD data does not contribute useful information for the training of the model. This selection has been approached in different ways from random sampling and manual selection to more data-driven procedures,[25; 34; 35; 36; 37; 38; 39; 40; 41] with QbC being a particularly efficient method. QbC considers a set of candidate structures, in this case the whole AIMD trajectory, and iteratively builds up the training set. It starts by training a C-NNP on a very small set of initial configurations and using its disagreement to screen the candidate configurations for those with the most uncertain prediction. A small number of these configurations are then added to the training set, a new C-NNP is trained, and the process is iteratively repeated until some convergence criteria are met. In comparison to random selection, this approach is known to generate more compact training sets that give rise to robust models of similar accuracy.[35; 37] Even though the initial AIMD trajectory is typically the most expensive part of the procedure, numerous successful MLPs have been generated on top of reasonably short AIMD simulations.[26]
However, for many purposes, this process involving AIMD is still too expensive to be practical. For instance, the requirements on a high-level electronic structure method can raise the computational demands above a reasonable threshold. One might also be interested in a system that features rare events, such as chemical or conformational changes, which will happen quickly and occur infrequently or not at all in a direct AIMD simulation. In turn, these crucial configurations are underrepresented in the set of candidates and enhanced sampling simulations would be required in order to construct a robust training set, which typically raises the computational cost further by one or more orders of magnitude.
In case such a situation occurs, one needs to resort to an approximate method of candidate generation that relieves some of the computational expense while maintaining the quality of the resulting candidate set. For simple systems with a single potential energy minimum, the solution is fairly straightforward. In this case, one can benefit from random sampling of displacements in the directions of a fixed set of normal modes to obtain a set of distorted configurations. This approach, sometimes called normal mode sampling (NMS) in the literature,[18] avoids the cost of a full AIMD simulation by replacing it with a more manageable combination of a Hessian matrix evaluation and a number of single-point electronic structure calculations for the generated geometries. The sampling of the known normal mode distribution itself yields uncorrelated samples by definition and requires no _ab initio_ calculations, therefore its cost is negligible. Various versions of this approach were successfully used to generate structures for the training of MLPs. Using a scaled uniform random sampling of the normal modes, the method was used to obtain auxiliary structures used in model validation[42] and with approximate thermal distortions in NNP training set generation around configurational minima[18] as well as to construct an NNP model for a gas-phase ammonia molecule.[40] Clearly, the utility of NMS is limited when the harmonic approximation becomes insufficient. This can be the case if individual modes are strongly anharmonic or coupled, or if the system features conformational changes or reactions, where multiple local minima come into play.
In this work, we propose transition tube sampling (TTS), a robust and general approach to the generation of training sets and models that are able to accurately describe processes that feature transitions over potential energy barriers, which includes both conformational flexibility and chemical reactivity. We achieve this by generating thermally distorted candidate geometries along a reaction pathway with the help of multiple normal mode expansions and screening these candidates using QbC. The role of the minimum geometry in NMS is taken by the minimum energy path (MEP) that describes the course of the reaction through configuration space. Local harmonic expansions are performed in a small number of relevant configurations along the MEP and physically relevant candidate configurations are generated with uniform distribution along the MEP and with classical or quantum thermal weights in all perpendicular directions based on one of the sets of normal modes. An arbitrary number of these candidate configurations can be generated at a negligible computational cost and submitted to the QbC process, which selects the most important ones to have _ab initio_ calculations performed and to be included in the training set. This results in compact and robust training sets and models that maintain consistent accuracy along the reaction path, making them suitable for MD simulations of the reactive process, including enhanced sampling simulations, while no AIMD trajectories
are required as part of this process. We test this method on three different molecules in the gas phase to illustrate its capabilities.
The rest of the paper is organized as follows. In Section II, we begin by formalizing thermal NMS, which samples the exact classical or quantum canonical distribution under the harmonic approximation. With the obtained framework, we then proceed to introduce the MEP into the picture and describe the technical details of TTS. In Section III, we describe how we used TTS to create C-NNP models, the simulations performed with these models, and other related computational details. In Section IV, we apply this approach to three different gas-phase systems of increasing complexity represented by the molecules of benzene, malonaldehyde and 2,5-diaminobenzoquinone-1,4-diimine (DABQDI) and discuss its successes and possible pitfalls. Section V concludes the paper and offers outlooks concerning the generalization and the limitations of the method beyond the gas phase.
## II Theory
In this section, we discuss the theoretical basis of the TTS method. In this approach, we rely on the harmonic approximation and the vibrational normal mode formalism to obtain _ab initio_ training data for the construction of C-NNPs for reactive systems without the need to run expensive sampling simulations, such as full AIMD. First, we present the simple key idea behind NMS which relies on the harmonic approximation to describe the underlying PES and thus is expected to work well for systems that are close to harmonic around a single given minimum geometry at the temperature of interest. Clearly, this does not yield a flexible and general method, since the harmonic approximation is readily challenged by many realistic systems, notably those that exhibit more pronounced configurational flexibility or chemical reactivity. Therefore, we propose a more general approach to sampling candidate geometries based on NMS which is applicable even to systems described by multiple minima separated by barriers. This is achieved by using the harmonic expansion locally along an MEP in a way that eventually generates a balanced training set.
### NMS for thermal sampling around minimum geometries
To open the discussion of the theory behind TTS, we first turn our attention to the simple case represented by a PES with a single minimum geometry \(\mathbf{R}_{0}\) on which the nuclear motion is described by classical mechanics. Assuming a reasonable extent of validity of the harmonic approximation to capture the thermally accessible potential energy landscape, the classical thermal probability density \(\rho_{\mathrm{c}}\) at temperature \(T\) is approximated by
\[\rho_{\mathrm{c}}(\Omega_{1},\dots,\Omega_{N_{\mathrm{int}}})\propto\prod_{i=1 }^{N_{\mathrm{int}}}\exp\left(-\frac{1}{2}\beta\omega_{i}^{2}\Omega_{i}^{2} \right). \tag{1}\]
In this expression, \(\omega_{i}\) and \(\Omega_{i}\) denote, respectively, the natural frequency and the normal coordinate corresponding to the \(i\)-th normalized vibrational normal mode vector \(\mathbf{\Omega}_{i}\), and \(\beta\) is the inverse temperature equal to \(1/k_{\mathrm{B}}T\) (with \(k_{\mathrm{B}}\) representing the Boltzmann constant). \(N_{\mathrm{int}}\) is the total number of internal degrees of freedom of the species, typically \(3N-6\) for \(N\) atoms. Hence, in the harmonic approximation, the thermal density is described as a multivariate, yet uncoupled normal distribution where each \(i\)-th orthogonal degree of freedom has the standard deviation of \(\sigma_{i}=1/\sqrt{\beta\omega_{i}^{2}}\).
As such, it is straightforward to generate completely uncorrelated thermal geometries \(\mathbf{R}\) by distorting the minimum geometry \(\mathbf{R}_{0}\) independently in the direction of each of the normal mode vectors. The appropriate magnitude of the distortions is given by a randomly generated value of the corresponding normal coordinate \(\Omega_{i}\) from the distribution in Equation 1. The instrumental prescription for this procedure is the inverse coordinate transformation from normal modes back to Cartesian coordinates
\[\mathbf{R}=\mathbf{R}_{0}+\mathbb{M}^{-\frac{1}{2}}\sum_{i=1}^{N_{\mathrm{int }}}\Omega_{i}\mathbf{\Omega}_{i}, \tag{2}\]
where \(\mathbb{M}\) represents the diagonal mass matrix. Thus, by drawing samples of normal coordinates and transforming them, we obtain correctly distributed thermal samples in Cartesian coordinates.
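As a sketch of how Equations 1 and 2 translate into a sampler, the function below draws classical thermal geometries from a precomputed mass-weighted Hessian; the conventions (atomic units, a flat 3N-vector geometry, per-atom masses) are our assumptions.

```python
import numpy as np

def sample_nms(R0, masses, hessian_mw, beta, n_samples, rng=None):
    """Classical thermal NMS: draw geometries from Eq. 1 and map them back to
    Cartesian coordinates via Eq. 2.  hessian_mw is the 3N x 3N mass-weighted
    Hessian; R0 is the minimum geometry as a flat 3N vector (assumed units)."""
    rng = rng or np.random.default_rng()
    w2, modes = np.linalg.eigh(hessian_mw)       # eigenvectors = normal modes
    vib = w2 > 1e-8                              # drop translations/rotations
    sigma = 1.0 / np.sqrt(beta * w2[vib])        # thermal widths 1/sqrt(beta w^2)
    Omega = rng.standard_normal((n_samples, int(vib.sum()))) * sigma
    inv_sqrt_m = 1.0 / np.sqrt(np.repeat(masses, 3))   # M^{-1/2} diagonal
    return R0 + (Omega @ modes[:, vib].T) * inv_sqrt_m  # Eq. 2
```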
We can now perform thermal NMS by sampling from this auxiliary harmonic ensemble as a source of candidate geometries to be potentially included in the training set of an MLP. The auxiliary ensemble is thus never used directly and no expectation values are calculated over it. It only needs to provide good coverage of the thermally accessible region of the PES, which will be the case as long as the harmonic approximation is reasonably accurate for the system of interest. Specifically, we construct a training set in our active learning procedure by generating a large number of these NMS candidate geometries and screening them in a QbC process using a C-NNP model. In each QbC iteration, electronic structure calculations are performed only for a small number of selected structures to obtain their potential energies and possibly forces, which then comprise the final training set once the process converges. The computational cost is thus determined primarily by the geometry optimization procedure, the Hessian calculation, the C-NNP prediction required for screening, and the electronic structure calculations for the selected geometries. The cost of the sample generation is negligible. This approach is substantially less computationally demanding when compared to the more conventional approach of sampling the candidate geometries for QbC from an AIMD trajectory, which requires a
large number of electronic structure calculations for very similar geometries that do not contribute diversity to the training set. In contrast to that, NMS generates fully decorrelated geometries by construction, and electronic structure calculations are only needed for the relatively small number of the most important geometries selected by the subsequent QbC process.
So far, we have focused on the situation where NMS is used to sample a classical distribution on the studied PES. However, since the harmonic approximation describes a molecule as a set of independent one-dimensional harmonic oscillators, we can readily generalize the above classical case to a quantum one as the analytic solution of the quantum harmonic oscillator is known. Specifically, it is straightforward to show (see Section S1 of the Supporting Information) that the canonical thermal density of a quantum harmonic oscillator at a given temperature is Gaussian just as its classical counterpart, but broader. This broadening is encoded in the quantum effective inverse temperature [43]
\[\beta^{*}(\beta,\omega)=\frac{2}{\hbar\omega}\tanh\left(\frac{\beta\hbar \omega}{2}\right) \tag{3}\]
at which a classical harmonic oscillator would have the same thermal width as a quantum harmonic oscillator at a reference inverse temperature \(\beta\). Since \(\beta^{*}\) is by definition a frequency-dependent quantity, one cannot describe the whole molecule by a single quantum effective temperature, but instead has to assign one to each individual mode. In turn, the quantum thermal density is given by
\[\rho_{\rm q}(\Omega_{1},\ldots,\Omega_{N_{\rm int}})\propto\prod_{i=1}^{N_{\rm int }}\exp\left[-\frac{1}{2}\beta^{*}(\beta,\omega_{i})\omega_{i}^{2}\Omega_{i}^{ 2}\right]. \tag{4}\]
This simple modification allows one to generate an auxiliary quantum ensemble at practically the same cost as the classical one that would otherwise need to be approached from a significantly more demanding perspective, perhaps based on sampling techniques using the imaginary time path integral formalism.
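A minimal sketch of the quantum modification, under the same assumed conventions as the classical sampler above: the mode-wise \(\beta^{*}\) of Equation 3 simply replaces \(\beta\) in the thermal widths.

```python
import numpy as np

def beta_star(beta, omega, hbar=1.0):
    """Quantum effective inverse temperature of Eq. 3 (atomic units assumed)."""
    return (2.0 / (hbar * omega)) * np.tanh(beta * hbar * omega / 2.0)

# Mode-wise quantum widths replacing sigma inside sample_nms above:
# omega = np.sqrt(w2[vib]); sigma_q = 1.0 / np.sqrt(beta_star(beta, omega) * omega**2)
```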
### Transition tube sampling
Up to this point, we have relied on the ability of the harmonic expansion around a single minimum to approximate the real PES so that the generated samples cover sufficiently all the relevant regions for the purpose of generating an MLP. Arguably, this is a reasonable requirement for most stable molecules with a single minimum geometry where the onset of the anharmonic region connected to the dissociation of the molecule is not thermally accessible. On the other hand, it is a stringent requirement for molecular systems which display conformational changes or chemical reactivity and, therefore, are represented by multiple PES minima connected by MEPs: features not captured by a single harmonic expansion. However, in such cases, it is desirable for the resulting model to be able to describe the potential energy landscape not only around local potential energy minima, but also in the transition regions. This is vital in the case of low-\(k_{\rm B}T\) barriers, where spontaneous transitions occur during direct MD. Nonetheless, it cannot be omitted even in the case of high-\(k_{\rm B}T\) barriers where an enhanced-sampling simulation would be required to cross the barrier. Even if the transition does not actually occur, the presence of the transition state may introduce substantial anharmonicity within the original PES basin that an eventual MLP should learn. However, in the case of barrier transitions it is not desirable to attempt to build the C-NNP model starting from a candidate set representing the true thermal ensemble, even if we could obtain it, since this would lead to a possibly detrimental under-representation of the high-energy configurations close to the transition state in the resulting
candidate set and, in turn, to poor performance of the resulting model in the transition regions.

Figure 1: A schematic depiction of the TTS approach proposed for reactive systems. Panel A: An illustrative MEP winding through a model two-dimensional configuration space. Panel B: Control points are selected and their local normal modes (gray arrows) are calculated. Here, the control points are taken as the two endpoint minima on the MEP (purple and brown dots) and the transition state (yellow–green). Panel C: A much denser set of uniformly distributed reference geometries is generated along the MEP. Panel D: Each of the reference geometries is distorted using the local modes of their assigned control point (as detailed in panel E) to become a candidate geometry. Panel E: A detailed view showing how each reference geometry is assigned to a set of local modes. A set of reference geometries on the MEP is assigned to each control point following Equation 5. At the MEP edges, standard Gaussian thermal NMS is performed outside of the reaction coordinate (decaying purple and brown tails of the total density).
Therefore, we propose the TTS method: a generalization of NMS for systems with transitions that employs local normal modes along an MEP to sample uniformly along the path and with proper thermal weights in all perpendicular directions. This leads to an auxiliary harmonic ensemble that differs significantly from the true thermal one but enables construction of MLPs with uniform accuracy along the whole transition. The TTS method naturally reduces to thermal NMS as described above for single-minimum systems in the zero MEP length limit. The process, illustrated in Figure 1, starts by finding the MEP \(\mathbf{R}(\xi)\) on the given PES (panel A). Here, \(\xi\) is a dimensionless reaction coordinate along the MEP curve through configuration space normalized to the interval from 0 to 1. In the following, we shall assume that the MEP is available as a continuous, differentiable function of the parameter \(\xi\). In practice, this can be achieved by spline fitting of the discretized representation of the MEP originating from, for instance, a nudged elastic band calculation.[44] Note that by definition, the MEP is a minimum of the PES in all directions perpendicular to it. Once the relevant MEP is known, we continue by selecting a sparse set of control points \(\mathbf{R}_{c},c=1,\ldots,N_{\mathrm{p}}\) along the MEP at positions \(\xi_{c}\) for which the Hessian matrices are calculated and diagonalized to give the set of local normal mode vectors \(\mathbf{\Omega}_{c,i}\) and their corresponding frequencies. For instance, this can be the two end-point minima and the transition state between them (Figure 1, panel B), although there is no constraint on how densely one might select the control points along the MEP other than the limiting computational expense of the Hessian matrix calculation. Formally, the expansion of the PES along the MEP becomes exact under the harmonic approximation in the limit of a large number of control points \(N_{\mathrm{p}}\). Since we want to achieve uniform sampling along the MEP, we now proceed to the generation of reference geometries on the MEP that do have this property. Specifically, to each control point \(\mathbf{R}_{c}\) we first assign a probability distribution \(p_{c}(\xi)\) defined as
\[p_{c}(\xi)=\begin{cases}\sin^{2}\left[\frac{\pi}{2|\xi_{c}-\xi_{c-1}|}(\xi-\xi_ {c-1})\right],\xi_{c-1}\leq\xi\leq\xi_{c}\\ \cos^{2}\left[\frac{\pi}{2|\xi_{c}-\xi_{c+1}|}(\xi-\xi_{c})\right],\xi_{c}\leq \xi\leq\xi_{c+1}\\ 0\text{ otherwise}\end{cases} \tag{5}\]
for all \(\xi\in[0,1]\) so that the identity
\[p(\xi)=\sum_{c=1}^{N_{\mathrm{p}}}p_{c}(\xi)=1 \tag{6}\]
holds over the whole length of the MEP (Figure 1, panel E over the range of \(\xi\)). Note that the choice of squares of harmonic functions is only one out of many possibilities, and any other pair of complementary functions that sum up to unity would work in this case. Next, we generate an arbitrary number of reference geometries \(\mathbf{R}_{0}(\xi)\) at a chosen linear density by drawing random values of \(\xi\) from the above distributions and passing them to the continuous prescription of the MEP, all while keeping track of the parent \(c\)-th control point (Figure 1, panel C). Analogously to the distortion of the minimum geometry in the single-minimum case through Equation 2, we distort each of these reference geometries using the normal modes and frequencies of its parent control point using
\[\mathbf{R}=\mathbf{R}_{0}(\xi)+\mathbb{M}^{-\frac{1}{2}}{\sum_{i}}^{\prime}\,\Omega_{c,i}\left[\mathbb{1}-\mathbb{P}(\xi)\right]\mathbf{\Omega}_{c,i}, \tag{7}\]
where the normal coordinate values \(\Omega_{c,i}\) are sampled thermally according to Equation 1 or 4 (Figure 1, panel D); the prime indicates that modes with imaginary frequencies are omitted from the sum. The matrix \(\mathbb{P}(\xi)\) is the projector on the tangent direction at point \(\xi\) which can be constructed analytically from \(\mathrm{d}\mathbf{R}(\xi)/\mathrm{d}\xi\). This is used to obtain distortions strictly perpendicular to the MEP and thus to correct for the approximate validity of the normal mode expansion calculated at \(\xi_{c}\) for all the displaced geometries. However, the use of the decaying probability distributions (Equation 5) favors the use of the local modes close to their origin. Through this procedure, one obtains a set of candidate geometries distributed inside a tube around the MEP the width of which is given thermally. At this point, this tube still has open ends cut sharply by the planes defined by normal vectors equal to the MEP tangent vector at the endpoints of the MEP. Since these points are (usually) also well-defined minima on the PES, the presence of these sharp edges is easily sanitized by appending the usual thermal NMS samples at these minima, although only adding the configurations away from the MEP (Figure 1, decaying gray dashed lines). In other words, just one half of the multivariate Gaussian is appended to the tube that does not overlap with it. In our TTS implementation, we ensure that the uniform density of the sampling along the MEP and the one at the peak of the half-Gaussian are seamlessly matched (as described in Section S1 of the Supporting Information).
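To make the construction concrete, the following minimal Python sketch strings together the three ingredients introduced above on a toy two-dimensional model: a spline representation of the MEP, the assignment distributions of Equation 5, and the perpendicular displacement of Equation 7. This is an illustration only, not our production implementation; the semicircular path, the identity "modes" (unit masses, so the \(\mathbb{M}^{-1/2}\) factor is omitted), and the Gaussian normal-coordinate values are placeholders for the actual mass-weighted, 3N-dimensional quantities.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

# Toy continuous MEP R(xi): a semicircle in a 2-D configuration space stands
# in for a spline fit through discretized NEB images (3N-dimensional in
# practice).
xi_nodes = np.linspace(0.0, 1.0, 15)
images = np.stack([np.cos(np.pi * xi_nodes), np.sin(np.pi * xi_nodes)], axis=1)
mep = CubicSpline(xi_nodes, images, axis=0)
tangent = mep.derivative()

# Control points: the two endpoint minima and the transition state.
xi_ctrl = np.array([0.0, 0.5, 1.0])

def p_c(xi, c):
    """Assignment probability of Equation 5 for control point c."""
    x = xi_ctrl[c]
    if c > 0 and xi_ctrl[c - 1] <= xi <= x:
        return np.sin(0.5 * np.pi * (xi - xi_ctrl[c - 1])
                      / (x - xi_ctrl[c - 1])) ** 2
    if c < len(xi_ctrl) - 1 and x <= xi <= xi_ctrl[c + 1]:
        return np.cos(0.5 * np.pi * (xi - x) / (xi_ctrl[c + 1] - x)) ** 2
    return 0.0

def displace(xi, modes, q):
    """Equation 7: displacement strictly perpendicular to the MEP."""
    R0, t = mep(xi), tangent(xi)
    t /= np.linalg.norm(t)
    perp = np.eye(len(R0)) - np.outer(t, t)          # 1 - P(xi)
    return R0 + (q[:, None] * (modes @ perp)).sum(axis=0)

# Uniform linear density along the MEP; drawing the parent control point
# from p_c(xi) reproduces Equation 5 while Equation 6 keeps the total
# density uniform.
modes = np.eye(2)                       # placeholder local normal modes
for xi in rng.uniform(0.0, 1.0, size=5):
    probs = np.array([p_c(xi, c) for c in range(3)])
    c = rng.choice(3, p=probs / probs.sum())
    q = rng.normal(scale=0.05, size=2)  # stands in for Equation 1/4 sampling
    print(c, displace(xi, modes, q))
```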
Using the described sampling approach leads to an auxiliary ensemble of candidates that does not correspond to the true thermal ensemble, but contains a balanced selection of geometries distributed uniformly along the MEP with classical or quantum thermal displacements around it. Just like with plain NMS, we submit these samples as candidates to the QbC procedure, where in each iteration a large number of them is screened and a small number of those is selected to be included in the training set. _Ab initio_ calculations are only required for these selected geometries. This enables the building of diverse training sets in which all representative structures that might be encountered in a future simulation are contained so that the resulting model is, in fact, able to accurately describe the PES along the whole MEP, even in regions that have negligible thermal populations. Similar to NMS, the computational cost of TTS is determined primarily by the MEP optimization procedure, the Hessian calculation, the C-NNP prediction required for screening, and the electronic structure calculations for the selected geometries. In general, this can be expected to be substantially less computationally demanding than executing direct, or even enhanced-sampling, classical or path integral AIMD simulations and sampling from their trajectories.
## III Computational details
### Ab initio electronic structure
Two different levels of electronic structure theory were used in the simulations presented in this work. In both cases, we used the implementation provided by the CP2K software package [45] with its Quickstep DFT module.[46; 47] We described the electronic structure of the benzene molecule in the gas phase at the self-consistent charge density-functional tight binding [48] (SCC-DFTB) level with third-order diagonal corrections. The system was enclosed in a 10 Å wide cubic box with open boundary conditions. For the malonaldehyde and DABQDI systems, we used the revPBE0-D3 hybrid density functional [49; 50; 51; 52] combined with the TZV2P Gaussian basis set [53; 46; 54] to represent the molecular orbitals and a plane wave basis with a 600 Ry cutoff to represent the density. The core electrons of the heavy atoms were represented using Goedecker-Teter-Hutter pseudopotentials.[55] In addition, we used the auxiliary density matrix method [56] with the cpFIT3 fitting basis set for the DABQDI molecule. Both systems using hybrid DFT were centered in a 15 Å wide cubic box with open boundary conditions and the wavelet Poisson equation solver.
### C-NNP model generation
Throughout all of our investigations, we used committees consisting of eight different Behler-Parrinello neural network potentials,[16] where for each of them, a different initialization of weights and a different 90 % subset of the full training data set was used to ensure a diverse committee. The models consisted of two hidden layers of 20 nodes each and were trained using the multistream [57] adaptive extended Kalman [58; 59] filter with 32 streams. The input features were a standard set of atom-centered symmetry functions.[26] The training of the model was done using the n2p2 package [57] and the selection of training structures by QbC was done with a development version of our AML package.[60] For benzene, 20 structures were randomly sampled initially and in each of the 40 QbC iterations, 10 new structures with the highest committee force disagreement were added to the data set, for a total of 420 training geometries. The final NNPs were trained for 2000 epochs. For the first generation of malonaldehyde and DABQDI models, 20 initial structures were sampled randomly and then the 15 structures with the highest disagreement were selected in each of the 40 QbC iterations, for a total of 620 training geometries. For malonaldehyde, where additional generations of models were required (as detailed in Section IV), the original training set was supplemented by additional structures QbC-sampled from an MD trajectory which was produced using the previous C-NNP model. Here, 15 structures were added in each iteration until the force committee disagreements of the selected structures and the remaining candidate structures were similar. All reference calculations were done using CP2K and the electronic structure settings described above.
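The QbC selection itself reduces to ranking candidate geometries by the committee force disagreement and keeping the highest-ranked ones. A schematic Python version is sketched below; the `DummyModel` class and its `predict_forces` method are placeholders for trained committee members evaluated with n2p2, not part of the actual AML implementation.

```python
import numpy as np

class DummyModel:
    """Placeholder for a trained committee member."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
    def predict_forces(self, geometry):
        # A real member would evaluate its neural network; here: noisy zeros.
        return self.rng.normal(scale=0.01, size=geometry.shape)

def qbc_select(candidates, committee, n_add=15):
    """One query-by-committee iteration: rank candidates by the committee
    force disagreement and return the indices of the n_add highest."""
    scores = []
    for geom in candidates:
        forces = np.stack([m.predict_forces(geom) for m in committee])
        # Standard deviation across members, norm per atom, mean over atoms.
        scores.append(np.linalg.norm(forces.std(axis=0), axis=-1).mean())
    return np.argsort(scores)[-n_add:]

committee = [DummyModel(s) for s in range(8)]   # eight members, as used here
candidates = [np.random.default_rng(i).normal(size=(9, 3)) for i in range(500)]
print(qbc_select(candidates, committee, n_add=10))
```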
### Geometry optimization and vibrational analysis
The optimization of the minimum reference geometries for benzene and DABQDI was executed natively in the CP2K software. It was performed using the BFGS optimizer [61] combined with threshold criteria of 0.07 eV Å\({}^{-1}\) for the maximum change in force components, 0.009 Å for the change in atomic positions, and 0.13 eV for the change in total energy. For the malonaldehyde molecule, we employed the Atomic Simulation Environment (ASE) [62] together with CP2K and performed the optimization using the FIRE optimizer [63] while specifying only a force criterion of 0.01 eV Å\({}^{-1}\). Additional constrained optimizations in the case of DABQDI needed for the relaxed PES scan were performed using the constraint functionality provided by ASE together with its FIRE optimizer. The Hessian matrix evaluation on the optimized structures was performed in each case using CP2K and a Cartesian atomic displacement of 0.0005 Å.
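In ASE terms, these steps correspond to only a few library calls. The sketch below uses ASE's built-in benzene geometry and the crude EMT calculator as a self-contained stand-in for the CP2K calculator we actually attached; only the force threshold and the finite-difference displacement are taken from the settings above.

```python
from ase.build import molecule
from ase.calculators.emt import EMT   # crude stand-in for a CP2K calculator
from ase.optimize import FIRE
from ase.vibrations import Vibrations

atoms = molecule("C6H6")     # benzene from ASE's built-in g2 database
atoms.calc = EMT()

# Relax with FIRE, specifying only a force criterion, as in the text.
FIRE(atoms).run(fmax=0.01)   # eV/Angstrom

# Finite-difference Hessian via Cartesian displacements of each atom.
vib = Vibrations(atoms, delta=0.0005)  # displacement in Angstrom
vib.run()
vib.summary()
```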
### Nudged elastic band calculations
The relevant MEPs needed for the TTS procedure were obtained through the climbing-image [64] nudged elastic band [44] (CI-NEB) optimization procedure as implemented in CP2K. The initial band geometries in this work consisted of 15 replicas of the molecule in question including the two fixed, pre-optimized endpoints, and were obtained through linear interpolation. The spring constant of the harmonic links between the neighboring replicas was kept constant at the value of 4.86 eV Å\({}^{-2}\). We used a force convergence criterion of 0.007 eV Å\({}^{-1}\) and the minimization of the band energy was performed using a DIIS optimizer.
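An equivalent band optimization can be expressed with ASE's NEB implementation, as sketched below with a placeholder calculator and a fabricated "product" geometry; the replica count, spring constant, and force criterion follow the values quoted above, while our production CI-NEB runs used CP2K natively.

```python
from ase.build import molecule
from ase.calculators.emt import EMT   # stand-in calculator for illustration
from ase.neb import NEB
from ase.optimize import FIRE

# Hypothetical endpoints: in practice, pre-optimized reactant and product.
initial = molecule("C6H6")
final = initial.copy()
final.rattle(stdev=0.05, seed=1)      # placeholder "product" for the sketch

# 15 replicas including the two fixed endpoints, as in the text.
images = [initial] + [initial.copy() for _ in range(13)] + [final]
for img in images:
    img.calc = EMT()

neb = NEB(images, k=4.86, climb=True)  # spring constant in eV/Angstrom^2
neb.interpolate()                      # linear interpolation of the band
FIRE(neb).run(fmax=0.007)
```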
### MD simulations
All MD simulations involving both _ab initio_ as well as C-NNP potentials [30] were run using the CP2K package.
The simulations with classical representation of the nuclei were propagated at a temperature of 300 K using a time step of 0.5 fs to numerically integrate the Langevin equation with the friction coefficient \(\gamma\) of 0.02 fs\({}^{-1}\) to achieve canonical sampling. The path integral simulations that include nuclear quantum effects were performed using imaginary-time ring polymers consisting of 64 replicas using the RPMD propagator. The canonical distribution at 300 K was sampled using the path integral Langevin equation thermostat [65] with the time constant for the centroid motion of 200 fs while the integration time step was kept at 0.25 fs.
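In ASE terms, the classical canonical sampling described above amounts to a Langevin integrator with the quoted time step and friction coefficient; the sketch below again substitutes EMT for the real potential and is meant only to fix the conventions.

```python
from ase import units
from ase.build import molecule
from ase.calculators.emt import EMT          # stand-in for a C-NNP calculator
from ase.md.langevin import Langevin

atoms = molecule("C6H6")
atoms.calc = EMT()

# Canonical sampling at 300 K: 0.5 fs time step and a Langevin friction
# coefficient of 0.02 fs^-1, matching the settings quoted in the text.
dyn = Langevin(atoms, timestep=0.5 * units.fs,
               temperature_K=300, friction=0.02 / units.fs)
dyn.run(1000)
```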
### Umbrella sampling
The initial conditions for each umbrella sampling window were extracted from a steered MD trajectory, which was performed in the CP2K v2022.1 software package combined with the PLUMED plugin [66; 67; 68]. In this case, the value of the proton-sharing coordinate \(\delta_{1}\) (as detailed in Section IV) was biased from \(-1.2\) Å to \(1.2\) Å during a 10 ps long simulation using a moving harmonic restraint with the force constant \(\kappa\) of 500.0 kJ mol\({}^{-1}\) Å\({}^{-2}\). The simulation was performed classically with an integration time step of 0.5 fs in the canonical ensemble at 300 K using a local CSVR thermostat [69] with a time constant of 50 fs.
30 equidistant umbrella sampling windows separated by 0.08 Å were set up from the above steered MD simulation. Individually in each window, the value of \(\delta_{1}\) was biased by a static harmonic restraint of 500.0 kJ mol\({}^{-1}\) Å\({}^{-2}\) and simulated for 50 ps using the same setup as for the steered MD simulation above. The overlap of the corresponding histograms of \(\delta_{1}\) values observed in each simulation window is shown in Section S2 of the Supporting Information. The value of \(\delta_{2}\) was kept unbiased in each simulation window but was monitored for use in the following analysis. The biased configurations were reweighted to the unbiased ensemble using a Python implementation of the multistate Bennett acceptance ratio [70; 71] (MBAR) procedure to obtain both a one-dimensional free energy profile for the proton transfer along \(\delta_{1}\) as well as a two-dimensional free energy surface showing the dependence on both proton-sharing coordinates. This was done by determining the thermal weight associated with each configuration in the biased simulations and using these to obtain the probability distribution in the \(\delta_{1}\), \(\delta_{2}\) subspace, and from that the corresponding free energy surface.
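For reference, a self-contained numpy version of the MBAR reweighting is sketched below on toy one-dimensional data; our analysis used an existing Python MBAR implementation [70; 71] rather than this minimal solver.

```python
import numpy as np
from scipy.special import logsumexp

def mbar_unbiased_weights(b_kn, N_k, n_iter=5000, tol=1e-10):
    """Minimal self-consistent MBAR solver (a sketch, not the pymbar code).

    b_kn : reduced bias energies beta * W_k(x_n), shape (K, N), of all N
           pooled umbrella configurations evaluated in all K windows.
    N_k  : number of samples contributed by each window, shape (K,).
    Returns normalized weights of each configuration in the unbiased ensemble.
    """
    f_k = np.zeros(len(N_k))
    for _ in range(n_iter):
        log_den = logsumexp(f_k + np.log(N_k) - b_kn.T, axis=1)   # shape (N,)
        f_new = -logsumexp(-b_kn - log_den[None, :], axis=1)      # shape (K,)
        f_new -= f_new[0]
        converged = np.max(np.abs(f_new - f_k)) < tol
        f_k = f_new
        if converged:
            break
    log_w = -logsumexp(f_k + np.log(N_k) - b_kn.T, axis=1)
    return np.exp(log_w - logsumexp(log_w))

# Toy usage: K=3 harmonic windows on a 1-D coordinate x (placeholder data).
rng = np.random.default_rng(0)
centers, kappa, beta = np.array([-0.5, 0.0, 0.5]), 50.0, 1.0
x = np.concatenate([rng.normal(c, 0.15, 200) for c in centers])
b_kn = 0.5 * beta * kappa * (x[None, :] - centers[:, None]) ** 2
w = mbar_unbiased_weights(b_kn, np.full(3, 200))
# A 1-D free energy profile then follows from a weighted histogram of x.
```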
## IV Results and discussion
To showcase the performance of the TTS procedure in the creation of models for realistic potentials, we select three different gas-phase molecules with increasing complexity of their PES. We begin with benzene, which represents a single-minimum system with a close-to-harmonic potential at room temperature and thus allows us to illustrate the simple thermal NMS procedure. This is followed by a study of the enol form of 1,3-propanedial (malonaldehyde), which exhibits reactivity by sharing the acidic proton between the two oxygen atoms spontaneously at ambient conditions [72]. Finally, we focus on a more involved proton-sharing system represented by 2,5-diaminobenzoquinone-1,4-diimine (DABQDI), which has two proton-sharing sites [73]. Spontaneous proton transfer is hindered by a barrier thermally insurmountable at room temperature, and an enhanced sampling simulation is necessary to determine the free energy profile.
### Benzene
To lead off the discussion of the ability of TTS to seed a training set for the creation of C-NNP models for realistic systems in the gas phase, we focus on the benzene molecule. It represents an ideal example to illustrate the basic idea of thermal NMS using a single normal mode expansion at an optimal geometry, since it features a single configurational minimum and the surrounding PES exhibits almost no anharmonic effects.
To prepare the ground for comparison with the relevant C-NNP data, we initially performed one 250 ps AIMD simulation of gas-phase benzene at 300 K at the DFTB level using a classical representation of the atomic nuclei as well as a 100 ps PIMD simulation using 64 replicas to approximate the imaginary time path. Two C-NNP models were then based on candidate sets obtained from a thermal NMS of gas-phase benzene using a Hessian matrix calculated at the same DFTB level of theory as the (PI)-AIMD simulations at 300 K for the classical model and with the appropriate effective temperatures at 300 K for the quantum one. The resulting models were evaluated on test sets consisting of 1000 structures sampled from the two AIMD trajectories. Both models performed very well with an energy root mean square error (RMSE) of 1.66 meV and 5.90 meV for the model constructed for the use without and with path integral structures, respectively. The RMSE for a single force component was 14.9 meV Å\({}^{-1}\) and 30.4 meV Å\({}^{-1}\). Subsequently, the models were used to obtain new 500 ps long MD and 100 ps PIMD simulations at 300 K.
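The quoted errors are plain root mean square deviations over the test set, evaluated per structure for energies and per Cartesian component for forces; a trivial sketch with placeholder arrays fixes the convention.

```python
import numpy as np

def rmse(pred, ref):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(ref)) ** 2))

# Placeholder arrays standing in for C-NNP and DFTB test-set values:
# total energies (eV) for 1000 structures and the force components
# (eV/Angstrom) of the 12 atoms of benzene.
rng = np.random.default_rng(0)
e_ref = rng.normal(size=1000)
e_model = e_ref + rng.normal(scale=0.002, size=1000)
f_ref = rng.normal(size=(1000, 12, 3))
f_model = f_ref + rng.normal(scale=0.015, size=(1000, 12, 3))

print(f"energy RMSE: {rmse(e_model, e_ref) * 1e3:.2f} meV")
print(f"force RMSE:  {rmse(f_model, f_ref) * 1e3:.1f} meV/A")
```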
The comparison of the C-NNP models to the corresponding (PI)-AIMD trajectories in terms of molecular geometry properties is summarized in Figure 2. In general, we can see the expected broadening of probability distributions due to nuclear quantum effects when we compare the left and right columns of Figure 2. In both the classical and quantum case, we observe a perfect match between the _ab initio_ (green shading) and C-NNP distributions (blue dashed lines) in C-C bond lengths (panels A and B), C-H bond lengths (panels C and D), C-C-C angles (panels E and F), and C-C-C dihedrals (panels G and H). The two types of covalent bonds have expected distributions; the mean of the C-C-C angle is located at 120\({}^{\circ}\) which shows the average hexagonal arrangement of the aromatic ring subject to planarity, which is, in turn, demonstrated by the (signed) C-C-C dihedral angle peaking at 0\({}^{\circ}\) as expected. This level of agreement suggests that the final models used for production MD represent excellent approximations to the original DFTB PES. Additionally, we show the distributions of the NMS structures (orange dotted lines) alongside the anharmonic distributions. These exhibit significant overlap with both the (PI)-AIMD and C-NNP data. This suggests that the harmonic approximation to the original ensemble is relatively good and confirms the assumed high degree of harmonicity of the 300 K gas-phase benzene PES, even in the quantum case. However, note that the match of the NMS data with the (PI)-AIMD data is not nearly as perfect as that of the C-NNP data and certain deviations are, in fact, present. As discussed in Section II, these are to be expected since the NMS ensemble is only auxiliary and its goal is to provide sufficient coverage of the accessible PES which ultimately leads to an accurate C-NNP model. Using thermal NMS, we were thus able to construct a C-NNP that accurately describes the original PES of benzene based on a single Hessian evaluation and 420 single-point electronic structure calculations.
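Distributions such as those in Figure 2 follow directly from per-frame geometry queries; the sketch below uses rattled copies of the ideal geometry instead of actual MD frames and assumes ASE's benzene atom ordering (ring carbons first, in cyclic order), which should be verified for any real trajectory.

```python
import numpy as np
from ase.build import molecule

# Stand-in "trajectory": thermally rattled copies of the ideal geometry
# (in practice one would read frames from the C-NNP MD run instead).
rng = np.random.default_rng(0)
frames = []
for _ in range(200):
    atoms = molecule("C6H6")
    atoms.rattle(stdev=0.02, seed=int(rng.integers(1 << 30)))
    frames.append(atoms)

# Assumed ordering: the first six atoms are the ring carbons, in ring order.
cc, ccc = [], []
for atoms in frames:
    for i in range(6):
        cc.append(atoms.get_distance(i, (i + 1) % 6))
        ccc.append(atoms.get_angle((i - 1) % 6, i, (i + 1) % 6))

print(f"<C-C> = {np.mean(cc):.3f} A, <C-C-C> = {np.mean(ccc):.1f} deg")
```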
### Malonaldehyde
The enol form of malonaldehyde is a simple organic molecule that has been used in MD simulations [72] to illustrate a simple intramolecular proton-transfer reaction in which the proton is moved from the enol oxygen to the aldehyde oxygen with a simultaneous electronic rearrangement which altogether causes the reactant and the product to become symmetrically mirrored, chemically identical structures. As such, malonaldehyde is a convenient molecule to demonstrate the ability of TTS to describe a simple reaction.
With the aim to describe the proton-sharing process accurately, we decided to model the original _ab initio_ PES at the hybrid DFT level using the revPBE0-D3 functional. Thanks to the ability to use quantum normal coordinate distributions in the TTS method, we produced both classical and quantum models and classical and PIMD trajectories for malonaldehyde to test and showcase this functionality. However, we focus mostly on the classical case in the main text and discuss the complementary quantum results, for which qualitatively similar conclusions arise, in the Supporting information, Section S2. The starting point of the TTS procedure is the proton-sharing MEP, which was discretized into 15 replicas and optimized using the CI-NEB procedure and the revPBE0-D3 density functional. For illustration purposes, we decided to use the full MEP with both the reactant and product explicitly represented: this is strictly speaking not necessary since the symmetry of the reaction allows the use of only one non-trivial half of the MEP for TTS. Out of the optimized full-length MEP, three control points were selected in the two minima (reactant and product) and in the transition state. For the visualization of the multidimensional configuration data, we choose the reduction into a 2D space of two geometric parameters: the proton-sharing coordinate \(\delta(\mathbf{R})=|\mathbf{R}_{\mathrm{O}}-\mathbf{R}_{\mathrm{H}}|-|\mathbf{ R}_{\mathrm{O^{\prime}}}-\mathbf{R}_{\mathrm{H}}|\) and the oxygen-oxygen \(d_{\mathrm{OO^{\prime}}}(\mathbf{R})=|\mathbf{R}_{\mathrm{O}}-\mathbf{R}_{ \mathrm{O^{\prime}}}|\) where O and O' denote the two oxygen atoms that share the proton H. The parameters are illustrated in the snapshot on the left of Figure 3. The obtained optimized MEP and the selected control
Figure 2: Thermal geometry properties of benzene in the gas phase at 300 K from classical MD (left column) and path integral MD (right column) compared between simulations using the reference DFTB potential, the harmonic TTS ensemble, and simulations using a C-NNP model building on the thermal NMS geometries. Panels A and B show the distribution of C–C bond lengths, panels C and D the distribution of C–H bond lengths, panels E and F the distribution of C–C–C angles, and, finally, panels G and H the distribution of the C–C–C dihedral angles.
points in this representation are shown in the top left panel of Figure 3 in orange. The TTS classical candidate structures were then generated using the procedure outlined in Section II at the temperature of 300 K, linear sampling density along the MEP of \(1\times 10^{5}\) Å\({}^{-1}\) and matched-density sampling at the minima. The distribution of the obtained configurations is shown in the top left panel of Figure 3 as blue contours. The same distribution colored by the assignment of each candidate to the individual control points (corresponding to the situation shown in panel D of Figure 1) is shown in Section S2 of the Supporting Information. Note that this particular presentation of the data does not do justice to the uniformity of the sample distribution along the MEP as the regions around the minima seem to be more populated than the transition state. This is an effect of the deformation of the configuration space by the projection on the selected subspace; the samples are distributed uniformly in the full dimension. After passing the resulting set of candidates through the QbC selection and training a C-NNP model on the obtained training set, the model was used to run a direct 250 ps MD simulation of gas-phase malonaldehyde at 300 K. A subset of the obtained configurations is shown in the form of a scatter plot in the top right panel of Figure 3 colored by the decadic logarithm of the norm of the force committee disagreement on carbon atoms in the usual \(\delta\) and \(d_{\mathrm{OO^{\prime}}}\) representation. Additionally, a 1D free-energy profile obtained by a Boltzmann inversion of the probability density of configurations along the \(\delta\)-axis is shown in the bottom panel of Figure 3; the height of the barrier is approximately 120 meV which corresponds to roughly 5 \(k_{\mathrm{B}}T\) at 300 K. This accounts for the expected low, but existing population surrounding the transition state at \(\delta=0\) Å. Alongside the free energy profile, we show the corresponding average potential energy as a function of \(\delta\).
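The Boltzmann inversion used for the bottom panel of Figure 3 is a short operation on the sampled \(\delta\) values; the following sketch uses synthetic placeholder coordinates in place of the actual trajectory positions.

```python
import numpy as np

kB_T = 0.02585  # eV at 300 K

def proton_coords(R_O, R_Op, R_H):
    """Proton-sharing coordinate delta and O-O' distance from positions."""
    delta = (np.linalg.norm(R_O - R_H, axis=-1)
             - np.linalg.norm(R_Op - R_H, axis=-1))
    d_oo = np.linalg.norm(R_O - R_Op, axis=-1)
    return delta, d_oo

# Hypothetical per-frame position arrays of shape (n_frames, 3).
rng = np.random.default_rng(0)
R_O, R_Op, R_H = rng.normal(size=(3, 5000, 3))  # placeholder data

delta, _ = proton_coords(R_O, R_Op, R_H)

# Boltzmann inversion of the sampled probability density along delta.
hist, edges = np.histogram(delta, bins=80, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
F = -kB_T * np.log(np.where(hist > 0, hist, np.nan))
F -= np.nanmin(F)   # align the minimum to zero
```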
An important observation can be made from the presented data. By comparison of the two distributions in the top panels of Figure 3, it is clear that the TTS distribution populates a smaller volume of the configuration space than the data obtained from the MD simulation. In this particular case, it means that TTS does not directly provide good enough coverage and the resulting model is undertrained in the missing, yet thermally accessible regions. Specifically, the C-NNP model performs poorly in the large \(d_{\mathrm{OO}^{\prime}}\) tails of the shown distribution, as quantified by the larger force disagreement values. On the contrary, in regions around the proton-sharing MEP, the coverage is good and the force disagreement remains small, despite the tiny thermal population. Regardless of the elevated model uncertainty for some configurations, these MD simulations remain stable. The observed increased disagreement can be interpreted in the following way: going in the opposite direction from the minima as the proton-sharing MEP, the true anharmonic PES has a potential wall that grows slower than the harmonic wall captured by the TTS distribution and, therefore, the thermal coverage of the TTS configurations cannot reach far enough. In principle, this behavior is caused by either the strongly anharmonic character of the chemical bonds leading to bond dissociation or, more likely in this case, the presence of another reactive process leading to a new transition state. In the following two paragraphs, we present two possible solutions to the issue. The first one relies on an active-learning-based iterative improvement of the model, which has the advantage of requiring no knowledge of the origin of the anharmonicity, but is tedious to perform since several intermediate simulations and model generations need to be created. Meanwhile, the other solution relies on the chemical intuition of the user to identify the reactive nature of the issue with the aim to extend the initial TTS, which leads to a fully capable C-NNP model straight away.
The QbC process can be used to fill in an already existing training data set that has gaps, perhaps due to an incomplete TTS candidate set in the QbC selection for the initial model. We illustrate this process in Figure 4, where the left panel shows the same data as the top right panel of Figure 3 as a starting point. Regions of configuration space not covered well in the training
Figure 3: The TTS sampling of the malonaldehyde proton transfer and the MD simulation with the resulting C-NNP model. The top left panel shows the relevant MEP (orange) with the three selected control points in the two configurational minima and the transition state highlighted and the distribution of the TTS geometries (blue). The top right panel shows a scatter plot of a subset of geometries (selected with a stride of 37.5 fs) obtained during a 250 ps MD simulation using the C-NNP model built on top of the TTS ensemble. Each point is colored by the norm of the force committee disagreement on the carbon atoms and the mean of the quantity is shown in the box. Note the high force disagreement in the high \(\delta\) tails of the distribution. The bottom panel shows the Boltzmann-inverted free energy profile (red) and the corresponding binned average potential energy of the system (black, aligned to zero) along the proton transfer reaction as observed in the MD simulation.
set of this generation 1 model can be easily identified by the high committee disagreement, as can be seen in the tails of the distribution. Hence, one can start a new QbC using the existing training data set augmented by structures from an MD simulation performed with the initial C-NNP. Depending on the size of the gaps in the initial training data set, only a few iterations of QbC are typically necessary. However, adding these structures to the training data set can lead to substantial changes in the previously inaccurate regions of the potential energy surface, resulting in an MD simulation that again reaches new regions of the configuration space where the shape of the potential energy surface is yet unknown to the model and the committee disagreement is high. This can be seen in the generation 2 model shown in the middle panel of Figure 4. Therefore, multiple repetitions of the MD-QbC cycle might be necessary until a highly accurate model that exhibits low and uniform disagreements over the sampled data is reached, as is the case for the generation 3 model in the right panel of Figure 4. As such, the approach could become practically cumbersome when the regions of high disagreement coincide with regions of high free energy and long MD simulations are needed to uncover these structures, but nonetheless represents a general solution to the anharmonicity problem. Overall, repeating the cycle of sampling MD configurations with a given generation of a C-NNP model followed by training a new generation on a training set enhanced by high-disagreement QbC-selected structures from the previous MD simulations leads to a force disagreement that is lower in the problematic PES regions and, therefore, more uniform overall. In addition, we observe a decrease of the mean of the force disagreement of the sampled configurations due to the fact that the size of the training set increases in each generation. Specifically, 620 structures were used for the original model in the left panel of Figure 4, 800 structures for the model in the middle panel, and 950 structures for the last model in the right panel. This approach can be beneficial in systems where it is difficult to identify the origin of the anharmonicity of the original PES, but is rather demanding from the point of view of both computational requirements and user involvement due to the need for the semi-supervised iterative procedure.
However, in the case of malonaldehyde, the general active learning iterative procedure to improve the model might be excessive. The possible reasons for the softer-than-harmonic wall due to conformational flexibility are few and the particular direction against which the first generation of the model is pushing can be easily identified with the _s-cis_ and _s-trans_ torsional isomerism
which is mediated by rotation around the C-C single bond in the propane backbone. The optimized MEP corresponding to this torsion displays a perfect continuation in the correct direction when projected into the \(\delta\) and \(d_{\text{OO}^{\prime}}\) subspace, as shown in the top left panel of Figure 5 in orange. Although this pair of descriptors is not appropriate for the whole torsion MEP, which entails a more complicated motion, it is accurate enough at small deviations from the equilibrium geometry. To include structures along this MEP into the initial (first generation) proton-sharing TTS, two control points were chosen in the minimum (shared by the two MEPs) and in the new transition state (not shown in Figure 5, as it is around \(d_{\text{OO}^{\prime}}=3.8\,\text{\AA}\)). We do not need to use the _s-trans_ minimum at all, as we are not interested in including the
Figure 4: Evolution of the force disagreement of the carbon atoms through multiple instances of QbC. The left panel shows a subset of configurations originating from an MD simulation using a model trained on data selected directly from the TTS candidate set (identical data as in the top right panel of Figure 3 is shown). The force disagreement (depicted using the color scale) in the vicinity of the two configurational minima and along the proton-sharing reaction is adequately low, however, for structures with a high absolute \(\delta\) it is more than one order of magnitude higher. The central panel shows configurations and disagreements obtained from an MD simulation using a model trained on the initial training data augmented by QbC-selected high-disagreement configurations from the data in the left panel. In turn, the right panel shows data obtained by improving the model by the new data sampled in the simulation shown in the central panel. Most notably, structures at the tails of the populated configuration space are substantially improved. The mean force disagreement over all configurations in each data set is shown in the framed box in each panel. The \(\delta\) and \(d_{\text{OO}^{\prime}}\) coordinates are illustrated in the snapshot to the left of the panels.
transition itself, only the shape of the PES on the side of the global minimum. A new TTS was performed between these control points with the same parameters as the initial one and the resulting distribution of the combined sets of configurations is shown in blue in the top left panel of Figure 5. Running a 250 ps long MD simulation with a new C-NNP model trained on the QbC-selected training set from this combined candidate set leads to the distribution shown in the top right panel in Figure 5. Clearly, the distribution reaches all the expected thermally accessible regions, the force disagreement is evened out across the configurations, and the high-disagreement tails are no longer present. The mean value of the disagreement is comparable to that of the first-generation model, as these two models are based on training sets of the same size. The bottom panel of Figure 5 shows the potential energy and the free energy profile along \(\delta\), where the latter is shown for both classical and path-integral data. In all three curves, one can observe a softening of the barrier in the high \(|\delta|\) regions in comparison to the data shown in Figure 3 resulting from the present C-NNP being aware of the anharmonic nature of the PES in these regions. The transition-state free energy of 43 meV for the quantum simulation is \(\sim\)4 times lower than in the classical case, which is a manifestation of proton tunneling through the barrier. This is qualitatively consistent with existing literature but quantitatively different from the results reported for the BLYP functional, where the classical free energy barrier is somewhat higher and the quantum effect is substantially smaller, decreasing the barrier by a factor of only \(\sim\)2.[72]
For further insights, a test set independent of the training set data was created by generating 500 structures using TTS and sampling 500 structures from the classical MD trajectory shown in Figure 5, and evaluating their energies and forces with the revPBE0-D3 reference method. The generation 3 C-NNP of the iterative approach as well as the extended TTS C-NNP perform well with an energy root mean square error (RMSE) of 1.80 meV and 3.44 meV and a force component RMSE of 18.3 meV Å\({}^{-1}\) and 24.4 meV Å\({}^{-1}\), respectively. The slightly elevated RMSE of the extended TTS approach is due to the broader coverage of the training set. It includes a range of geometries along the C-C single bond torsion, even in regions that are not populated during MD, leading to a less dense coverage of the rest of the configuration space. The validation errors of the intermediate models of the iterative approach and the distribution of errors within configuration space are discussed in more detail in the Supporting Information, Section S2.
Both of the approaches above thus yield highly accurate models for the description of the proton-sharing reaction in malonaldehyde for classical and quantum nuclei. The advantage of one over the other therefore depends mostly on the specific situation in which the need for any of them should arise: if the reaction coordinate of the complementary transition can be identified, then the latter approach using the extended TTS reaches the desired result with higher efficiency. Note that this approach can also be used when multiple different transitions are to be included in a single model.
### DABQDI
The most complex reactive system used to demonstrate the performance of the TTS method is represented by the DABQDI molecule. This nitrogenated benzoquinone derivative can exchange two protons between the neighboring amine and imine groups.
The two-dimensional PES in the \(\delta_{1}\), \(\delta_{2}\) subspace, where each proton-sharing coordinate describes a single proton-sharing site, was obtained at the revPBE0-D3 level of electronic structure theory through a relaxed scan of the molecular potential energies while applying appropriate constraints and is shown in the left panel of Figure 6. The shown data was aligned so that the global minimum of the PES corresponds to the zero-energy level. The typical structure of the PES featuring four distinct configurational minima and four transition states corresponds to a sequential double proton transfer at the level of an MEP. Here, one proton is first fully exchanged to reach an intermediate state located at a higher potential energy and only then the other proton follows. The height of the potential energy barrier for this sequential process of roughly 0.8 eV indicates that its thermal rate should be negligible. The alternative concerted proton transfer path that is seen in other species including carboxylic acid dimers [74] is classically disallowed in this case by a tall (\(>\)1.2 eV) potential barrier in the middle of the presented PES which represents a second-order saddle point and, as such, no MEP can go through it. The symmetry of the DABQDI PES allows us to explicitly address only a single proton transfer: unlike in the previous example, we exploit this feature here for the C-NNP model generation. The relevant non-trivial MEP connecting the two chemically distinct minima was obtained using the CI-NEB optimization at the revPBE0-D3 level of theory and is shown in white in the left panel of Figure 6. From there, the straightforward TTS candidate generation was performed using the three usual control points in the two minima and in the transition state at 300 K with the linear sampling density of \(1\times 10^{3}\) Å\({}^{-1}\). A subset of the candidate geometries is shown in the left panel of Figure 6 as green points. The obtained C-NNP model was used to recreate the 2D proton transfer PES which is shown in the middle panel of Figure 6 with the energies aligned in the same way as in the DFT case. Qualitatively, the C-NNP model captures all the features of the original _ab initio_ PES including the position of the minima, the transition state, and the barriers, as well as the potential energy values. Note that the good agreement in the representation of the central barrier in spite of the lack of corresponding geometries in the TTS candidates is due to the successful extrapolation by the model. The quantitative difference between the _ab initio_ and C-NNP potential energy landscape is captured in the right panel of Figure 6 and shows that the potential energies are reproduced as closely as approximately \(\pm 0.05\,\mathrm{eV}\). This difference could be decreased further, if desired, by changing the architecture of the C-NNP and by increasing the size of the training set beyond the current 620 structures.
Since DABQDI features barriers that are not practically accessible by direct MD, it serves as a useful example to illustrate the power of the TTS-based C-NNP model to perform an enhanced sampling calculation to correctly estimate the free energy profile of the double proton transfer at 300 K. This was obtained using an umbrella sampling simulation in the coordinate \(\delta_{1}\) with the C-NNP PES (as described in Section III) followed by a multistate Bennett acceptance ratio (MBAR) reweighting of the biased configurations. The obtained 1D free energy profile in \(\delta_{1}\) is shown in blue in the top panel of Figure 7. The transition state is located at roughly 0.8 eV above the global minimum. Comparing this with
Figure 6: Comparison between the reference _ab initio_ revPBE0-D3 and the C-NNP proton-sharing PESs of the DABQDI molecule. The left panel shows the projection of the reference DFT PES into the \(\delta_{1}\), \(\delta_{2}\) subspace using a color scale as well as individual isoenergetic contours. Furthermore, the minimal non-trivial MEP which describes a single proton transfer is depicted in white with the selected control points highlighted. A sparse subset of the TTS geometries sampled around the MEP at 300 K is shown in green. The middle panel shows the corresponding PES projection calculated with the resulting C-NNP. Finally, the right panel shows the difference between the two PESs and quantifies the maximum error of the C-NNP fit in the classically thermally accessible regions on the order of \(10^{-3}\) to \(10^{-2}\) eV. The two \(\delta\) coordinates are illustrated in the snapshot of the DABQDI molecule to the left of the panels.
the value of the corresponding potential energy suggests that the entropic contribution in the gas phase system is small and that the population at the barrier is clearly negligible at 300 K. To validate the obtained free energy profile, we perform a reweighting of a subset of the obtained configurations in each umbrella window to the original DFT ensemble. This is achieved by additionally multiplying each MBAR-obtained unbiased weight by the corresponding factor \(e^{-\beta\Delta E}\) where the energy difference \(\Delta E\) is the difference between the C-NNP and DFT potential energy for each configuration. For this purpose, we used a total of 3000 configurations obtained by selecting 100 geometries evenly spaced in time from each of the 30 umbrella sampling windows. The resulting profile, which is a good approximation to the full-DFT free energy profile, is displayed as the orange dashed line in Figure 7 and shows very good correspondence with the profile obtained using the C-NNP model alone. This procedure thus at the same time validates the C-NNP model and provides DFT data for a fraction of the cost of the hypothetical purely _ab initio_ enhanced sampling simulation. Monitoring the values of the collective variable \(\delta_{2}\) along the umbrella sampling simulation and using the thermal weights obtained from the MBAR treatment of the biased simulations also allows recovering the 2D free energy surface in \(\delta_{1}\) and \(\delta_{2}\). Its symmetrized version is shown in the bottom panel of Figure 7. It is worth noting that the computational acceleration of the C-NNP umbrella simulation in comparison to the naive execution with the original DFT method is substantial. To illustrate the computational savings, we can compare the times required for one MD step with the implementations in CP2K used in this work. With the hybrid functional, one MD step takes 272 s on a single core or 17 s on 32 cores (a full node) of our EPYC-based cluster. With the C-NNP, one step takes 0.006 s and does not scale meaningfully to more cores due to the small system size. This yields a speedup of \(\sim\)45000\(\times\) on identical resources or \(\sim\)2800\(\times\) with more resources given to the DFT calculation. Obviously, the specific numbers will depend on the details of the electronic structure setup and the MLP architecture used, as well as the specific implementations and hardware used, but this behavior of our particular setup should provide a general idea.
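The validation reweighting amounts to multiplying the MBAR weights by the Boltzmann factor of the potential energy difference. A sketch with placeholder energies follows, written with the convention \(\Delta E = E_{\mathrm{DFT}} - E_{\mathrm{C\text{-}NNP}}\) in the exponent, which reweights from the C-NNP ensemble to the DFT one.

```python
import numpy as np

beta = 1.0 / 0.02585   # 1/(kB*T) in eV^-1 at 300 K

# Hypothetical per-configuration arrays for the 3000 reweighted structures:
# MBAR unbiased weights and C-NNP / DFT potential energies in eV.
rng = np.random.default_rng(0)
w_mbar = rng.random(3000)
E_cnnp = rng.normal(size=3000)
E_dft = E_cnnp + rng.normal(scale=0.02, size=3000)  # placeholder energies

# Multiply each unbiased weight by exp(-beta * (E_DFT - E_CNNP)), working
# in log space for numerical stability.
log_w = np.log(w_mbar) - beta * (E_dft - E_cnnp)
log_w -= log_w.max()
w_dft = np.exp(log_w)
w_dft /= w_dft.sum()
```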
## V Conclusions
In this work, we have introduced the TTS method to sample thermal geometries around MEPs that describe barrier-crossing transitions in molecular systems. The goal of the method is to provide a physically meaningful set of candidate structures for the creation of MLPs without the need to run computationally demanding _ab initio_ simulations. In our case specifically, we submit these geometries to QbC and construct a C-NNP model, but the same candidates could be used for other types of models as well. The execution of the TTS protocol as a whole entails a relatively modest computational cost with respect to the original _ab initio_ method that is given by the MEP optimization, several Hessian evaluations, and a small number of single-point _ab initio_ calculations for the QbC-selected geometries. In terms of application to realistic systems, the TTS method yields highly accurate C-NNP models in all studied cases. This was achieved either by using the generated candidate set directly, or by letting the resulting C-NNP model undergo additional active learning generations to compensate for a pronounced anharmonic effect as seen in the case of malonaldehyde. As such, the performance of TTS in the presented test systems demonstrates its robustness and efficiency and suggests applicability in most gas-phase systems, including highly anharmonic cases.
A noteworthy feature of the TTS method is its ability to provide thermal geometries sampled from the quantum thermal distribution at essentially the computational cost of the classical case. As such, models that are appropriate for use in path integral simulations are made readily available without the need to run expensive PI-AIMD simulations at all. However, it is important to recognize
Figure 7: Umbrella sampling simulation of the single proton transfer in the DABQDI molecule along the \(\delta_{1}\) collective variable using the C-NNP potential. The top panel shows the obtained free energy profile in blue. For validation purposes, we also show the DFT free energy profile (orange, dashed) obtained by reweighting the C-NNP configurations as described in the text. Note that no umbrella sampling using the DFT potential was performed to obtain the DFT free energy profile. The bottom panel shows the full 2D free energy surface obtained by weighting the distribution in the two proton-sharing coordinates by the thermal Boltzmann factors extracted from the biased simulation and symmetrizing the resulting histogram.
that although the present formulation of TTS can address quantum behavior, it has limitations in this regard that derive from the fundamentally classical nature of the MEP. Nuclear quantum effects, in particular quantum tunneling through the potential barrier, can cause the configuration-space probability density of the system to deviate from the transition tube around the MEP in a way that renders the coverage by TTS samples insufficient.
To account for this, the above formulation of TTS can be straightforwardly generalized from sampling around classical MEPs to ring-polymer instantons, [75] which represent the paths of optimal tunneling. While this modification requires essentially no adaptation of the TTS theory and implementation itself, one can anticipate an elevated computational cost due to the required instanton optimization at the explicit _ab initio_ level. The approach will find applications beyond the gas phase, in systems where vibrational normal modes are a meaningful concept, such as crystals and surface-bound molecules. Since our research anticipates the need to address such systems in the near future, we expect TTS to be a valuable tool in the creation of accurate, yet computationally accessible potentials that will enable the accurate description of these more complex systems at unprecedented sizes and simulation time scales.
## Acknowledgements
K.B. and H.B. acknowledge funding from grant schemes at Charles University, reg. n. CZ.02.2.69/0.0/0.0/19_073/0016935, and from the IMPRS for Quantum Dynamics and Control. O.M. acknowledges support from the Czech Science Foundation, project No. 21-27987S.
## References
* Section C **104**, 142-164 (2008).
* [2] Marco De Vivo, Matteo Masetti, Giovanni Bottegoni, and Andrea Cavalli, "Role of Molecular Dynamics and Related Methods in Drug Discovery," Journal of Medicinal Chemistry **59**, 4035-4061 (2016).
* [3] Scott A. Hollingsworth and Ron O. Dror, "Molecular Dynamics Simulation for All," Neuron **99**, 1129-1143 (2018).
* [4] Volker L. Deringer, Miguel A. Caro, and Gabor Csanyi, "Machine Learning Interatomic Potentials as Emerging Tools for Materials Science," Advanced Materials **31**, 1902765 (2019).
* [5] Nan Yao, Xiang Chen, Zhong Fu, and Qiang Zhang, "Applying Classical, Ab Initio, and Machine-Learning Molecular Dynamics Simulations to the Liquid Electrolyte for Rechargeable Batteries," Chemical Reviews **122**, 10970-11021 (2022).
* [6] Dominik Marx and Jurg Hutter, _Ab initio molecular dynamics: basic theory and advanced methods_ (Cambridge University Press, New York, 2009).
* [7] Attila Szabo and Neil S. Ostlund, _Modern quantum chemistry: introduction to advanced electronic structure theory_ (Dover Publications, Inc., Mineola, N.Y., 1996).
* [8] Robert G. Parr and Weitao Yang, _Density-functional theory of atoms and molecules_ (Oxford University Press, New York, 1989).
* [9] P. Hohenberg and W. Kohn, "Inhomogeneous Electron Gas," Physical Review **136**, B864-B871 (1964).
* [10] W. Kohn and L. J. Sham, "Self-Consistent Equations Including Exchange and Correlation Effects," Physical Review **140**, A1133-A1138 (1965).
* [11] Ondrej Marsalek, Pei Yang Chen, Romain Dupuis, Magali Benoit, Merlin Meheut, Zlatko Bacic, and Mark E. Tuckerman, "Efficient calculation of free energy differences associated with isotopic substitution using path-integral molecular dynamics," Journal of Chemical Theory and Computation **10**, 1440-1453 (2014).
* [12] Thomas Spura, Hossam Elgabarty, and Thomas D. Kuhne, ""On-the-fly" coupled cluster path-integral molecular dynamics: impact of nuclear quantum effects on the protonated water dimer," Physical Chemistry Chemical Physics **17**, 14355-14359 (2015).
* [13] V. Kapil, J. VandeVondele, and M. Ceriotti, "Accurate molecular dynamics and nuclear quantum effects at low cost by multiple steps in real and imaginary time: Using density functional theory to accelerate wavefunction methods," The Journal of Chemical Physics **144**, 054111 (2016), arXiv:1512.00176.
* [14] Jorg Behler, "Neural network potential-energy surfaces in chemistry: a tool for large-scale simulations," Physical Chemistry Chemical Physics **13**, 17930-17955 (2011).
* [15] Pascal Friederich, Florian Hase, Jonny Proppe, and Alan Aspuru-Guzik, "Machine-learned potentials for next-generation matter simulations," Nature Materials **20**, 750-761 (2021).
* [16] Jorg Behler and Michele Parrinello, "Generalized Neural-Network Representation of High-Dimensional Potential-Energy Surfaces," Physical Review Letters **98**, 146401 (2007).
* [17] Albert P. Bartok, Mike C. Payne, Risi Kondor, and Gabor Csanyi, "Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons," Physical Review Letters **104** (2010), 10.1103/PhysRevLett.104.136403, arXiv:0910.1019.
* [18] J. S. Smith, O. Isayev, and A. E. Roitberg, "ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost," Chemical Science **8**, 3192-3203 (2017), arXiv:1610.08935.
* [19] K. T. Schutt, P. J. Kindermans, H. E. Sauceda, S. Chmiela, A. Tkatchenko, and K. R. Muller, "SchNet: A continuous-filter convolutional neural network for modeling quantum interactions," Advances in Neural Information Processing Systems **2017-December**, 992-1002 (2017), arXiv:1706.08566.
* [20] Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky, "E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials," Nature Communications **13** (2022), 10.1038/s41467-022-29939-5, arXiv:2101.03164.
* [21] Pavlo O. Dral, "Quantum Chemistry in the Age of Machine Learning," Journal of Physical Chemistry Letters **11**, 2336-2347 (2020).
* [22] Jorg Behler, "Atom-centered symmetry functions for constructing high-dimensional neural network potentials," Journal of Chemical Physics **134** (2011), 10.1063/1.3553717.
* [23] Suresh Kondati Natarajan and Jorg Behler, "Neural network molecular dynamics simulations of solid-liquid interfaces: Water at low-index copper surfaces," Physical Chemistry Chemical Physics **18**, 28704-28752 (2016).
* [24] K. T. Schutt, H. E. Sauceda, P. J. Kindermans, A. Tkatchenko, and K. R. Muller, "SchNet - A deep learning architecture for molecules and materials," Journal of Chemical Physics **148** (2018), 10.1063/1.5019779, arXiv:1712.06113.
* [25] Ganesh Sivaraman, Anand Narayanan Krishnamoorthy, Matthias Baur, Christian Holm, Marius Stan, Gabor Csanyi, Chris Benmore, and Alvaro Vazquez-Mayagoitia, "Machine-learned interatomic potentials by active learning: amorphous and liquid hafnium dioxide," npj Computational Materials **6** (2020), 10.1038/s41524-020-00367-7.
* [26] Christoph Schran, Fabian L. Thiemann, Patrick Rowe, Erich A. Muller, Ondrej Marsalek, and Angelos Michaelides, "Machine learning potentials for complex aqueous systems made simple," Proceedings of the National Academy of Sciences of the United States of America **118**, e2110077118 (2021), arXiv:2106.00048.
* [27] Christoph Schran, Krystof Brezina, and Ondrej Marsalek, "Committee neural network potentials control generalization errors and enable active learning," The Journal of Chemical Physics **153**, 104105 (2020), arXiv:2006.01541.
* [28] Lars Kai Hansen and Peter Salamon, "Neural Network Ensembles," IEEE Transactions on Pattern Analysis and Machine Intelligence **12**, 993-1001 (1990).
* [29] Michael Gastegger, Jorg Behler, and Philipp Marquetand, "Machine learning molecular dynamics for the simulation of infrared spectra," Chemical Science **8**, 6924-6935 (2017).
* [30] Christoph Schran, Jorg Behler, and Dominik Marx, "Automated Fitting of Neural Network Potentials at Coupled Cluster Accuracy: Protonated Water Clusters as Testing Ground," Journal of Chemical Theory and Computation **16**, 88-99 (2020), arXiv:1908.08734.
* [31] Anders Krogh and Jesper Vedelsby, "Neural Network Ensembles, Cross Validation, and Active Learning," Advances in Neural Information Processing Systems **7**, 231-238 (1995).
* [32] H. S. Seung, M. Opper, and H. Sompolinsky, "Query by committee," in _Proceedings of the Fifth Annual Workshop on Computational Learning Theory - COLT '92_ (ACM Press, New York, New York, USA, 1992) pp. 287-294.
* [33] Lei Chen, Ivan Sukuba, Michael Probst, and Alexander Kaiser, "Iterative training set refinement enables reactive molecular dynamics via machine learned forces," RSC Advances **10**, 4293-4299 (2020).
* [34] Tobias Morawietz, Andreas Singraber, Christoph Dellago, and Jorg Behler, "How van der Waals interactions determine the unique properties of water," Proceedings of the National Academy of Sciences **113**, 8368-8373 (2016).
* [35] Justin S. Smith, Ben Nebgen, Nicholas Lubbers, Olexandr Isayev, and Adrian E. Roitberg, "Less is more: Sampling chemical space with active learning," Journal of Chemical Physics **148** (2018), 10.1063/1.5023802, arXiv:1801.09319.
* [36] Felix Musil, Sandip De, Jack Yang, Joshua E. Campbell, Graeme M. Day, and Michele Ceriotti, "Machine learning for the structure-energy-property landscapes of molecular crystals," Chemical Science **9**, 1289-1300 (2018).
* [37] Jonathan Vandermause, Steven B. Torrisi, Simon Batzner, Yu Xie, Lixin Sun, Alexie M. Kolpak, and Boris Kozinsky, "On-the-fly active learning of interpretable Bayesian force fields for atomistic rare events," npj Computational Materials **6** (2020), 10.1038/s41524-020-0283-z, arXiv:1904.02042.
* [38] Wujie Wang, Tzuhsiung Yang, William H. Harris, and Rafael Gomez-Bombarelli, "Active learning and neural network potentials accelerate molecular screening of ether-based solvate ionic liquids," Chemical Communications **56**, 8920-8923 (2020).
* [39] April M. Miksch, Tobias Morawietz, Johannes Kastner, Alexander Urban, and Nongnuch Artrith, "Strategies for the construction of machine-learning potentials for accurate and efficient atomic-scale simulations," (2021), arXiv:2101.10468.
* [40] Daniel Schwalbe-Koda, Aik Rui Tan, and Rafael Gomez-Bombarelli, "Differentiable sampling of molecular geometries with uncertainty-based adversarial attacks," Nature Communications **12**, 1-12 (2021), arXiv:2101.11588.
* [41] Sander Vandenhaute, Maarten Cools-Ceuppens, Simon DeKeyser, Toon Verstraelen, and Veronique Van Speybroeck, "Machine learning potentials for metal-organic frameworks using an incremental learning approach," npj Computational Materials **9** (2023), 10.1038/s41524-023-00969-x.
* [42] Matthias Rupp, Raghunathan Ramakrishnan, and O. Anatole Von Lilienfeld, "Machine Learning for Quantum Mechanical Properties of Atoms in Molecules," Journal of Physical Chemistry Letters **6**, 3309-3313 (2015), arXiv:1505.00350.
* [43] Michele Ceriotti, Giovanni Bussi, and Michele Parrinello, "Nuclear Quantum Effects in Solids Using a Colored-Noise Thermostat," Physical Review Letters **103**, 030603 (2009).
* [44] Hannes Jonsson, Greg Mills, and Karsten W. Jacobsen, "Nudged elastic band method for finding minimum energy paths of transitions," in _Classical and Quantum Dynamics in Condensed Phase Simulations_, edited by B. J. Berne, G. Ciccotti, and D. F. Coker (World Scientific, Singapore, 1998) pp. 385-404.
* [45] Jurg Hutter, Marcella Iannuzzi, Florian Schiffmann, and Joost Vandevondele, "CP2K: Atomistic simulations of condensed matter systems," Wiley Interdisciplinary Reviews: Computational Molecular Science **4**, 15-25 (2014).
* [46] Joost Vandevondele, Matthias Krack, Fawzi Mohamed, Michele Parrinello, Thomas Chassaing, and Jurg Hutter, "Quickstep: Fast and accurate density functional calculations using a mixed Gaussian and plane waves approach," Computer Physics Communications **167**, 103-128 (2005).
* [47] Thomas D. Kuhne _et al._, "CP2K: An electronic structure and molecular dynamics software package - Quickstep: Efficient and accurate electronic structure calculations," The Journal of Chemical Physics **152**, 194103 (2020), arXiv:2003.03868.
* [48] D. Porezag, Th. Frauenheim, Th. Kohler, G. Seifert, and R. Kaschner, "Construction of tight-binding-like potentials on the basis of density-functional theory: Application to carbon," Physical Review B **51**, 12947 (1995).
* [49] John P. Perdew, Kieron Burke, and Matthias Ernzerhof, "Generalized gradient approximation made simple," Physical Review Letters **77**, 3865-3868 (1996).
* [50] Yingkai Zhang and Weitao Yang, "Comment on "Generalized gradient approximation made simple"," Physical Review Letters **80**, 890 (1998).
* [51] Carlo Adamo and Vincenzo Barone, "Toward reliable density functional methods without adjustable parameters: The PBE0 model," Journal of Chemical Physics **110**, 6158-6170 (1999).
* [52] Lars Goerigk and Stefan Grimme, "A thorough benchmark of density functional methods for general main group thermochemistry, kinetics, and noncovalent interactions," Physical Chemistry Chemical Physics **13**, 6670-6688 (2011).
* [53] Manuel Guidon, Florian Schiffmann, Jurg Hutter, and Joost Vandevondele, "Ab initio molecular dynamics using hybrid density functionals," Journal of Chemical Physics **128**, 1-15 (2008).
* [54] Manuel Guidon, Jurg Hutter, and Joost Vandevondele, "Robust periodic Hartree-Fock exchange for large-scale simulations using Gaussian basis sets," Journal of Chemical Theory and Computation **5**, 3010-3021 (2009).
* [55] S. Goedecker, M. Teter, and J. Hutter, "Separable dual-space Gaussian pseudopotentials," Physical Review B - Condensed Matter and Materials Physics **54**, 1703-1710 (1996), arXiv:mtrl-th/9512004.
* [56] Manuel Guidon, Jurg Hutter, and Joost Vandevondele, "Auxiliary density matrix methods for Hartree-Fock exchange calculations," Journal of Chemical Theory and Computation **6**, 2348-2364 (2010).
* [57] Andreas Singraber, Tobias Morawietz, Jorg Behler, and Christoph Dellago, "Parallel Multistream Training of High-Dimensional Neural Network Potentials," Journal of Chemical Theory and Computation **15**, 3075-3092 (2019).
* [58] Samir Shah, Francesco Palmieri, and Michael Datum, "Optimal filtering algorithms for fast learning in feedforward neural networks," Neural Networks **5**, 779-787 (1992).
* [59] Thomas B. Blank and Steven D. Brown, "Adaptive, global, extended Kalman filters for training feedforward neural networks," Journal of Chemometrics **8**, 391-407 (1994).
* [60] "AML public GitHub repository," [https://github.com/marselekgroup/aml](https://github.com/marselekgroup/aml), Accessed: 2023-03-20.
* [61] C. G. Broyden, "The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations," IMA Journal of Applied Mathematics **6**, 76-90 (1970).
* [62] Ask Hjorth Larsen _et al._, "The atomic simulation environment - A Python library for working with atoms," Journal of Physics Condensed Matter **29**, 273002 (2017).
* [63] Erik Bitzek, Pekka Koskinen, Franz Gahler, Michael Moseler, and Peter Gumbsch, "Structural relaxation made simple," Physical Review Letters **97**, 170201 (2006).
* [64] Graeme Henkelman, Blas P. Uberuaga, and Hannes Jonsson, "Climbing image nudged elastic band method for finding saddle points and minimum energy paths," Journal of Chemical Physics **113**, 9901-9904 (2000).
* [65] Michele Ceriotti, Michele Parrinello, Thomas E Markland, and David E Manolopoulos, "Efficient stochastic thermostatting of path integral molecular dynamics." The Journal of chemical physics **133**, 124104 (2010).
* [66] Massimiliano Bonomi, Davide Branduardi, Giovanni Bussi, Carlo Camilloni, Davide Provasi, Paolo Raiteri, Davide Donadio, Fabrizio Marinelli, Pabio Pietrucci, Ricardo A. Broglia, and Michele Parrinello, "PLUMED: A portable plugin for free-energy calculations with molecular dynamics," Computer Physics Communications **180**, 1961-1972 (2009), arXiv:0902.0874.
* [67] Massimiliano Bonomi, Giovanni Bussi, Carlo Camilloni, Gareth A. Tribello, Pavel Banas, Alessandro Barducci, Mattia Bernetti, Peter G. Bolhuis, Sando Bottaro, Davide Branduardi, Riccardo Capelli, Paolo Carloni, Michele Ceriotti, Andrea Cesari, Haochuan Chen, Wei Chen, Francesco Colizzi, Sandip De, Marco De La Pierre, Davide Donadio, Viktor Drobot, Bernd Ensing, Andrew L. Ferguson, Marta Filizola, James S. Fraser, Hao-hao Fu, Piero Gasparotto, Francesco Luigi Gervasio, Federico Giberti, Alejandro Gil-Ley, Toni Giorgino, Gabriella T. Heller, Glen M. Hocky, Marcella Iannuzzi, Michele Invernizzi, Kim E. Jelfs, Alexander Jussupp, Espery Kirilin, Alessandro Laio, Vittorio Limongelli, Kresten Lindorff-Larsen, Thomas Lohr, Fabrizio Marinelli, Layla Martin-Samos, Matteo Masetti, Ralf Meyer, Angelos Michaelides, Carla Molteni, Tetsuya Morishita, Marco Nava, Cristina Passoni, Elena Papaleo, Michele Parrinello, Jim Pfaendtner, Pablo Piaggi, Giovanni Matic Piccini, Adriana Pietropaolo, Fabio Pietrucci, Silvio Pipolo, Davide Provasi, David Quigley, Paolo Raiteri, Stefano Ranioio, Jakabv Rydzewski, Matteo Salvalaglio, Gabriele Cesare Sosso, Vojtech Spiwok, Jiri Spomer, David W.B. Swenson, Pratyush Tiwary, Omar Valsson, Michele Vendruscolo, Gregory A. Voth, and Andrew White, "Promoting transparency and reproducibility in enhanced molecular simulations," Nature Methods **16**, 670-673 (2019).
* [68] Gareth A. Tribello, Massimiliano Bonomi, Davide Branduardi, Carlo Camilloni, and Giovanni Bussi, "PLUMED 2: New feathers for an old bird," Computer Physics Communications **185**, 604-613 (2014), arXiv:1310.0980.
* [69] Giovanni Bussi, Davide Donadio, and Michele Parrinello, "Canonical sampling through velocity rescaling," Journal of Chemical Physics **126**, 014101 (2007), arXiv:0803.4060.
* [70] Michael R. Shirts and John D. Chodera, "Statistically optimal analysis of samples from multiple equilibrium states," Journal of Chemical Physics **129**, 124105 (2008), arXiv:0801.1426.
* [71] "WHAM, [https://github.com/apallath/wham](https://github.com/apallath/wham), Accessed: 2023-03-20,".
* [72] Mark E Tuckerman and Dominik Marx, "Heavy-Atom Skeleton Quantization and Proton Tunneling in "Intermediate-Barrier" Hydrogen Bonds," Physical Review Letters **86**, 4946-4949 (2001).
* [73] Ales Cahlfk, Jack Hellerstedt, Jesus I. Mendieta-Moreno, Martin Svec, Vijai M. Santhini, Simon Pascal, Diego Soler-Polo, Sigurdur I. Erlingsson, Karel Vyborny, Ping Mutombo, Ondrej Marsalek, Olivier Siri, and Pavel Jelinek, "Significance of Nuclear Quantum Effects in Hydrogen Bonded Molecular Chains," ACS Nano **15**, 10357-10365 (2021), arXiv:2007.14657.
* [74] Zorka Smedarchina, Willem Siebrand, Antonio Fernandez-Ramos, and Ruben Meana-Paneda, "Mechanisms of double proton transfer. Theory and applications," Zeitschrift fur Physikalische Chemie **222**, 1291-1309 (2008).
* [75] Johannes Kastner, "Theory and simulation of atom tunneling in chemical reactions," WIREs Computational Molecular Science **4**, 158-168 (2014).
**Supporting information for: Reducing the cost of neural network potential generation for reactive molecular systems**
Krystof Brezina, Hubert Beck, and Ondrej Marsalek
Charles University, Faculty of Mathematics and Physics, Ke Karlovu 3, 121 16 Prague 2, Czech Republic

(Dated: 29 March 2023)
## S1 Additional theory details
The following section contains additional details on the theory discussed in the main text. First, the derivation of the effective quantum temperature of the quantum harmonic oscillator is presented. This is followed by a discussion of the density-matching condition needed in the TTS procedure for appending thermal NMS around the minima at the MEP endpoints.
### Quantum effective temperature
To derive the expression for the quantum effective temperature, we start from the standard Hamiltonian for a 1D harmonic oscillator in a mass-weighted coordinate \(\Omega\) with the natural frequency \(\omega\)
\[\widehat{H}=-\frac{\hbar^{2}}{2}\frac{\mathrm{d}^{2}}{\mathrm{d}\Omega^{2}}+ \frac{1}{2}\omega^{2}\Omega^{2}.\] (S1)
The \(n\)-th solution for the corresponding time-independent Schrodinger equation (\(n=0,1,2,\ldots,\infty\)) is in the well-known form based on the physicist's Hermite polynomials \(H_{n}\)
\[\psi_{n}(\Omega)=\left(\frac{1}{2^{n}n!\sqrt{\pi}}\right)^{\frac{1}{2}}\left( \frac{\omega}{\hbar}\right)^{\frac{1}{4}}e^{-\frac{\omega\Omega^{2}}{2\hbar}} H_{n}\left(\sqrt{\frac{\omega}{\hbar}}\Omega\right),\] (S2)
which can be simplified to
\[\psi_{n}(y)=\left(\frac{\alpha}{2^{n}n!\sqrt{\pi}}\right)^{\frac{1}{2}}e^{- \frac{y^{2}}{2}}H_{n}(y)\] (S3)
with the substitutions \(\alpha=\sqrt{\frac{\omega}{\hbar}}\) and \(y=\alpha\Omega\). The corresponding energy levels are
\[E_{n}=\hbar\omega\left(n+\frac{1}{2}\right).\] (S4)
Ultimately, we are interested in the thermal density of the harmonic oscillator \(\rho(y)\), which is formally obtained as the diagonal elements of the full density matrix \(\rho(y,y^{\prime})\). A formula for the density matrix in terms of an infinite sum over all states can be obtained as
\[\begin{split}\rho(y,y^{\prime})&=\bra{y^{\prime}}e^{-\beta\widehat{H}}\ket{y}\\ &=\sum_{n,m}\braket{y^{\prime}|\psi_{n}}\bra{\psi_{n}}e^{-\beta\widehat{H}}\ket{\psi_{m}}\braket{\psi_{m}|y}\\ &=\sum_{n}\psi_{n}^{*}(y^{\prime})\psi_{n}(y)e^{-\beta E_{n}}\\ &=e^{-\frac{\beta\hbar\omega}{2}}\frac{\alpha}{\sqrt{\pi}}\sum_{n}\frac{1}{2^{n}n!}e^{-\beta\hbar\omega n}\\ &\quad\cdot e^{-\frac{y^{2}+y^{\prime 2}}{2}}H_{n}(y)H_{n}(y^{\prime}),\end{split}\] (S5)
which can be closed using the so-called Mehler kernel [11]
\[\sum_{n}\frac{\chi^{n}}{2^{n}n!}H_{n}(x)H_{n}(y)=\frac{1}{\sqrt{1-\chi^{2}}}e^{-\frac{\chi^{2}(x^{2}+y^{2})-2\chi xy}{1-\chi^{2}}},\] (S6)
a mathematical identity for a parameter \(\chi\) that allows summing over the above product of Hermite polynomials. With this, one obtains
\[\begin{split}\rho(y,y^{\prime})&=e^{-\frac{\beta\hbar\omega}{2}}\frac{\alpha}{\sqrt{\pi}}e^{-\frac{y^{2}+y^{\prime 2}}{2}}\frac{1}{\sqrt{1-e^{-2\beta\hbar\omega}}}\\ &\quad\cdot e^{-\frac{e^{-2\beta\hbar\omega}\left(y^{2}+y^{\prime 2}\right)-2e^{-\beta\hbar\omega}yy^{\prime}}{1-e^{-2\beta\hbar\omega}}}.\end{split}\] (S7)
The thermal density is then simply obtained by setting \(y=y^{\prime}\), which gives
\[\begin{split}\rho(y)&=e^{-\frac{\beta\hbar\omega}{2}}\frac{\alpha}{\sqrt{\pi}}\frac{1}{\sqrt{1-e^{-2\beta\hbar\omega}}}\\ &\quad\cdot e^{-y^{2}\left[1+\frac{2e^{-\beta\hbar\omega}\left(e^{-\beta\hbar\omega}-1\right)}{1-e^{-2\beta\hbar\omega}}\right]}\end{split}\] (S8)
after trivial rearrangements. As discussed in the main text, the thermal density is Gaussian with an \(\omega\)-dependent normalization factor (first line of Equation S8) and a non-trivial scaling of the exponent. The latter will be rearranged to extract the quantum effective temperature from Equation 3 of the main text. Substituting \(t=e^{-\beta\hbar\omega}\), we get
\[\begin{split}& 1+\frac{2e^{-\beta\hbar\omega}\left(e^{-\beta \hbar\omega}-1\right)}{1-e^{-2\beta\hbar\omega}}=1+\frac{2t(t-1)}{1-t^{2}}\\ &=\frac{t^{2}-2t+1}{1-t^{2}}=\frac{1-t}{1+t}=\frac{1-e^{-\beta \hbar\omega}}{1+e^{-\beta\hbar\omega}}\\ &=\tanh\left(\frac{\beta\hbar\omega}{2}\right),\end{split}\] (S9)
which gives
\[\rho(\Omega)\propto e^{-\frac{\omega}{\hbar}\tanh\left(\frac{\beta\hbar\omega}{2} \right)\Omega^{2}}\] (S10)
for the thermal density in the original coordinates. Comparing the hyperbolic tangent factor with the form of a classical distribution (\(\rho_{\mathrm{cl}}(\Omega)\propto e^{-\beta V}=e^{-\frac{1}{2}\beta\omega^{2}\Omega^{2}}\)) of the harmonic oscillator at a new inverse temperature \(\beta^{*}\), we immediately get
\[\begin{split}&\frac{\omega}{\hbar}\tanh\left(\frac{\beta\hbar \omega}{2}\right)\equiv\frac{1}{2}\beta^{*}\omega^{2}\\ &\beta^{*}(\beta,\omega)=\frac{2}{\hbar\omega}\tanh\left(\frac{ \beta\hbar\omega}{2}\right),\end{split}\] (S11)
as stated in the main text. Since the hyperbolic tangent approaches unity as its argument grows, it is always lower-valued than a linear function and thus always \(\beta^{*}<\beta\). In turn, this means that the quantum harmonic oscillator is always at an effective temperature higher than the reference classical temperature and its density is thus always wider. Moreover, note that the classical limit \(\beta^{*}\rightarrow\beta\) is reached as \(\beta\to 0\) or \(\omega\to 0\): at a given temperature, the higher frequency modes will have a more pronounced quantum effect and, in turn, at a given frequency, the quantum effect will be more pronounced at a lower temperature.
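The relation in Equation S11 is straightforward to evaluate numerically. The following is a minimal Python sketch, assuming atomic units (\(\hbar=1\)) and purely illustrative values of \(\beta\) and \(\omega\); it verifies that \(\beta^{*}<\beta\) for any finite frequency and that the classical limit is recovered as \(\omega\to 0\).

```python
import numpy as np

HBAR = 1.0  # atomic units (assumption for this sketch)

def beta_star(beta, omega):
    """Quantum effective inverse temperature of a harmonic mode (Eq. S11)."""
    return 2.0 / (HBAR * omega) * np.tanh(beta * HBAR * omega / 2.0)

beta = 10.0                      # illustrative classical inverse temperature
for omega in (0.01, 0.1, 1.0):   # illustrative mode frequencies
    bs = beta_star(beta, omega)
    assert bs < beta             # quantum density is always wider than classical
    print(f"omega = {omega:5.2f}  ->  beta* = {bs:7.4f}")

# Classical limit: beta* -> beta as beta * hbar * omega -> 0
assert np.isclose(beta_star(beta, 1e-8), beta)
```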
### Density matching
An obvious technical problem with appending the thermal NMS at the minima at the MEP endpoints is the matching of the sampling density so that it is a smooth continuation of the linear MEP sampling (as indicated in Figure 1 of the main text). For practical purposes, one can achieve sufficient smoothness by separately choosing a linear density for the MEP sampling and a fixed number of points that are sampled thermally around the minima by a trial-and-error approach. However, this does not lead to an analytically smooth transition. That can be achieved by integrating the thermal Gaussian distributions at the minima over all dimensions but the one defined by the tangent vector to the MEP \(\mathbf{\tau}\) at the minimum and finding a scaling factor that brings the density at the maximum of the Gaussian equal to the linear density \(\rho_{0}\) of sampling along the MEP, yielding an analytic continuation. Rather inconveniently from a mathematical perspective, the MEP tangent \(\mathbf{\tau}\) is, in general, not identifiable with any of the normal modes \(\mathbf{\Omega}_{i}\), which diagonalize the Gaussian distribution. Therefore, to perform the integration over everything but the \(\tau\)-direction, one needs to set up a new orthogonal basis built around this vector in which, however, the Gaussian is no longer diagonal. Still, the result of such integration will be a 1D Gaussian and its width will be given by the effective frequency \(\widetilde{\omega}\)
\[\rho(\tau)=\int\rho(\tau,\tau_{2},\ldots,\tau_{3N})\ \mathrm{d}\tau_{2} \ldots\mathrm{d}\tau_{3N}\propto e^{-\frac{1}{2}\beta\widetilde{\omega}^{2} \tau^{2}}.\] (S12)
It can be shown (with the full proof being outside of the scope of the present work), that this effective frequency can be calculated from the following compact formula
\[\widetilde{\omega}^{2}=\frac{\det(\mathbb{H})}{\det_{\tau,\tau}(\mathbb{H})},\] (S13)
where \(\mathbb{H}\) is the Hessian matrix of the potential at the minimum and the symbol \(\det_{\tau,\tau}\) denotes the minor of the Hessian matrix taken over the row and the column corresponding to the \(\tau\)-dimension. Note that, unlike the full determinant, the minor is a basis-dependent operation and, therefore, the Hessian matrix must be transformed to the \(\tau\)-oriented basis prior to the calculation of \(\widetilde{\omega}\). The effective standard deviation of the 1D distribution is \(\widetilde{\sigma}=1/\sqrt{\beta\widetilde{\omega}^{2}}\). The aim at this point is to find a scaling factor \(\alpha\) which ensures that the density at the configurational minimum \(\alpha\rho(0)\) matches \(\rho_{0}\). However, one cannot simply equate the two values directly, since the \(\rho(\tau)\) comes from a mass-weighted coordinate system (note that the \(\mathbf{\tau}\)-based basis was obtained as a simple rotation of the normal mode basis, which itself is naturally mass-weighted) and \(\rho_{0}\) is understood as a density along the non-weighted Cartesian MEP curve. Therefore, we need to calculate the effective mass \(\mu\) of the \(\tau\)-direction, which is the following weighted average
\[\mu=\frac{\mathbf{\tau}^{T}\mathbb{M}\mathbf{\tau}}{\mathbf{\tau}^{T}\mathbf{\tau}},\] (S14)
where \(\mathbb{M}\) is the diagonal mass matrix. Then, we obtain the analytic expression
\[\alpha\rho(0)=\frac{\alpha}{\widetilde{\sigma}\sqrt{2\pi}}=\sqrt{\frac{1}{\mu} }\rho_{0}\] (S15)
which immediately gives the crucial density-matching condition
\[\alpha=\widetilde{\sigma}\sqrt{\frac{2\pi}{\mu}}\rho_{0}.\] (S16)
The scaling parameter \(\alpha\) thus captures how many times the original normalized density \(\rho(\tau)\) needs to be increased in order to match at its peak the sampling density at the MEP. In practice, where the sampling of configurations is discrete and finite, one needs to sample the nearest-integer-to-\(\alpha\) points to match the sampling density at the MEP.
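To make the procedure concrete, the sketch below implements Equations S13-S16 in Python for a toy two-dimensional minimum. The Hessian, mass matrix, tangent vector, inverse temperature, and linear MEP density used here are illustrative placeholders, not values from the simulations in this work.

```python
import numpy as np

def matching_factor(hessian, masses, tau, beta, rho0):
    """Density-matching scaling factor alpha (Eqs. S13-S16).

    hessian : Hessian at the minimum, in mass-weighted coordinates
    masses  : diagonal mass matrix M
    tau     : MEP tangent vector at the minimum
    beta    : (effective) inverse temperature
    rho0    : linear sampling density along the MEP
    """
    tau = tau / np.linalg.norm(tau)
    n = tau.size
    # Orthonormal basis whose first vector is tau (QR keeps the first column).
    basis, _ = np.linalg.qr(np.column_stack([tau, np.random.randn(n, n - 1)]))
    h_tau = basis.T @ hessian @ basis          # Hessian in the tau-oriented basis
    minor = np.delete(np.delete(h_tau, 0, axis=0), 0, axis=1)
    omega_eff_sq = np.linalg.det(h_tau) / np.linalg.det(minor)   # Eq. S13
    sigma_eff = 1.0 / np.sqrt(beta * omega_eff_sq)
    mu = (tau @ masses @ tau) / (tau @ tau)                      # Eq. S14
    return sigma_eff * np.sqrt(2.0 * np.pi / mu) * rho0          # Eq. S16

# Illustrative 2D example
hessian = np.array([[2.0, 0.3], [0.3, 1.0]])
masses = np.diag([1.0, 12.0])
alpha = matching_factor(hessian, masses, np.array([1.0, 0.5]), beta=1.0, rho0=5.0)
print(round(alpha))  # nearest integer: number of thermal samples to append
```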
## S2 Additional results
### TTS of malonaldehyde proton transfer
The top panel of Figure S1 shows the distribution of the TTS geometries of malonaldehyde obtained at 300 K
using the classical harmonic distributions in the mode directions perpendicular to the proton-sharing MEP. Three control points were selected for the TTS: the two symmetric minima at the edges of the MEP and the transition state at \(\delta=0\) Å. Each candidate geometry is coloured by the assignment to its control point, following the methodology described in Section II of the main text and illustrated in Figure 1 therein. The bottom panel of Figure S1 then shows the same situation but using the quantum formulation of TTS, in which \(\beta^{*}\) is calculated for each degree of freedom. The use of the quantum densities leads to the characteristic broadening of the distribution of the geometries.
### Path integral MD for malonaldehyde
The process described in the main paper for the classical MD simulation can be carried out analogously for PIMD. TTS was used to generate structures along the proton-sharing MEP at an effective temperature of 300 K, and 620 structures were selected during one QbC run for the initial model. Similar to the classical model, force disagreement during PIMD was low around the two free-energy minima and along the MEP, but increased at the tails of the distribution, as can be seen in the left panel of Figure S2. To improve the MLP in these areas of the configuration space, two strategies were pursued: an iterative process, in which PIMD simulations and QbC processes with the structures of the PIMD trajectory as candidates are alternated until force disagreement is low along the whole PIMD trajectory; and a single QbC run, in which TTS is used to create candidate structures along the MEP of the proton-transfer reaction as well as the MEP of the torsion of the C–C single bond of the propane backbone up to the transition state, to include the strongly anharmonic vibrations in this direction of the configuration space. The distribution of the force disagreement for the second-generation MLP of the iterative process can be seen in the middle panel of Figure S2. Unlike the classical case in the main paper, a single additional PIMD–QbC sequence, which added 435 structures to the training data set, was sufficient to obtain an MLP capable of accurate PIMD simulations. The right panel of Figure S2 shows the results from the extended TTS MLP. As in the classical case, the resulting model gives a low committee disagreement along the whole PIMD trajectory.
### Validation of malonaldehyde MLPs
In the main text of the paper, the energy and force RMSEs on an independent test set for the final models were reported. The complete set of validation errors can be seen in Table S1. While the later generations of the iterative approach and the extended TTS model performed well, the initial model performed poorly. However, the errors averaged over the whole test set tell only a part of the story, because they are not evenly distributed for all MLPs. As shown in Figure S3, the first-generation model displays low force errors for most structures with a low \(d_{\mathrm{O-O}}\), which accounts for structures along the proton-sharing transition and the minima, but exceedingly large errors of up to 603 meV (energy) and 3908 \(\mathrm{meV\,\AA}^{-1}\) (forces) for \(d_{\mathrm{O-O}}>2.8\) Å. Due to the anharmonicity of the system, no structures from this region of the configuration space are included in the TTS along the proton-sharing coordinate, and hence such structures are absent from the training data set. Therefore, the model is not suitable for accurate MD simulations. The same effect can be observed in the second-generation model, although less severely. The final models of both approaches, on the other hand, exhibit an even distribution of force RMSEs across the complete span of oxygen–oxygen distances.
### Overlap of umbrella windows in DABQDI
Figure S4 shows that the histograms of \(\delta_{1}\) values from the individual windows of the umbrella sampling simulation of the proton transfer in the DABQDI molecule exhibit sufficient overlap. Therefore, the combination of window spacing and harmonic restraint stiffness selected to obtain the results shown in Figure 7 of the main text represents an appropriate choice.
2304.14698 | X-RLflow: Graph Reinforcement Learning for Neural Network Subgraphs Transformation | Guoliang He, Sean Parker, Eiko Yoneki | 2023-04-28T09:06:18Z | http://arxiv.org/abs/2304.14698v1

# X-RLflow: Graph Reinforcement Learning for Neural Network Subgraphs Transformation
###### Abstract
Tensor graph superoptimisation systems apply a sequence of subgraph substitutions to neural networks to find the optimal computation graph structure. Such a graph transformation process naturally falls into the framework of sequential decision-making, and existing systems typically employ a greedy search approach, which cannot explore the whole search space as it cannot tolerate a temporary loss of performance. In this paper, we address the tensor graph superoptimisation problem by exploring an alternative search approach, reinforcement learning (RL). Our proposed approach, **X-RLflow**, can learn to perform neural network dataflow graph rewriting, substituting one subgraph at a time. **X-RLflow** is based on a model-free RL agent that uses a graph neural network (GNN) to encode the target computation graph and outputs a transformed computation graph iteratively. We show that our approach can outperform state-of-the-art superoptimisation systems over a range of deep learning models, by up to \(40\%\) on those that are based on transformer-style architectures.
## 1 Introduction
Recent modern software has key components that are underpinned by machine learning models, specifically, deep neural networks (DNNs). Over the past decade, there has been a focus on developing frameworks that provide tools using which we can design, train and evaluate these deep learning models.
A common internal representation for neural networks inside deep learning frameworks is that of a computation graph; a directed acyclic graph where nodes represent a specific computation and edges the paths where data is transferred. With the graph representation, frameworks such as TensorFlow (Abadi et al., 2015, 2016) and PyTorch (Paszke et al., 2019) apply optimisations to reduce computation resources during inference.
Currently, the majority of graph-level optimisations in deep learning frameworks are performed using manually defined heuristics. For example, TensorFlow, TensorRT (NVIDIA, 2017), and TVM (Chen et al., 2018) perform fusion to a computation graph by using rule-based strategies. While such heuristics apply to current architectures, as network design is consistently evolving, new rules are being constantly discovered and managing heuristics quickly becomes unwieldy.
To mitigate this problem, recent work, namely TASO (Jia et al., 2019), has shown an automatic cost-based search can replace heuristics to perform tensor graph rewriting. TASO first generates a set of rewrite rules by enumerating operators and then applies the generated rewrite rules to tensor programs via a backtracking search. However, such a backtracking search approach may not fully explore the potential search space due to the lack of planning in cost-based optimisation. As a step towards resolving the planning issue, this work explores the use of reinforcement learning (RL). RL is an area of machine learning in which an agent learns to act optimally, given a state and a suitable reward function, through interactions with an environment.
In this paper, we introduce **X-RLflow**, a tensor graph superoptimiser that uses reinforcement learning (RL) to automate tensor graph transformation. RL is known to be a better search methodology than backtracking search because it plans for long-term reward, and thus it is more likely to discover the globally optimal tensor graph structure.
Applying RL to the tensor graph superoptimisation domain requires non-trivial modification because the system takes the target computation graph as input and outputs a better candidate that is transformed by rewrite rules. As multiple rewrite rules are applicable at every iteration, choosing the best one is difficult because the choice also determines how the tensor graph can be transformed in subsequent iterations. As a result, the agent must learn to act optimally at each iteration, making the decision that is not only good for the current iteration but also good for the long term. Specifically, the current computation graph and all potential substitution
candidates are encoded via a graph neural network (GNN) to a representation and the graph representation is later fed into a policy network and a value network respectively to produce the action probability and value estimate, as in the standard actor-critic framework (Mnih et al., 2016). This process is performed iteratively until no rewrite rules are applied or the agent outputs a No-Op action, which then terminates the tensor graph superoptimisation process. The final optimised graph runs end-to-end inference to measure its inference latency.
The use of RL for tensor graph superoptimisation also enables using end-to-end inference latency as the feedback signal. We find that the results from cost modelling can deviate from the end-to-end inference latency by up to \(24\%\), and thus using cost modelling as the feedback signal to guide the transformation process may lead to sub-optimal results. On the other hand, running end-to-end inference causes significant measurement overhead, but the overhead can be amortised by evaluating sparsely. RL by design can work well in a sparse or delayed reward scenario, and it can still learn to maximise the long-term reward.
We also find that X-RLflow offers the ability to generalise. This is because the policy of RL is parametrised by a neural network, and it can be reused for performing inference once trained. We show that X-RLflow can generalise to various tensor shapes after it is trained in a static tensor shape environment. X-RLflow is available as open source 1.
Footnote 1: [https://github.com/ucamrl/xrlflow.git](https://github.com/ucamrl/xrlflow.git)
To summarise, our contributions are:
* We design an RL agent and environment for automatically selecting a sequence of subgraph transformation.
* Our approach works well for a wide range of deep learning models and is especially powerful for transformer-style architectures demonstrated by outperforming state-of-the-art methods by up to \(40\%\).
* We provide a detailed discussion and analysis of our solution as well as a comparison to the state-of-the-art methods in published literature.
* This work, to the best of our knowledge, is the first that has applied reinforcement learning in the tensor graph structure superoptimisation domain.
## 2 Background and Motivation
### Computation graphs for neural networks
To enable performance optimisation for DNNs, deep learning frameworks and compilers represent DNNs as dataflow/computation/tensor graphs, where tensor operations become nodes, and tensors are edges. Figure 1 shows how a dense linear layer, \(y=\texttt{ReLU}(\mathbf{w}\cdot\mathbf{x}+b)\), can be represented as a computation graph. The terms dataflow graph, computation graph and tensor graph are used interchangeably in deep learning frameworks.
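As an illustration of this representation, the snippet below encodes the dense layer of Figure 1 as a small Python data structure; the node and edge naming scheme and the tensor shapes are illustrative only, not TASO's actual API.

```python
# Nodes are tensor operators; directed edges carry tensors between them.
# Encodes y = ReLU(w . x + b) with illustrative shapes.
dense_layer_graph = {
    "nodes": {
        "matmul": {"op": "MatMul"},
        "add":    {"op": "Add"},
        "relu":   {"op": "ReLU"},
    },
    "edges": [  # (source, destination, tensor shape)
        ("input:x",  "matmul", (1, 256)),
        ("weight:w", "matmul", (256, 128)),
        ("matmul",   "add",    (1, 128)),
        ("bias:b",   "add",    (1, 128)),
        ("add",      "relu",   (1, 128)),
    ],
}
```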
### Tensor graph structure superoptimisation systems
With the graphical intermediate representation (IR), tensor graph superoptimisation systems attempt to perform subgraph substitution, aiming to reduce the end-to-end inference latency of target DNNs.
#### 2.2.1 Existing systems
TASO (Jia et al., 2019) is the first system dedicated to tensor graph structure superoptimisation. It first generates a set of rewrite rules by enumerating a list of pre-defined operators and then performs optimisation by substituting the subgraph of the target DNN. During the optimisation phase, TASO uses a cost model to rank all candidates and greedily chooses the best candidate to proceed to the next iteration.
PET (Wang et al., 2021) further builds on TASO and explores partially equivalent subgraph transformation. The rewrite rules developed in TASO only consider fully equivalent subgraph substitution, and PET relaxes this assumption by allowing non-equivalent substitution. Correction kernels are automatically generated to ensure end-to-end equivalency after the substitution.
Tensat (Yang et al., 2021) uses the same superoptimisation systems as TASO, but replaces TASO's optimiser with an equality saturation (Tate et al., 2011) based optimiser. It uses the E-graph data structure to represent many potential graph IRs simultaneously and then extracts the optimal IR from the E-graph.
#### 2.2.2 Limitations
Existing systems have employed different approaches to find the optimal tensor graph structure, but each comes with its limitations.
TASO's substitution engine finds the best candidate at each iteration of the transformation, but as the globally optimal tensor graph structure may not be the best option at each iteration, TASO is likely to miss out on the globally optimal tensor graph. Moreover, the performance of candidates is evaluated via the cost model, which assumes the summation of individual operator runtimes is the same as the end-to-end inference latency. However, as shown in Table 1, the discrepancy between the measurement of the cost model and the actual end-to-end inference latency can be up to \(24\%\). This indicates the best candidate ranked by the cost model may not be the best choice for actual deployment.

Figure 1: The graphical representation of a dense linear layer, where tensor operators are nodes and the directed edges show the flow of tensors through the graph.
PET performs substitution by considering partially equivalent transformation. However, it adopts the same cost modelling principle as TASO, and therefore it suffers from the same cost modelling problem. PET does not provide an end-to-end inference interface in its artefact evaluation, so it is impossible to measure the discrepancy in this case. We also find that PET ignores all element-wise operators' runtime, which may exacerbate the cost modelling problem.
We further compare PET and TASO on two similar DNNs, but their performances are very different. As shown in Table 2, PET outperforms TASO in ResNet-18 but falls short in ResNext-50. We hypothesise this result is because PET's partially equivalent transformation is very sensitive to the shape of operators. Choosing the right operator shapes may bring significant improvement to partially equivalent transformation, and PET's paper also mentions a larger batch size offers more optimisation opportunities. However, understanding when partially equivalent transformation performs well is beyond the scope of this paper, and as a result, we will focus on TASO in this work.
Tensat employs equality saturation (Tate et al., 2011), which leverages the E-graph data structure to compactly represent many potential tensor graphs at the same time. Although in theory a saturated E-graph can represent all possible IRs, in practice the E-graph is never saturated, for several reasons. First, a saturated E-graph can be too large to fit into memory, so the E-graph is usually capped at \(10000\) nodes. Second, a large E-graph takes a long time to extract the optimal IR from, and extracting IRs from a very large E-graph is non-trivial. As a result, Tensat's E-graph is not saturated, and it cannot guarantee that its optimised tensor graph structure is optimal.
### Reinforcement learning
#### 2.3.1 Reinforcement learning basics
We propose to use Reinforcement learning (RL) to tackle the tensor graph structure superoptimisation problem. RL aims to compute a control policy such that an agent can maximise its cumulative reward from the environment. The agent will learn to discover the optimal strategy via a single reward signal.
Formally, RL is a class of learning problems that can be framed as a Markov decision process (MDP) (Bellman, 1957); they are represented as a 5-tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{P}_{a},\mathcal{R}_{a},\rho_{0}\rangle\) where:
* \(\mathcal{S}\), is a finite set of valid states
* \(\mathcal{A}\), is a finite set of valid actions
* \(\mathcal{P}_{a}\), is the transition probability function, giving the probability that taking action \(a\) in state \(s_{t}\) leads to state \(s_{t+1}\)
* \(\mathcal{R}_{a}\), is the reward function; it returns the reward from the environment for transitioning from state \(s_{t}\) to \(s_{t+1}\) via action \(a\)
* \(\rho_{0}\), is the starting state distribution
We aim to compute a policy, denoted by \(\pi\), that when given a state \(s\in\mathcal{S}\), returns an action \(a\in\mathcal{A}\) with the optimisation objective being to find a control policy \(\pi^{*}\) that maximises the _expected reward_ from the environment as Equation (1).
\[\pi^{*}=\operatorname*{arg\,max}_{\pi}\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{ t}\mathcal{R}_{t}] \tag{1}\]
Classic RL problems are formulated as MDPs in which we have a finite state space. However, such methods quickly become inefficient with large state spaces for applications such as Atari (Mnih et al., 2015; Kaiser et al., 2020) and Go (Silver et al., 2018). Therefore, we take advantage of modern deep learning function approximators, such as neural networks, which makes learning the solutions far more efficient in practice. We have seen many successful applications in a wide range of fields, for example, robotic control tasks (OpenAI et al., 2019), data centre power management, device placement (Addanki et al., 2019; Mirhoseini et al., 2018), and playing both perfect and imperfect information games to a super-human level (Silver et al., 2016; 2018).
| DNNs | Cost model | E2E | Diff (%) |
| --- | --- | --- | --- |
| DALL-E | 1.8269 | 1.7324 | 5.2% |
| InceptionV3 | 8.3650 | 9.2098 | 10.1% |
| BERT | 1.0453 | 1.1264 | 7.8% |
| SqueezeNet | 1.3082 | 1.4006 | 7.1% |
| ResNext-50 | 6.1545 | 7.6498 | 24% |
| T-T | 2.4828 | 2.7281 | 9.9% |

Table 1: Discrepancy between TASO’s cost model estimates and TASO’s end-to-end inference latency on some unoptimised DNNs. E2E stands for end-to-end inference latency. Time is measured in milliseconds.
| | ResNet-18 | ResNext-50 |
| --- | --- | --- |
| PET | 1.9619 | 10.6694 |
| TASO | 2.5534 | 6.6453 |

Table 2: Comparison of the optimised graph inference latency between PET and TASO on ResNet-18 and ResNext-50. Time is measured in milliseconds.
#### 2.3.2 Graph reinforcement learning
Applying RL to the tensor graph superoptimisation domain cannot be accomplished by classic RL algorithms. This is because computation graphs are naturally graph-structured data, with relationships that cannot be expressed in Euclidean space. Fortunately, graph neural networks (GNNs) have been proposed to address learning on graph data, and we provide a high-level overview of commonly used GNNs.
Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016) are among the simplest yet most powerful GNNs; they update each node's features by message passing with its neighbouring nodes' features. While GCNs are effective at learning graph data, they weight all neighbouring nodes equally. Graph Attention Networks (GATs) (Velickovic et al., 2018) were later proposed to address this problem by learning to assign weights to neighbouring nodes. As a result, GATs are more expressive and are adopted in this work as a component of the system.
### Motivation for RL
Our motivation for using RL in the tensor graph superoptimisation domain comes from several aspects.
First, RL can tolerate short-term performance decreases to maximise long-term reward. This is important for tensor graph superoptimisation because the optimised graph is obtained by applying a sequence of subgraph substitutions, and the globally optimal tensor graph structure may not be the best candidate at every iteration. As a greedy substitution engine only considers the best candidate, it is likely to miss the globally optimal solution. On the contrary, it has been shown in the literature (Mnih et al., 2015; Kaiser et al., 2020; Silver et al., 2018) that RL can learn to tolerate short-term loss and maximise the long-term episodic reward.
Second, we want to bypass the cost modelling issue and only use the end-to-end inference latency as the feedback signal to choose among potential candidates. We believe this is necessary for machine learning systems since there are multiple IRs for DNNs during progressive lowering and there is optimisation being done for each IR layer. As a result, optimisation that takes place at a higher layer IR is not aware of the optimisation in lower layer IRs. Therefore, cost modelling at a specific layer is not sufficient to form a good feedback signal.
Using end-to-end inference latency is challenging for equality saturation-based methods, because extracting from the E-graph requires per-node cost modelling; for greedy search-based methods, it brings significant overhead to rank all candidates at each iteration. We find RL a good methodology because by design it can work in a sparse or delayed reward scenario. In our design, we perform an end-to-end inference every \(N\) iterations, where \(N\) is a hyper-parameter that controls the trade-off between the frequency of reward signals and the measurement overhead. We argue that the accuracy and measurement overhead trade-off exists widely in performance optimisation, and the use of RL enables controlling this trade-off.
Finally, RL offers a generalisation ability. Deep learning systems often assume statically known tensor shapes and tensor graphs for optimisation, and if either of the two changes, the optimisation must start from scratch. However, there is a practical need to change the tensor shapes. For example, a language model may need to be compiled multiple times independently because its input text length may vary, and so do its corresponding tensor shapes. An RL agent should be able to generalise to various tensor shapes, because varying the tensor shape leaves the tensor graph structure unchanged. Therefore, we only need to train the RL agent once and let it generalise to various tensor shapes of the target DNN.
There is recent work on model-based RL that offers better generalisation ability. For example, the world model (Ha & Schmidhuber, 2018) learns the dynamics of the environment, such that the agent can be trained in a latent space. The main benefit of the world model is sample efficiency because the agent does not need to interact with the actual environment. Moreover, the world model enables planning before making decisions. Unfortunately, learning a world model is difficult and an imperfect world model may deviate from the actual environment too much, such that even if the agent learns to act optimally in the world model, it fails to achieve good performance in reality. As such, we focus on model-free agents in this work.
GO (Zhou et al., 2020) is a model-free agent that tries to generalise RL to unseen tensor graph structures. To do this, the authors train RL agents on multiple graphs and evaluate them on held-out graphs. This is much more computationally demanding, so we only focus on generalisation to tensor shapes in this work.
## 3 X-RLflow
In this section, we introduce X-RLflow, a tensor graph superoptimiser that uses RL for automatic subgraph substitutions. X-RLflow encapsulates the tensor graph transformation process as the environment transition, and thus RL can iteratively transform the computation graph. We provide a detailed description of each component of X-RLflow in this section.
### Computation graphs
We use the same computation graph representation as in TASO. That is, users can manually define the computation graph via TASO's programming interface, or load
pre-trained models into the system. Pre-trained models from existing tensor frameworks, such as TensorFlow (Abadi et al., 2015; 2016), PyTorch (Paszke et al., 2019) and MXNet (Chen et al., 2015), can be converted to the unified ONNX format (Bai et al., 2019), which can then be parsed into TASO's computation graph representation. After the superoptimisation, users can export the optimised graph back to the ONNX format and deploy it to different backends.
### Graph rewrite rules
The tensor graph rewrite rules are generated by TASO's generator. These rules are generated before the optimisation phase, by enumerating a list of primitive operators up to a constant size, and they are serialised to a text file. At the beginning of the optimisation phase, the rewrite rules are deserialised from the text file and activated. There are \(150\) rewrite rules in total; an example of a graph rewrite rule is shown in Figure 2.
Given a target DNN, a candidate will be generated by pattern-matching a rewrite rule to the target computation graph. At each iteration of the optimisation, there are typically multiple matching rules to multiple locations of the target computation graph, and therefore multiple candidates are generated. Those candidates are put into a cache, and X-RLflow selects one candidate as the transformation applied to the target DNN. More details about the selection process are provided in the next section.
### Reinforcement learning formulation
#### 3.3.1 System environment
We encapsulate the tensor graph transformation process as the environment transition in the standard RL formulation. There exist open-source environment implementations that provide standardised APIs for RL agents, such as OpenAI Gym (Brockman et al., 2016), in which users can extend and write their own environment transition logic. In this work, we design an environment that follows the OpenAI Gym API standard: a step() function hides the complexity of the graph transformation process and exposes a unified API to RL agents, and a reset() function resets the transformation process back to the initial stage. These are the essential functions for the environment transition. Note that this environment is reusable by diverse current and future RL algorithms. A minimal sketch of such an environment is shown below.
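The following is a skeletal Python sketch of this environment, following the Gym step/reset convention. The `backend` object stands in for TASO's substitution engine, and its method names (`candidates`, `apply`, `latency`) are hypothetical placeholders, as is the `reward_fn` callback interface (discussed in Section 3.3.3).

```python
class GraphRewriteEnv:
    """Gym-style environment wrapping a tensor-graph substitution backend.

    `backend` is a hypothetical stand-in for TASO's rewrite engine; its
    methods `candidates(g)`, `apply(g, i)` and `latency(g)` are placeholders.
    A user-registered `reward_fn` computes the reward (see Section 3.3.3).
    """

    def __init__(self, backend, init_graph, reward_fn, measure_every=5):
        self.backend, self.init_graph = backend, init_graph
        self.reward_fn, self.measure_every = reward_fn, measure_every

    def reset(self):
        self.graph, self.t = self.init_graph, 0
        self.rt_init = self.rt_prev = self.backend.latency(self.graph)
        return self._observe()

    def step(self, action):
        candidates = self.backend.candidates(self.graph)
        if action == len(candidates):       # No-Op action: terminate the episode
            return self._observe(), 0.0, True, {}
        self.graph = self.backend.apply(self.graph, action)
        self.t += 1
        done = not self.backend.candidates(self.graph)
        measured = (self.t % self.measure_every == 0) or done
        rt_curr = self.backend.latency(self.graph) if measured else self.rt_prev
        reward = self.reward_fn(self.rt_prev, rt_curr, self.rt_init, measured)
        self.rt_prev = rt_curr
        return self._observe(), reward, done, {}

    def _observe(self):
        # Meta-graph: current graph batched with all candidates (Section 3.3.2)
        return [self.graph] + self.backend.candidates(self.graph)
```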
We made use of the work by Jia et al. (2019), who provide an open-source version of TASO, as part of the backend of the environment, where the rewrite-rule pattern matching and tensor graph transformation take place. At each iteration of the environment transition, the environment takes as input a computation graph and applies TASO's rewrite rules to all possible locations of the graph via pattern matching, which generates a list of potential candidates. These potential candidates become the observation for the RL agent.
#### 3.3.2 State-Action space
The environment generates a list of transformed candidates at each iteration of the optimisation, and the agent makes an action, which is mapped to the index of a potential candidate, indicating the choice of the agent. The selected candidate becomes the graph for the next iteration. This process is repeated until there are no more available candidates, or the agent outputs a No-Op to indicate an earlier termination. The termination indicates the end of the tensor graph transformation process.
Figure 2: Example of a TASO rewrite rule. Applying this rule means performing pattern matching on the target computation graph and substituting the source graph with the target graph. Note that there may be multiple matches in the target computation graph, in which case applying the rule generates multiple transformed candidates.

Figure 3: Interaction between components of X-RLflow.

To enable decision-making at each iteration, the computation graph and its potential candidates have to be embedded as a state vector, which is done via a graph neural network (GNN). Specifically, to create a graph input, we traverse the target graphs and build node attributes and edge attributes sequentially. For nodes, we maintain a table of operators and one-hot encode the index of each operator as its node attribute. There are around \(40\) different tensor operators in total. For edges, we use the corresponding tensor shape as the edge attribute. For example, the edge attribute \([1,3,256,256]\) means its corresponding tensor has shape \([1,3,256,256]\). For tensors whose rank is less than \(4\), zeros are padded to the leading dimensions. To stabilise training, we normalise edge attributes via a constant \(M\), whose value is detailed in Appendix A. We also initialise the global attribute to \(0\) for all tensor graphs and update it through a learnable GNN layer. After we build the graph inputs of the current computation graph and its potential candidates, we batch them into a meta-graph, which is a comprehensive representation of the current state. A sketch of this feature construction follows.
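Below is a minimal numpy sketch of the node and edge feature construction just described. The operator-vocabulary size is approximate and the normalisation constant \(M\) is an illustrative placeholder (the actual value is given in Appendix A).

```python
import numpy as np

NUM_OP_TYPES = 40   # approximate size of the operator vocabulary
M = 1024.0          # edge-normalisation constant (illustrative placeholder)

def node_features(op_index):
    """One-hot encoding of the operator type."""
    onehot = np.zeros(NUM_OP_TYPES, dtype=np.float32)
    onehot[op_index] = 1.0
    return onehot

def edge_features(tensor_shape):
    """Tensor shape padded with leading zeros to rank 4, then normalised by M."""
    padded = [0] * (4 - len(tensor_shape)) + list(tensor_shape)
    return np.asarray(padded, dtype=np.float32) / M

print(edge_features((3, 256, 256)))   # -> [0.0, 3/M, 256/M, 256/M]
```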
One challenge for X-RLflow is that the number of potential candidates changes at every iteration. This is because, as the computation graph is transformed iteratively and becomes more optimised, fewer rewrite rules are applicable and therefore fewer potential candidates are generated. As a result, the action space of X-RLflow shrinks during the transformation process. To overcome this issue, we pad the action space to a large constant and use a mask vector to indicate the actual candidates. Specifically, for each meta-graph, we generate an associated boolean mask vector. When the agent outputs the action probability vector, invalid actions are masked out by the boolean vector. This is known as invalid action masking (Huang and Ontañón, 2022), which has been studied recently. It effectively turns the gradients to zero if they correspond to an invalid action, and thus resolves the changing action space issue, as the sketch below illustrates. An alternative is to penalise the agent when it makes an invalid action and terminate the episode; we find this slows down training, and thus we employ the invalid action masking method.
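A minimal JAX sketch of the masking step: logits of padded (non-existent) candidates are pushed to a large negative value before the softmax, so their probabilities, and hence their gradients, vanish. The values are illustrative.

```python
import jax
import jax.numpy as jnp

def masked_policy(logits, valid_mask):
    """Suppress padded actions with a large negative logit before softmax."""
    masked_logits = jnp.where(valid_mask, logits, -1e9)
    return jax.nn.softmax(masked_logits)

logits = jnp.array([0.5, 1.2, -0.3, 0.0])
mask = jnp.array([True, True, False, False])  # only two real candidates this step
print(masked_policy(logits, mask))            # padded entries get ~0 probability
```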
Figure 3 shows the architecture overview of X-RLflow. The GNN is used to encode the meta-graph input and produce a state representation \(z\), and then, the state vector is fed into a policy head and a value head, which output a categorical distribution and a value estimate respectively. The policy head and value head are both two layers of multi-layer perceptrons (MLPs). The action sampled from the categorical distribution is mapped to a candidate index, and the value estimate is stored in a cache for computing the generalised advantage estimate when updating the agent.
The GNN is critical in our system because it enables applied RL to the tensor graph superoptimisation domain and therefore its details will be further described in section 3.4.
#### 3.3.3 Reward function
Arguably, the reward function is the most important part of designing an RL-based optimiser. The agent optimises its policy to maximise the cumulative episodic reward; therefore, our reward function should encourage the agent to find the optimal tensor graph structure.
Since the optimal tensor graph structure is evaluated via its end-to-end inference latency, the reward function should reward the agent positively when an action decreases the inference latency. To enable flexibility, the environment has an interface where users can register their callback function to compute the reward, and we also have implemented a default reward function that works well in our empirical evaluation.
\[r_{t}=\frac{RT_{t-1}-RT_{t}}{RT_{0}}*100 \tag{2}\]
The default reward function first computes the difference between the previous inference latency \(RT_{t-1}\) and the current inference latency \(RT_{t}\), and then normalises the difference by the initial inference latency \(RT_{0}\) of the DNN. Intuitively, such a design encourages the RL agent to discover candidates that reduce end-to-end inference latency compared to the current computation graph. The normalisation stabilises the training because the reward is calculated as a percentage speedup, and it will not introduce very large positive or negative rewards. We find the default reward function leads to good performance in practice. For invalid actions, we simply mask out their probability by assigning a large negative number in the categorical distribution.
Note that end-to-end inference is run every \(N\) iterations for the reason mentioned in Section 2.4, meaning Equation 2 is used every \(N\) iterations. When the feedback signal is not available, we simply use a small constant of \(0.1\) to reward the agent for continuous exploration. We find that this constant works well in our empirical evaluation, and it can be customised by users by registering a callback function, such as the sketch below.
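A sketch of the default reward callback (Equation 2 plus the exploration constant); the function signature is an illustrative assumption matching the environment sketch in Section 3.3.1.

```python
def default_reward(rt_prev, rt_curr, rt_init, measured):
    """Equation 2 when an end-to-end measurement is available; otherwise a
    small constant reward that keeps the agent exploring."""
    if not measured:
        return 0.1
    return (rt_prev - rt_curr) / rt_init * 100.0

# 2.0 ms -> 1.8 ms, against an initial 2.5 ms: reward = 8.0 (percentage points)
print(default_reward(2.0, 1.8, 2.5, measured=True))
```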
#### 3.3.4 Learning algorithm
All learnable components in X-RLflow are trained in an end-to-end fashion. To this end, we adopt the clip variant of the PPO algorithm (Schulman et al., 2017). PPO is an on-policy RL algorithm, meaning it first performs roll-outs for several episodes and uses the collected data to perform an update to its networks. This process is repeated until the number of update rounds reaches a pre-defined constant, as the loop below sketches.
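A high-level sketch of this on-policy loop; the `agent` interface (`act`, `update`) is a hypothetical placeholder, and the environment follows the sketch from Section 3.3.1.

```python
def run_episode(env, agent):
    """Collect one trajectory of (observation, action, log-prob, value, reward)."""
    obs, done, trajectory = env.reset(), False, []
    while not done:
        action, logp, value = agent.act(obs)
        next_obs, reward, done, _ = env.step(action)
        trajectory.append((obs, action, logp, value, reward))
        obs = next_obs
    return trajectory

def train(env, agent, rollout_episodes=8, update_rounds=100):
    """On-policy PPO loop: roll out several episodes, then update the networks."""
    for _ in range(update_rounds):
        batch = [run_episode(env, agent) for _ in range(rollout_episodes)]
        agent.update(batch)   # PPO clip update (Equations 3-5)
```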
PPO admits multiple variants of the objective function. The clip objective chosen for X-RLflow is:
\[\mathcal{L}_{clip}=-\mathbb{E}_{G}\{\min(\frac{\pi_{\theta}}{\pi_{\theta_{k} }}\cdot A^{\pi_{\theta_{k}}},clip(\frac{\pi_{\theta}}{\pi_{\theta_{k}}},1- \epsilon,1+\epsilon)A^{\pi_{\theta_{k}}})\} \tag{3}\]
Where \(\pi_{\theta}\) is the current policy and \(\pi_{\theta_{k}}\) is the old policy. \(A^{\pi_{\theta_{k}}}\) represents the generalised advantages computed given
the samples generated from the old policy \(\pi_{\theta_{k}}\), as in (Schulman et al., 2015). The clip objective essentially prevents the policy network from being updated too much away from its previous weights. To update the value estimates, we simply compute the mean-square-error of the output from the value head with the target value:
\[\mathcal{L}_{vf}=\mathbb{E}_{G}\{(V_{\theta}(s_{t})-V_{target})^{2}\} \tag{4}\]
The final objective is the summation of the two losses functions and an entropy term:
\[J=\mathcal{L}_{clip}+c_{1}\mathcal{L}_{vf}+c_{2}\mathcal{L}_{entropy} \tag{5}\]
Where \(c_{1}\) and \(c_{2}\) are two small constants to weigh the value loss and the entropy loss respectively. The entropy term is calculated from the action probability \(\pi_{\theta_{k}}\) and is meant to constrain the policy network update further.
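A compact JAX sketch of the combined objective (Equations 3-5). The advantages, old log-probabilities, and value targets are assumed to be precomputed from the roll-out data, and the coefficient values are illustrative.

```python
import jax.numpy as jnp

def ppo_loss(logp, logp_old, adv, value, value_target, probs,
             eps=0.2, c1=0.5, c2=0.01):
    """Combined PPO objective J (Equations 3-5)."""
    ratio = jnp.exp(logp - logp_old)              # pi_theta / pi_theta_k
    clipped = jnp.clip(ratio, 1.0 - eps, 1.0 + eps)
    l_clip = -jnp.mean(jnp.minimum(ratio * adv, clipped * adv))     # Eq. 3
    l_vf = jnp.mean((value - value_target) ** 2)                    # Eq. 4
    # Negative entropy: minimising J with this term encourages exploration
    l_entropy = jnp.mean(jnp.sum(probs * jnp.log(probs + 1e-8), axis=-1))
    return l_clip + c1 * l_vf + c2 * l_entropy                      # Eq. 5
```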
This particular RL algorithm and its objective function are chosen for the following reasons. First, it combines value loss and policy loss into a single loss function, which allows a single back-propagation to update all learnable components contributing to this loss function, thus enabling end-to-end training. Second, it has been shown that PPO is sample efficient and works well across a wide range of benchmarks (Schulman et al., 2017). Lastly, PPO can perform roll-outs in vectorised environments and update via mini-batches. This makes distributed training possible, which is especially important for very large DNNs, where more training is needed to encode the graph attributes. In our empirical evaluation, we find training on a single machine is sufficient, but as the model sizes of DNNs continue to grow, distributed training will be necessary in the future. We leave accelerating RL training for future work.
Note that our system environment is agnostic to the RL algorithms, therefore it can be reused for different choices. In this work, we focus on the PPO clip variant algorithm.
### Graph embedding network architectures
In this section, we provide more details on the GNN architecture, as it is essential for applying RL to the tensor graph superoptimisation domain. The GNN consists of one node update layer, followed by \(k\) graph attention layers, with one final global update layer at the end. This architecture is designed deliberately, as each layer is responsible for learning a specific representation. The first node update layer uses edge attributes to update node attributes:
\[\vec{h^{\prime}}_{i}=\sigma\{W(\sum_{j\in\mathcal{E}_{i}}\vec{e}_{j}\|\vec{h^ {\prime}}_{i})\} \tag{6}\]
Where \(\vec{h^{\prime}}_{i}\) is the attribute output of node \(i\), given its initial node attribute \(\vec{h_{i}}\) and its incoming edges attributes \(\vec{e}_{j}\). The \(\|\) operator stands for concatenation as commonly used in GNN terminology, and \(\sigma\) is a non-linear activation function.
This layer is necessary because each node in the graph corresponds to an operator kernel launch in the actual DNN. The kernel launch time is determined by its incoming tensors (edges), and the type of operator, such as MatMul, Conv2D, etc. As a result, the first GNN layer is responsible to learn the kernel launch time of each operator by combining the operator type and the incoming tensor shapes.
The subsequent \(k\) graph attention layers (GAT) (Velickovic et al., 2018) are used to learn the topology of the computation graph. This is achieved by message passing between nodes with their neighbouring nodes, and this mechanism has been shown effective in graph-like data than other mechanisms using Euclidean metrics.
\[\vec{h^{\prime}}_{i}=\sigma(\sum_{j\in\mathcal{N}_{i}}\alpha_{i,j}W\vec{h}_{j}) \tag{7}\]
Where \(\mathcal{N}_{i}\) represents the neighbours of node \(i\), and \(\alpha_{i,j}\) is learned during back-propagation. The number of GAT layers \(k\) controls how many message-passing steps are performed and is a hyper-parameter. By performing more message-passing steps, a node's message can reach more distant neighbours, but this also increases computational demands. The number of message-passing steps is specified in Appendix A.
After passing through the GAT layers, each node has updated its representation. The final layer aggregates all nodes' attributes along with the original attribute of the graph to produce a final graph representation:
\[\vec{g}^{\prime}=\sigma\{W(\sum_{i\in\mathcal{N}}\vec{h}_{i}\|\vec{g})\} \tag{8}\]
The final global update layer is also necessary, as tensor graph superoptimisation changes the tensor graph structure at each iteration. Thus, we need graph-level representation for decision-making. This also distinguishes X-RLflow from other work, such as GO (Zhou et al., 2020), where they focus on making node-level decisions like fusion and scheduling. X-RLflow operates on a higher graph-level transformation and therefore is orthogonal to other RL-based optimisers.
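The three stages can be sketched in a few lines of JAX over a dense adjacency matrix. This is a simplified single-head variant of the attention in Equation 7, with the global-attribute concatenation of Equation 8 reduced to a sum readout for brevity; parameter shapes and the dense-adjacency representation are illustrative (the actual implementation uses the jraph library, see Section 4.1.2).

```python
import jax
import jax.numpy as jnp

def encode_graph(nodes, in_edges, adj, params, k=3):
    """Graph-level embedding of one (candidate) computation graph.

    nodes    : [N, F_n] one-hot operator types
    in_edges : [N, F_e] incoming tensor-shape features aggregated per node
    adj      : [N, N]  adjacency mask (adj[i, j] = 1 if j sends a message to i)
    params   : dict of weight matrices (shapes are illustrative)
    """
    # Stage 1 (Eq. 6): fold incoming tensor shapes into each operator node
    h = jax.nn.relu(jnp.concatenate([in_edges, nodes], axis=-1) @ params["w_node"])
    # Stage 2 (Eq. 7): k rounds of single-head attention message passing
    for _ in range(k):
        scores = (h @ params["w_att"]) @ h.T                   # pairwise logits
        att = jax.nn.softmax(jnp.where(adj > 0, scores, -1e9), axis=-1)
        h = jax.nn.relu(att @ (h @ params["w_msg"]))
    # Stage 3 (Eq. 8): aggregate node states into a graph-level embedding
    return jax.nn.relu(jnp.sum(h, axis=0) @ params["w_global"])
```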
## 4 Evaluation
In this section, X-RLflow is evaluated against TASO and Tensat over a wide range of DNNs. PET is not compared, for the reasons discussed in Section 2.2.2. We seek to answer the following questions:
1. Can X-RLflow achieve further speedup than the greedy-based method?
2. Can X-RLflow finish optimisation within a tolerable time compared to the existing systems?
3. Once X-RLflow is trained, can it generalise to other tensor shapes given the same computation graph?
### Experimental setup
#### 4.1.1 Platforms
All experiments presented were performed using a single machine running Ubuntu Linux 18.04 with a 24-core Intel Xeon CPU @ 2.4 GHz, 256 GB RAM and an NVIDIA GeForce GTX 1080. The hardware libraries used for running end-to-end inference are CUDA 10.2 and cuDNN 7.6.5. Each experiment is performed \(5\) times and we report means and standard deviations.
#### 4.1.2 Frameworks
As previously discussed, we used the open-sourced version of TASO as the graph transformation backend. The RL agent is implemented via JAX (Bradbury et al., 2018) and the jraph package (Godwin* et al., 2020). While tuning hyper-parameters for RL is notoriously difficult, recent works such as (Anonymous, 2022) perform large-scale studies across a wide range of environments and suggest empirically good hyper-parameter values. Therefore, we adopt good hyper-parameters from those works, and we keep the values fixed over all experiments.
#### 4.1.3 Workloads
We have chosen state-of-the-art DNNs over a wide range of tasks for our evaluation. InceptionV3 (Szegedy et al., 2016) is a common, high-accuracy model for image classification trained on the ImageNet dataset. ResNext-50 (He et al., 2016) is also a deep convolutional network for vision tasks. SqueezeNet (Iandola et al., 2016) is a shallower yet accurate model on the same ImageNet dataset. BERT (Devlin et al., 2019), ViT (Nayak, 2019), DALL-E (Ramesh et al., 2021), and T-T (Zhang et al., 2020) are large transformer networks that succeed across a wide range of tasks, including vision, language, text and audio. Although their common building block is multi-head attention, it is often combined with different neural network blocks, so the models overall have diverse computation graph structures. Table 3 lists the properties of these DNNs. Note that some tensor operators are not supported by TASO, and we simply skip those operators when building the computation graph.
### End-to-end speedup
Figure 4 shows the end-to-end inference latency speedup of TASO and X-RLflow. For TASO, we run with its default setting as in its artefact evaluation. For X-RLflow, each agent was trained on the respective graph, as described in Section 3, for over \(1000\) episodes. All agents use the same reward function (Equation 2) as well as the same set of hyper-parameters; common hyper-parameters are listed in Appendix A.
First, we can observe that in all cases X-RLflow achieves better speedup than TASO's substitution engine. This verifies that X-RLflow can learn to make better decisions for long-term reward and to leverage the end-to-end feedback signal to guide its decision-making. X-RLflow finds more globally optimal tensor graph structures after the transformation process.
We can also observe there are two DNNs where X-RLflow achieves much better speedup than TASO. In the case of SqueezeNet, TASO achieves a negative speedup. This is because the cost model is inaccurate, and it misleads the substitution engine. Note that the cost modelling depends on the execution hardware, so different GPUs may be evaluated differently. The cost modelling issue is also reported by Tensat, where it affects the extraction of the E-graph.
| DNNs | Type | Complexity |
| --- | --- | --- |
| InceptionV3 | Convolutional | 50 |
| SqueezeNet | Convolutional | 20 |
| ResNext-50 | Convolutional | 13 |
| BERT | Transformer | 26 |
| DALL-E | Transformer | 20 |
| T-T | Transformer | 25 |
| ViT | Transformer | 32 |

Table 3: Properties of evaluated DNNs. The complexity indicates the average number of candidates at each iteration of the transformation process. Although this number may not comprehensively quantify the search space, a higher number typically indicates there are more combinatorial opportunities throughout the optimisation process.
Figure 4: End-to-end inference speedup by TASO and X-RLflow. In each case, the evaluation is run five times to measure the mean and standard deviations.
For ViT, we observe an over \(40\%\) speedup. While this appears to be a special case, we observe that after a sequence of graph-level optimisations, some operators have no data dependencies, so they can be pre-processed before actually running inference. This is similar to constant folding in compiler optimisation. Cost modelling does not account for constant folding because it simply sums over all operators' execution times. Thus, by using the end-to-end feedback signal, X-RLflow can discover better tensor graph structures by considering downstream optimisation opportunities. This also indicates that more optimisation opportunities can be revealed by combining the compiler optimisation pipeline across different layers of IRs.
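A minimal sketch of this constant-folding effect: in a topologically sorted computation graph, any node whose inputs are all constants can be evaluated once, ahead of inference. The node representation and helper names are illustrative assumptions rather than TASO's actual IR.

```python
def fold_constants(nodes):
    """nodes: topologically sorted list of dicts; each node has an 'op', its
    'inputs' (other nodes), and a cached 'value' (non-None for constants,
    None for tensors that depend on runtime data)."""
    for node in nodes:
        inputs = node["inputs"]
        if inputs and all(inp["value"] is not None for inp in inputs):
            # No runtime data dependency: evaluate once before inference.
            node["value"] = node["op"](*[inp["value"] for inp in inputs])
    return nodes
```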
### Transformation heatmap
Figure 5 shows the counts of rewrite rules that are applied to the tensor graphs during the optimisation process. Notably, DNNs that primarily consist of convolutional operators and those that primarily consist of matrix multiplications are targeted by different tensor graph rewrite rules. In general, convolutional operators can be rewritten by more rules, but they have shorter transformation sequences, meaning they benefit less from the long-term delayed reward. Transformer-type DNNs, on the other hand, are targeted by fewer rewrite rules but have longer transformation sequences. We conclude that RL by design maximises long-term rewards, and thus it shows more advantages in long sequential tasks.
### Optimisation time
Figure 6 shows the time required to produce the optimised graph for each of the evaluated DNNs. We note that the optimisation time for X-RLflow does not include the time needed to train the RL agents. TASO takes less than \(75\) seconds in all cases to generate its optimised graph under its default setting. Although TASO has a configurable optimisation time budget, increasing the budget does not yield more speedup while causing a much longer optimisation time: for example, the optimisation time becomes \(10\)x longer while the performance increase is less than \(5\%\). We conclude that this phenomenon indicates greedy substitution is stuck in a local optimum, and we use the default budget for TASO.
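The greedy behaviour described above can be sketched as follows: at each step the engine applies the substitution the cost model scores best and stops once no single rewrite improves the graph, i.e., at a local optimum. `candidate_rewrites` and `cost` are assumed helpers, not TASO's real API.

```python
def greedy_optimise(graph, cost, candidate_rewrites):
    """Hill-climb over single-step rewrites until no rewrite lowers the cost."""
    while True:
        candidates = list(candidate_rewrites(graph))
        best = min(candidates, key=cost) if candidates else None
        if best is None or cost(best) >= cost(graph):
            return graph  # no improving rewrite left: a local optimum
        graph = best
```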
X-RLflow generally takes more time to optimise, because the RL agent performs a forward pass to make a decision at every iteration of the transformation. This could be accelerated by placing the agent's networks on a GPU. However, we only have one GPU, and its memory is pre-allocated by the backend of the environment, i.e., TASO, so we are unable to do that. The optimisation time could be reduced significantly if the agent's inference were accelerated by another GPU. Even on the CPU, optimisation takes less than \(200\) seconds, which is affordable before model deployment.
### Generalisation to different tensor shapes
Figure 7 shows the generalisation ability of X-RLflow to various tensor shapes on DALL-E and InceptionV3, respectively. This scenario arises when an update to the data processing pipeline changes the tensor shapes fed to the DNN models. We show that training X-RLflow once in a static-shape environment is sufficient to generalise and achieve good performance for different input tensor shapes.
Figure 5: This heatmap shows the rewrite rules applied by X-RLflow. Although there are over 100 possible rewrite rules, we only show the rules applied at least once. The count for each rewrite rule shows the number of times it has been applied. Note that each application of a rewrite rule takes place at a different location of the target computation graph. A higher count means that X-RLflow can find a long sequence of subgraph transformations before termination, which can indicate that X-RLflow has discovered a performant graph via that sequence of transformations.

Figure 6: Optimisation time taken by TASO and X-RLflow.

Performing training on multiple graphs and generalising to unseen DNNs is a more significant generalisation direction we would like to investigate. However, this poses a challenge to the GNN encoder, as it is responsible for embedding diverse computation graphs. As such, it is much more computationally expensive to achieve held-out graph generalisation. We leave this for future work.
### Comparison with Tensat
Tensat is another tensor graph superoptimiser, which employs equality saturation as mentioned in Section 2.2. Figure 8 shows a comparison of X-RLflow with Tensat.
We can see that X-RLflow outperforms Tensat on BERT and InceptionV3, but falls short on SqueezeNet and ResNext-50. Note that Tensat is sensitive to its parameters, and we use the default values as reported in its artefact evaluation. By varying Tensat's parameters, it is possible to obtain a performance increase, but this also raises the risk of running out of memory and exceeding the optimisation time limit.
By examining the dataflow graphs of SqueezeNet, InceptionV3 and ResNext-50, we notice that while they all primarily consist of convolution blocks, InceptionV3 has more unique blocks in its computation graph, indicating more combinatorial optimisation opportunities. This is also reflected in the "complexity" shown in Table 3. As RL favours more combinatorial opportunities, X-RLflow is more likely to perform well on complex graphs.
X-RLflow outperforms Tensat when optimising BERT. This is because BERT's multi-head attention blocks mainly consist of matrix multiplications, and there is a specific "multi-pattern rewrite rule" for matrix multiplication in Tensat that can grow the E-graph extremely large. As a result, Tensat has to limit the application of multi-pattern rewrite rules to a constant \(k\). In its default setting, \(k\) is set to \(1\), which is certainly not enough to explore all rewrite opportunities for BERT. Increasing \(k\) will likely increase performance, but also dramatically increases the optimisation time and memory usage. For more details on the multi-pattern rewrite rules, we refer interested readers to Section \(4\) of Tensat's paper. X-RLflow does not use the E-graph data structure, so it does not suffer from the multi-pattern rewrite rule issue.
We also wanted to run more experiments on transformer-type DNNs, but we were unable to do so because Tensat needs to convert TASO's graph representation to its S-expr representation and strictly filters out cycles when building the E-graph. When we tried to add a new transformer, the cycle filtering algorithm reported an error. As such, we were unable to perform experiments beyond those provided by Tensat. By contrast, X-RLflow does not require a new representation, and therefore it can optimise any computation graph as long as it is supported by TASO.
In future work, we would like to combine X-RLflow with Tensat. Specifically, because the E-graph is not saturated, Tensat re-introduces the phase-ordering issue when building the E-graph, which can be addressed by RL. On the other hand, the E-graph can compactly represent many graph IRs, which can decrease the state space for RL. We believe combining RL with equality saturation will lead to better performance for optimising computation graphs.
## 5 Related Work
### Optimisation of computation graphs
Figure 7: Generalisation to different tensor shapes on DALL-E and InceptionV3. The suffix number following the name of a DNN indicates the input tensor shape. For example, 'InceptionV3-225' indicates the input image has a height and width of \(225\). '*' marks the DNNs that X-RLflow was trained to optimise.

Figure 8: Comparison of end-to-end DNN speedup with Tensat.

Rule-based approaches such as those used in TensorFlow (Abadi et al., 2015, 2016) and TVM (Chen et al., 2018) use a pre-defined set of transformations to optimise computation graphs. On the contrary, tensor graph superoptimisers, such as (Jia et al., 2019, 2019; Yang et al., 2021; Wang et al., 2021), automatically search for transformations to apply to the input graph. In addition, OCGGS (Fang et al., 2020) proves the graph substitution problem to be NP-hard and proposes a dynamic programming approach to solve it exactly. However, the exact solution is impractical due to the long search time, and the approximate sampler proposed alongside it fails to achieve better performance than existing systems. We have described in detail in Section 2.2 the advantages and drawbacks of each system, and our motivation for using RL to tackle this problem. We show that the optimisation process is more globally optimal and generalisable with X-RLflow.
### RL in system optimisation
There has been a growing effort to apply reinforcement learning to system optimisation. Due to its theoretical soundness for handling sequential decision-making problems, RL has been applied to circuit design (Roy et al., 2021), compiler pass ordering (Haj-Ali et al., 2020), and datacenter control (Fuhrer et al., 2023). Among existing works, NeuRewriter (Chen and Tian, 2019) is a particularly relevant work that implements an RL-based rewrite system for expression rewriting, job scheduling, and vehicle routing. While related, X-RLflow performs optimisation in the tensor graph domain, with a specific design of the RL agent and the environment to handle tensor graph transformation. As such, it is complementary to NeuRewriter.
GO (Zhou et al., 2020) is an RL-based graph optimiser that performs node fusion, node scheduling and device placement at once. As a result, GO outputs a decision for each node in the computation graph. X-RLflow, on the other hand, is specific to graph-level optimisation, so the tensor graph structure changes dynamically throughout the optimisation process. As such, X-RLflow is orthogonal to existing works and can be applied before downstream optimisations such as GO.
### Model-based reinforcement learning
Model-based RL is a class of reinforcement learning algorithms in which we aim to learn a model (or use a given model) of the real environment in which an agent acts. The world model approach (Ha and Schmidhuber, 2018) proposed learning the environment dynamics using recurrent neural networks. Alternative approaches have also been proposed, such as imagination-augmented agents (Racaniere et al., 2017) and model-based value estimation for model-free agents (Feinberg et al., 2018). The main benefit of model-based reinforcement learning is that a world model helps reduce the agent's interaction with the environment, thus accelerating the training process. Moreover, a world model may provide better generalisation ability because it can model the latent transitions of the state space. Recent advances in model-based reinforcement learning show that a world-model agent can outperform its model-free counterpart across diverse domains, including vision and control tasks, with a fixed set of hyper-parameters. X-RLflow may be combined with those methods to obtain better sample efficiency and generalisation ability, because the environment transition does not assume a fixed RL algorithm.
## 6 Limitations and future work
There are a few limitations to X-RLflow. First, training RL is computationally expensive and time-consuming. Potential solutions to mitigate this problem include setting up a distributed training environment, where training data can be generated in parallel; this allows trading off training time against computing power. Alternatively, we could exploit the layered structure of DNNs and simply perform optimisation on sub-graphs individually. However, this requires certain heuristics to avoid missing optimisation opportunities that span different sub-graphs. Second, X-RLflow cannot generalise to unseen tensor graphs at the moment. We expect that by adopting methods from model-based RL and training the agent across various DNNs, X-RLflow can gain stronger generalisation ability. Alternatively, combining RL with equality saturation can reduce the state space of RL because E-graphs can compactly represent many graph IRs, and this may be another direction to explore.
## 7 Conclusion
In this work, we present X-RLflow, a novel end-to-end tensor graph structure superoptimiser. We explain our motivation for using RL and describe our formulation of the RL algorithm in the tensor graph domain. We also present the architecture design of X-RLflow in detail. Experimental results show that X-RLflow achieves better performance over a wide range of evaluated DNNs, with up to \(40\%\) speedup over state-of-the-art systems. We also demonstrate its generalisation ability by performing inference in unseen environments. We argue that the applicability of RL in a delayed-reward environment sheds light on system optimisation when the feedback signal is expensive. The effectiveness of X-RLflow suggests that it is a promising direction for tensor graph structure superoptimisation.
## Acknowledgements
We thank Sam Ainsworth, Amitabha Roy, Wenjun Hu, Sami Alabed for their valuable comments and feedback. We also thank Zhihao Jia for insightful discussions at an early stage of our work. |
2303.07125 | Don't PANIC: Prototypical Additive Neural Network for Interpretable Classification of Alzheimer's Disease | Alzheimer's disease (AD) has a complex and multifactorial etiology, which requires integrating information about neuroanatomy, genetics, and cerebrospinal fluid biomarkers for accurate diagnosis. Hence, recent deep learning approaches combined image and tabular information to improve diagnostic performance. However, the black-box nature of such neural networks is still a barrier for clinical applications, in which understanding the decision of a heterogeneous model is integral. We propose PANIC, a prototypical additive neural network for interpretable AD classification that integrates 3D image and tabular data. It is interpretable by design and, thus, avoids the need for post-hoc explanations that try to approximate the decision of a network. Our results demonstrate that PANIC achieves state-of-the-art performance in AD classification, while directly providing local and global explanations. Finally, we show that PANIC extracts biologically meaningful signatures of AD, and satisfies a set of desirable desiderata for trustworthy machine learning. Our implementation is available at https://github.com/ai-med/PANIC . | Tom Nuno Wolf, Sebastian Pölsterl, Christian Wachinger | 2023-03-13T13:56:20Z | http://arxiv.org/abs/2303.07125v2 | Don't PANIC: Prototypical Additive Neural Network for Interpretable Classification of Alzheimer's Disease+
###### Abstract
Alzheimer's disease (AD) has a complex and multifactorial etiology, which requires integrating information about neuroanatomy, genetics, and cerebrospinal fluid biomarkers for accurate diagnosis. Hence, recent deep learning approaches combined image and tabular information to improve diagnostic performance. However, the black-box nature of such neural networks is still a barrier for clinical applications, in which understanding the decision of a heterogeneous model is integral. We propose PANIC, a prototypical additive neural network for interpretable AD classification that integrates 3D image and tabular data. It is interpretable by design and, thus, avoids the need for post-hoc explanations that try to approximate the decision of a network. Our results demonstrate that PANIC achieves state-of-the-art performance in AD classification, while directly providing local and global explanations. Finally, we show that PANIC extracts biologically meaningful signatures of AD, and satisfies a set of desirable desiderata for trustworthy machine learning. Our implementation is available at [https://github.com/ai-med/PANIC](https://github.com/ai-med/PANIC).
## 1 Introduction
It is estimated that the number of people suffering from dementia worldwide will reach 152.8 million by 2050, with Alzheimer's disease (AD) accounting for approximately 60-80% of all cases [20]. Due to large studies, like the Alzheimer's Disease Neuroimaging Initiative (ADNI; [9]), and advances in deep learning, the disease stage of AD can now be predicted relatively accurately [28]. In particular, models utilizing both tabular and image data have shown performance superior to unimodal models [4, 5, 29]. However, they are considered black-box models, as their decision-making process remains largely opaque. Explaining decisions of Convolutional Neural Networks (CNNs) is typically achieved with post-hoc techniques in the form of saliency maps. However, recent studies showed that different post-hoc techniques lead to vastly different explanations of the same model [12]. Hence, post-hoc methods do not mimic the true model accurately and have low fidelity [22]. Another drawback of post-hoc techniques is that they
provide local interpretability only, i.e., an approximation of the decision of a model for a specific input sample, which cannot explain the overall decision-making of the model. Rudin [22] advocated overcoming these shortcomings with inherently interpretable models, which are interpretable by design. For instance, a logistic regression model is inherently interpretable, because one can infer the decision-making process from the weights of each feature. Moreover, inherently interpretable models provide both local and global explanations. While there has been progress towards inherently interpretable unimodal deep neural networks (DNNs) [1, 11], there is a lack of inherently interpretable heterogeneous DNNs that incorporate both 3D image and tabular data.
In this work, we propose PANIC, a Prototypical Additive Neural Network for Interpretable Classification of AD, that is based on the Generalized Additive Model (GAM). PANIC consists of one neural net for 3D image data, one neural net for each tabular feature, and combines their outputs via summation to yield the final prediction (see Fig. 1). PANIC processes 3D images with an inherently interpretable CNN, as proposed in [11]. The CNN is a similarity-based classifier that reasons by comparing latent features of an input image to a set of class-representative latent features. The latter are representations of specific images from the training data. Thus, its decision-making can be considered similar to the way humans reason. Finally, we show that PANIC is fully transparent, because it is interpretable both locally and globally, and achieves state-of-the-art performance for AD classification.
## 2 Related Work
Interpretable Models for Tabular Data. Decision trees and linear models, such as logistic regression, are inherently interpretable and have been applied widely [16]. In contrast, multi-layer perceptrons (MLPs) are non-parametric and non-linear, but rely on post-hoc techniques for explanations. A GAM is a non-linear model that is fully interpretable, as its prediction is the sum of the outputs of univariate functions (one for each feature) [15]. Explainable Boosting Machines (EBMs) extend GAMs by allowing pairwise interactions of features. While this may boost performance compared to a standard GAM, the model is harder to interpret, because the number of functions to consider grows quadratically with the number of features. EBMs were used in [24] to predict conversion to AD.
Interpretable Models for Medical Images. ProtoPNet [2] is a case-based interpretable CNN that learns class-specific prototypes and defines the prediction as the weighted sum of the similarities of features, extracted from a given input image, to the learned prototypes. It has been applied in the medical domain for diabetic retinopathy grading [6]. One drawback is that prototypes are restricted by the size of local patches: for example, the model cannot learn a _single_ prototype to represent hippocampal atrophy, because the hippocampus appears in both the left and right hemispheres. As a result, the number of prototypes needs to be increased to learn a separate prototype for each hemisphere. The Deformable ProtoPNet [3]
allows for multiple fine-grained prototypical parts to extract prototypes, but is bound to a fixed number of prototypical parts that represent a prototype. XProtoNet [11] overcomes this limitation by defining prototypes based on attention masks rather than patches; it has been applied for lung disease classification from radiographic images. Wang et al. [27] used knowledge distillation to guide the training of a ProtoPNet for mammogram classification. However, their final prediction is uninterpretable, because it is the average of the prediction of a ProtoPNet and a black-box CNN. The works in [8, 19, 21, 27, 31] proposed interpretable models for medical applications, but in contrast to ProtoPNet, XProtoNet and GAMs, they do not guarantee that explanations are faithful to the model prediction [22].
## 3 Methods
We propose a prototypical additive neural network for interpretable classification (PANIC) that provides unambiguous local and global explanations for tabular and 3D image data. PANIC leverages the transparency of GAMs by adding functions that measure similarities between an input image and a set of class-specific prototypes, that are latent representations of images from the training data [11].
Figure 1: An exemplary prediction for class MCI with PANIC: 3D FDG-PET images are processed with an interpretable CNN that computes the cosine similarity between the latent representations \(z_{p_{1}^{\text{MCI}}}\), \(z_{p_{2}^{\text{MCI}}}\) and the corresponding prototypes \(p_{1}^{\text{MCI}}\), \(p_{2}^{\text{MCI}}\), as seen on the left. For each categorical feature, such as gender, a linear function is learned. Each continuous feature, such as A\(\beta\), is processed with its own MLP. The final prediction is the sum of the outputs of the submodules plus a bias term.

Let the input consist of \(N\) tabular features \(x_{n}\in\mathbb{R}\) (\(n\in\{1,\ldots,N\}\)), and a 3D grayscale image \(\mathcal{I}\in\mathbb{R}^{1\times H\times D\times W}\). PANIC is a GAM comprising \(N\) univariate functions \(f_{n}\) to account for tabular features, and an inherently interpretable CNN \(g\) to account for image data [11]. The latter provides interpretability by learning a set of \(K\times C\) class-specific prototypes (\(C\) classes, \(K\) prototypes per class \(c\in\{1,\ldots,C\}\)). During inference, the model seeks evidence for the presence of prototypical parts in an image, which can be visualized and interpreted in the image domain. Computing the similarities of the prototypes to the latent features representing the presence of prototypical parts allows predicting the probability of a sample belonging to class \(c\):
\[p(c\,|\,x_{1},\ldots,x_{N},\mathcal{I})=\text{softmax}\left(\mu^{c}\right),\quad \mu^{c}=\beta_{0}^{c}+\sum_{n=1}^{N}f_{n}^{c}(x_{n})+\sum_{k=1}^{K}g_{k}^{c}( \mathcal{I}), \tag{1}\]
where \(\beta_{0}^{c}\in\mathbb{R}\) denotes a bias term, \(f_{n}^{c}(x_{n})\) the class-specific output of a neural additive model for feature \(n\), and \(g_{k}^{c}(\mathcal{I})\) the similarity between the \(k\)-th prototype of class \(c\) and the corresponding feature extracted from an input image \(\mathcal{I}\). We define the functions \(f_{n}^{c}\) and \(g_{k}^{c}\) below.
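A minimal PyTorch sketch of the additive prediction in equation (1); the module names and the assumption that the image network returns a \((C, K)\) matrix of prototype similarities are ours, not the authors' API.

```python
import torch

def panic_logits(tabular_nets, image_net, bias, x_tab, image):
    """Per-class logits mu^c = beta_0^c + sum_n f_n^c(x_n) + sum_k g_k^c(I)."""
    logits = bias.clone()                      # beta_0, shape (C,)
    for f_n, x_n in zip(tabular_nets, x_tab):  # one submodule per tabular feature
        logits = logits + f_n(x_n)             # f_n^c(x_n), shape (C,)
    sims = image_net(image)                    # prototype similarities, (C, K)
    logits = logits + sims.sum(dim=1)          # sum_k g_k^c(I)
    return logits                              # softmax is applied in the loss
```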
### Modeling Tabular Data
Tabular data often consists of continuous and discrete-valued features, such as age and genetic alterations. Therefore, we model feature-specific functions \(f_{n}^{c}\) depending on the type of feature \(n\). If it is continuous, we estimate \(f_{n}^{c}\) non-parametrically using a multi-layer perceptron (MLP), as proposed in [1]. This assures full interpretability while allowing for non-linear processing of each feature \(n\). If feature \(n\) is discrete, we estimate \(f_{n}^{c}\) parametrically using a linear model, in which case \(f_{n}^{c}\) is a step function, which is fully interpretable too. Moreover, we explicitly account for missing values by learning a class-conditional missing value indicator \(s_{n}^{c}\). To summarize, \(f_{n}^{c}\) is defined as
\[f_{n}^{c}(x_{n})=\begin{cases}s_{n}^{c},&\text{if $x_{n}$ is missing},\\ \beta_{n}^{c}x_{n},\text{ with }\beta_{n}^{c}\in\mathbb{R},&\text{if $x_{n}$ is categorical}\\ \text{MLP}_{n}^{c}(x_{n}),&\text{otherwise}.\end{cases} \tag{2}\]
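A hedged PyTorch sketch of equation (2); the hidden-layer sizes and the use of NaN to mark missing values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """f_n: learned missing-value indicator s_n, a linear map for categorical
    features, and an MLP for continuous ones, each returning C class outputs."""

    def __init__(self, num_classes, categorical):
        super().__init__()
        self.missing = nn.Parameter(torch.zeros(num_classes))  # s_n^c
        if categorical:
            self.f = nn.Linear(1, num_classes, bias=False)     # beta_n^c * x_n
        else:
            self.f = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                   nn.Linear(32, 32), nn.ReLU(),
                                   nn.Linear(32, num_classes))

    def forward(self, x_n):                    # x_n: tensor of shape (1,)
        if torch.isnan(x_n).any():             # treat NaN as "missing"
            return self.missing
        return self.f(x_n)
```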
Predicting a class with the sum of such univariate functions \(f_{n}^{c}\) was proposed in [1] as Neural Additive Model (NAM). Following [1], we apply an \(\ell_{2}\) penalty on the outputs of \(f_{n}^{c}(x_{n})\):
\[\mathcal{L}_{\text{Tab}}(x_{1},\ldots,x_{N})=\tfrac{1}{C}\sum_{c=1}^{C}\sum_{n =1}^{N}[f_{n}^{c}(x_{n})]^{2}.\]
We want to emphasize that NAMs retain global interpretability by plotting each univariate function \(f_{n}^{c}\) over its domain (see Fig. 2). Local interpretability is achieved by evaluating \(f_{n}^{c}(x_{n})\), which equals the contribution of a feature \(x_{n}\) to the prediction of a sample, as defined in equation (1) (see Fig. 3 on the left).
### Modeling Image Data
We model 3D image data by defining the function \(g_{k}^{c}(\mathcal{I})\) in equation (1) based on XProtoNet [11], which learns prototypes that can span multiple, disconnected regions within an image. In XProtoNet, an image is classified based on the
cosine similarity between a latent feature vector \(z_{p_{k}^{c}}\) and learned class-specific prototypes \(p_{k}^{c}\), as depicted in the top part of Fig. 1:
\[g_{k}^{c}(\mathcal{I})=\text{sim}(p_{k}^{c},z_{p_{k}^{c}})=\frac{p_{k}^{c}\cdot z _{p_{k}^{c}}}{\|p_{k}^{c}\|\|z_{p_{k}^{c}}\|}. \tag{3}\]
A latent feature vector \(z_{p_{k}^{c}}\) is obtained by passing an image \(\mathcal{I}\) into a CNN backbone \(\mathcal{U}:\mathbb{R}^{1\times H\times D\times W}\rightarrow\mathbb{R}^{R \times H^{\prime}\times D^{\prime}\times W^{\prime}}\), where \(R\) is the number of output channels. The result is passed into two separate modules: (i) the feature extractor \(\mathcal{V}:\mathbb{R}^{R\times H^{\prime}\times D^{\prime}\times W^{\prime}} \rightarrow\mathbb{R}^{L\times H^{\prime}\times D^{\prime}\times W^{\prime}}\) maps the feature map to the dimensionality of the prototype space \(L\); (ii) the occurrence module \(\mathcal{O}^{c}:\mathbb{R}^{R\times H^{\prime}\times D^{\prime}\times W^{ \prime}}\rightarrow\mathbb{R}^{K\times H^{\prime}\times D^{\prime}\times W^{ \prime}}\) produces \(K\) class-specific attention masks. Finally, the latent feature vector \(z_{p_{k}^{c}}\) is defined as
\[z_{p_{k}^{c}}=\text{GAP}[\text{sigmoid}(\mathcal{O}^{c}(\mathcal{U}(\mathcal{ I}))_{k})\odot\text{softplus}(\mathcal{V}(\mathcal{U}(\mathcal{I})))], \tag{4}\]
where \(\odot\) denotes the Hadamard product, and GAP global average pooling.
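A sketch of equations (3)-(4): the class-specific attention masks gate the softplus-activated feature map, global average pooling yields \(z_{p_{k}^{c}}\), and the cosine similarity to each prototype gives the score. Tensor shapes follow the text; the variable names are our own.

```python
import torch
import torch.nn.functional as F

def prototype_scores(feat, occ, prototypes):
    """feat: (L, H', D', W') from V(U(I)); occ: (K, H', D', W') attention
    logits from O^c(U(I)); prototypes: (K, L). Returns g_k^c(I) of shape (K,)."""
    attn = torch.sigmoid(occ).unsqueeze(1)            # (K, 1, H', D', W')
    gated = attn * F.softplus(feat).unsqueeze(0)      # (K, L, H', D', W')
    z = gated.mean(dim=(2, 3, 4))                     # GAP -> z_{p_k^c}, (K, L)
    return F.cosine_similarity(z, prototypes, dim=1)  # cosine per prototype
```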
Figure 3: Explanation for the prediction of a single sample from the test set. Left: Individual contributions to the overall prediction. Right: Input FDG-PET overlaid with the attention maps (green) of the corresponding representations \(z_{p_{1}^{\text{AD}}}\), \(z_{p_{2}^{\text{AD}}}\) (columns 1, 2), and the attention maps of the learned prototypes \(p_{1}^{\text{AD}}\), \(p_{2}^{\text{AD}}\) (columns 3, 4).
Figure 2: The plots show the log odds ratio with respect to controls for the top 10 tabular features. Boxplots show the distribution in our data.
Intuitively, \(z_{p_{k}^{c}}\) represents the GAP-pooled activation maps that a prototype \(p_{k}^{c}\) would yield if it were present in that image. For visualization, we can upsample the occurrence map \(\mathcal{O}^{c}(\mathcal{U}(\mathcal{I}))_{k}\) to the input dimensions and overlay it on the input image. The same can be done to visualize prototype \(p_{k}^{c}\) (see Fig. 3).
Regularization.Training XProtoNet requires regularization with respect to the occurrence module and prototype space [11]: An occurrence and affine loss enforce sparsity and spatial fidelity of the attention masks \(\mathcal{O}^{c}\) with respect to the image domain:
\[\mathcal{L}_{\mathrm{occ}}(\mathcal{I})=\sum_{c=1}^{C}\lVert\mathcal{O}^{c}( \mathcal{U}(\mathcal{I}))\rVert_{1},\quad\mathcal{L}_{\mathrm{affine}}( \mathcal{I})=\lVert A(\mathcal{O}^{c}(\mathcal{U}(\mathcal{I})))-\mathcal{O}^ {c}(\mathcal{U}(A(\mathcal{I})))\rVert_{1},\]
with \(A\) a random affine transformation. Additionally, latent vectors \(z_{p_{k}^{c}}\) of an image \(\mathcal{I}\) with true class label \(y\) should be close to prototypes of their respective class, and distant to prototypes of other classes:
\[\mathcal{L}_{\mathrm{clst}}(\mathcal{I})=-\max_{k,c=y}\ g_{k}^{c}(\mathcal{I} ),\quad\mathcal{L}_{\mathrm{sep}}(\mathcal{I})=\max_{k,c\neq y}\ g_{k}^{c}( \mathcal{I}).\]
### Panic
As stated in equation (1), PANIC is a GAM comprising functions \(f_{1}^{c},\ldots,f_{N}^{c}\) for tabular data, and functions \(g_{1}^{c},\ldots,g_{K}^{c}\) for 3D image data (see equations (2) and (3)). Tabular features contribute to the overall prediction in equation (1) in terms of the values \(f_{1}^{c}(x_{1}),\ldots,f_{N}^{c}(x_{N})\), while the image contributes in terms of the cosine similarity between the class-specific prototype \(p_{k}^{c}\) and the latent feature vector \(z_{p_{k}^{c}}\). By restricting prototypes to contribute to the prediction of a specific class only, we encourage the model to learn discriminative prototypes for each class.
To interpret PANIC locally, we simply consider the outputs of the functions \(f_{n}^{c}\), and sum the image-based similarity scores over all prototypes: \(\sum_{k=1}^{K}g_{k}^{c}(\mathcal{I})\). To interpret the contributions due to the 3D image in detail, we examine the attention map of each prototype, the attention map of the input image, and the similarity score between each prototype and the image (see Fig. 3 on the right). To interpret PANIC globally, we compute the absolute contribution of each function to the per-class logits in equation (1), and average it over all samples in the training set, as seen in Fig. 4. In addition, we can directly visualize the function \(f_{n}^{c}\) learned from the tabular data in terms of the log odds ratio
\[\log\left[\frac{p(c\,|\,x_{1},\ldots,x_{n},\ldots,x_{N},\mathcal{I})}{p( \mathrm{CN}\,|\,x_{1},\ldots,x_{n},\ldots,x_{N},\mathcal{I})}\Big{/}\frac{p(c \,|\,x_{1},\ldots,x_{n}^{\prime},\ldots,x_{N},\mathcal{I})}{p(\mathrm{CN}\,| \,x_{1},\ldots,x_{n}^{\prime},\ldots,x_{N},\mathcal{I})}\right],\]
where \(x_{n}^{\prime}\) is the mean value of feature \(n\) across all samples for continuous features, and zero for categorical features. As an example, let us consider the AD class. If the log odds ratio for a specific value \(x_{n}\) is positive, it indicates that the odds of being diagnosed as AD, compared to CN, increases. Conversely, if it is negative, the odds of being diagnosed as AD decreases.
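Because the model is additive, all terms except those of feature \(n\) cancel in this ratio, so it can be computed from \(f_{n}\) alone. A minimal sketch, with our own function and variable names:

```python
def log_odds_ratio(f_n, grid, x_ref, cls, cn=0):
    """Log odds ratio of class `cls` vs. controls (index `cn`) as feature n
    varies over `grid` (shape (G, 1)), relative to the reference `x_ref`."""
    out, ref = f_n(grid), f_n(x_ref)    # shapes (G, C) and (1, C)
    return (out[:, cls] - ref[:, cls]) - (out[:, cn] - ref[:, cn])
```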
We train PANIC with the following loss:
\[\begin{split}\mathcal{L}(y,x_{1},\ldots,x_{n},\mathcal{I})& =\mathcal{L}_{\text{CE}}(y,\hat{y})+\lambda_{1}\mathcal{L}_{\text{Tab}}(x_{1}, \ldots,x_{n})+\lambda_{2}\mathcal{L}_{\text{clst}}(\mathcal{I})\\ &\quad+\lambda_{3}\mathcal{L}_{\text{sep}}(\mathcal{I})+\lambda_{ 4}\mathcal{L}_{\text{occ}}(\mathcal{I})+\lambda_{5}\mathcal{L}_{\text{affine}}( \mathcal{I}),\\ \hat{y}&=\arg\max_{c}\ p(c\,|\,x_{1},\ldots,x_{n}, \mathcal{I}),\end{split} \tag{5}\]
where \(\mathcal{L}_{\text{CE}}\) is the cross-entropy loss, \(\lambda_{1,\ldots,5}\) are hyper-parameters, \(y\) the true class label, and \(\hat{y}\) the prediction of PANIC.
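A sketch of assembling this loss for a single sample; the tensor layouts (e.g., per-class prototype similarities of shape \((C, K)\)) and the alignment of the two occurrence maps are simplifying assumptions, and the default \(\lambda\) values mirror those reported in the implementation details below.

```python
import torch
import torch.nn.functional as F

def panic_loss(y, logits, tab_outputs, sims, occ, occ_affine,
               lambdas=(0.01, 0.5, 0.5, 0.5, 0.5)):
    """y: true class id; logits: (C,); tab_outputs: list of (C,) vectors
    f_n(x_n); sims: (C, K) similarities g_k^c(I); occ/occ_affine: occurrence
    maps of the original and affine-transformed image (assumed pre-aligned)."""
    l1, l2, l3, l4, l5 = lambdas
    ce = F.cross_entropy(logits.unsqueeze(0), torch.tensor([y]))
    l_tab = torch.stack(tab_outputs).pow(2).sum() / logits.numel()
    l_clst = -sims[y].max()                           # attract own-class prototypes
    mask = torch.arange(sims.size(0)) != y
    l_sep = sims[mask].max()                          # repel other-class prototypes
    l_occ = occ.abs().sum()                           # sparse attention masks
    l_aff = (occ_affine - occ).abs().sum()            # spatial fidelity under A
    return ce + l1 * l_tab + l2 * l_clst + l3 * l_sep + l4 * l_occ + l5 * l_aff
```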
## 4 Experiments
### Overview
#### 4.1.1 Dataset.
Data used in this work was obtained from the ADNI database.1 We select only baseline visits to avoid data leakage, and use FDG-PET images following the processing pipeline described in [18]. Tabular data consists of the continuous features age, education; the cerebrospinal fluid markers A\(\beta\), Tau, p-Tau; the MRI-derived volumes of the left/right hippocampus and thickness of the left/right entorhinal cortex. The categorical features are gender and 31 AD-related genetic variants identified in [7, 13] and having a minor allele frequency of \(\geq 5\%\). Tabular data was standardized using the mean and standard deviation across the training set. Table 1 summarizes our data.
Footnote 1: [https://adni.loni.usc.edu/](https://adni.loni.usc.edu/) [9]
#### 4.1.2 Implementation Details.
| | CN (N=379) | Dementia (N=256) | MCI (N=610) | Total (N=1245) |
| --- | --- | --- | --- | --- |
| **Age**, Mean (SD) | 73.5 (5.9) | 74.5 (7.9) | 72.3 (7.3) | 73.1 (7.1) |
| **Age**, Range | 55.8 - 90.1 | 55.1 - 90.3 | 55.0 - 91.4 | 55.0 - 91.4 |
| **Gender**, Female | 193 (50.9%) | 104 (40.6%) | 253 (41.5%) | 550 (44.2%) |
| **Gender**, Male | 186 (49.1%) | 152 (59.4%) | 357 (58.5%) | 695 (55.8%) |
| **Education**, Mean (SD) | 16.4 (2.7) | 15.4 (2.8) | 16.1 (2.7) | 16.0 (2.8) |
| **Education**, Range | 7.0 - 20.0 | 4.0 - 20.0 | 7.0 - 20.0 | 4.0 - 20.0 |
| **MMSE**, Mean (SD) | 29.0 (1.2) | 23.2 (2.2) | 27.8 (1.7) | 27.2 (2.7) |
| **MMSE**, Range | 24.0 - 30.0 | 18.0 - 29.0 | 23.0 - 30.0 | 18.0 - 30.0 |

Table 1: Statistics for the data used in our experiments.

We train PANIC with the loss in equation (5) using AdamW [14] and a cyclic learning rate scheduler [26], with a learning rate of 0.002 and a weight decay of 0.0005. We choose a 3D ResNet18 for the CNN backbone \(\mathcal{U}\) with \(R=256\) channels in the last ResBlock. The feature extractor \(\mathcal{V}\) and the occurrence module \(\mathcal{O}\) are CNNs consisting of \(1\times 1\times 1\) convolutional layers with ReLU activations. We set \(K=2\), \(L=64\), \(\lambda_{1}=0.01\) and \(\lambda_{2,\ldots,5}=0.5\), and norm the length of each prototype vector to one. For each continuous tabular feature \(n\), \(f_{n}^{c}\) is an MLP that shares parameters for the class-dependent outputs in (2). Each MLP has 2 hidden layers with 32 neurons, followed by an output layer with \(C\) neurons. As opposed to [1], we found it helpful to replace the ExU activations with ReLU activations. We apply spectral normalization [17] to make the MLPs Lipschitz continuous. We add dropout with a probability of 0.4 to the MLPs, and with a probability of 0.1 to all univariate functions in equation (1). The set of affine transformations \(A\) comprises all transformations with a scale factor \(\in[0.8;1.2]\) and a random rotation \(\in[-180^{\circ};180^{\circ}]\) around the origin. We initialize the weights of \(\mathcal{U}\) from a model pre-trained on the same training data and classification task for 100 epochs with early stopping. We cycle between optimizing all parameters of the network and optimizing the parameters of \(f_{n}^{c}\) only. We only validate the model directly after the prototypes \(p_{k}^{c}\) have been replaced with the closest latent feature vector \(z_{p_{k}^{c}}\) of a training sample from the same class. Otherwise, interpretability on an image level would be lost.
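A sketch of the per-feature MLP configuration just described (two hidden layers of 32 ReLU units, spectral normalization, dropout of 0.4, and \(C\) outputs); the exact layer ordering is our reading of the text.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def make_feature_mlp(num_classes):
    """Per-feature MLP with spectral-normalized linear layers (Lipschitz
    continuity), ReLU activations, and dropout as described above."""
    return nn.Sequential(
        spectral_norm(nn.Linear(1, 32)), nn.ReLU(), nn.Dropout(0.4),
        spectral_norm(nn.Linear(32, 32)), nn.ReLU(), nn.Dropout(0.4),
        spectral_norm(nn.Linear(32, num_classes)),
    )
```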
We perform 5-fold cross-validation, based on a data stratification strategy that accounts for sex, age and diagnosis. Each training set is again split such that 64% remain training and 16% are used for hyper-parameter tuning (validation set). We report the mean and standard deviation of the balanced accuracy (BAcc) of the best hyper-parameters found on the validation sets.
### Classification Performance
PANIC achieves 64.0\(\pm\)4.5% validation BAcc and 60.7\(\pm\)4.4% test BAcc. We compare PANIC to a black-box model for heterogeneous data, namely DAFT [29]. We carry out a random hyper-parameter search with 100 configurations for learning rate, weight decay and the bottleneck factor of DAFT. DAFT achieves a validation and test BAcc of 60.9\(\pm\)0.7% and 56.2\(\pm\)4.5%, respectively. This indicates that interpretability does not necessitate a loss in prediction performance.
### Interpretability
PANIC is easy to interpret on a global and local level. Figure 4 summarizes the average feature importance over the training set. It shows that FDG-PET has on average the biggest influence on the prediction, but also that importance can vary across classes. For instance, the SNP rs429358, which is located in the ApoE gene, plays a minor role for the controls, but is highly important for the AD class. This is reassuring, as it is a well known risk factor in AD [25]. The overall most important SNP is rs62097986, which is suspected to influence brain volumes early in neurodevelopment [7].
To gain more detailed insight into PANIC, we visualize the log odds ratio with respect to the function \(f_{n}^{c}\) across the domain of the nine most important tabular features in Fig. 2. We can easily see that PANIC learned that atrophy of the left hippocampus increases the odds of being diagnosed as AD. The volume of the right hippocampus is utilized similarly. For MCI, it appears that PANIC has overfit on outliers with very low right hippocampus volume. Overall, the results
for left/right hippocampus agree with the observation that the hippocampus is among the first structures experiencing neurodegeneration in AD [23]. The function for left entorhinal cortex thickness agrees with previous results too [23]. An increase in A\(\beta\) measured in CSF is associated with a decreased risk of AD [25], which our model captured correctly. The inverse relationship holds for Tau [25], which PANIC confirmed too. The function learned for age shows a slight decrease in the log odds ratio of AD and MCI, except for patients around 60 years of age, which is due to few data samples for this age range in the training data. We note that the underlying causes that explain the evolution from normal aging to AD remain unknown, but since age is considered a confounder one should control for it [10]. Overall, we observe that PANIC learned a highly non-linear function for the continuous features hippocampus volume, entorhinal thickness, A\(\beta\), Tau, and age, which illustrates that estimating the functions \(f_{n}^{c}\) via MLPs is effective. In our data, males have a higher incidence of AD (see Tab. 1), which is reflected in the decision-making of the model too. Our result that rs28834970 decreases the odds for AD does not agree with previous results [13]. However, since PANIC is fully interpretable, we can easily spot this misconception.
Additionally, we can visualize the prototypes by upscaling the attention map specific to each prototype, as produced by the occurrence module, to the input image size and highlighting activations of more than 30% of the maximum value, as proposed in [11] (see Fig. 3 on the right). The axial view of \(p_{1}^{\mathrm{AD}}\) shows attention towards the occipital lobe and \(p_{2}^{\mathrm{AD}}\) towards one side of the occipital lobe. Atrophy around the ventricles can be seen in FDG-PET [25] and both prototypes \(p_{1}^{\mathrm{AD}}\) and \(p_{2}^{\mathrm{AD}}\) incorporate this information, as seen in the coronal views. The sagittal views show, that \(p_{2}^{\mathrm{AD}}\) focuses on the cerebellum and parts of the occipital lobe. The parietal lobe is clearly activated by the prototype in the sagittal view of \(p_{1}^{\mathrm{AD}}\) and was linked to AD previously [25].
Figure 4: Ranking of the most important features learned by PANIC. FDG-PET denotes the combined importance of all prototypes related to the FDG-PET image.
We can interpret the decision-making of PANIC for a specific subject by evaluating the contribution of each function with respect to the prediction (see Fig. 3 on the left). The patient was correctly classified as AD, and most evidence supports this (red arrows). The only exceptions are the SNPs rs4147929 and rs6087771, which the model treats as evidence against AD. Hippocampus volume contributed most to the prediction, i.e., was most important. Since the subject's left hippocampus volume is relatively low (see Fig. 2), this increases our trust in the model's prediction. The subject is heterozygous for rs429358 (ApoE), a well-known marker of AD, which the model captured correctly [25]. The four variants rs9331896, rs10498633, rs4147929, and rs6087771 have been associated with nucleus accumbens volume [7], which is involved in episodic memory function [30]. Atrophy of the nucleus accumbens is associated with cognitive impairment [30]. FDG-PET-specific image features present in the image show a similarity of 0.44 to the features of prototype \(p_{1}^{\text{AD}}\), followed by minor evidence of features similar to prototype \(p_{2}^{\text{AD}}\). During the prediction for the test subject, the network extracted prototypical parts from similar regions. As seen in the axial view, both parts \(z_{p_{1}^{\text{AD}}}\) and \(z_{p_{2}^{\text{AD}}}\) contain parts of the occipital lobe. Additionally, they cover a large part of the temporal lobe, which has been linked to AD [25].
In summary, the decision-making of PANIC is easy to comprehend and predominantly agrees with current knowledge about AD.
## 5 Desiderata for Machine Learning Models
We now show that PANIC satisfies four desirable desiderata for machine learning (ML) models, based on the work in [22].
**Explanations must be faithful to the underlying model (perfect fidelity).** To avoid misconception, an explanation must not merely mimic the underlying model, but equal the model. The explanations provided by PANIC are the values of the functions \(f_{1}^{c},\dots,f_{N}^{c},g_{1}^{c},\dots,g_{K}^{c}\) in equation (1). Since PANIC is a GAM, the sum of these values (plus the bias) equals the prediction. Hence, explanations of PANIC are faithful to how the model arrived at a specific prediction. We can plot these values to gain local interpretability, as done in Fig. 3.
**Explanations must be detailed.** An explanation must be comprehensive, such that it conveys what information is used by a model and how (global interpretability). The information learned by PANIC can be described precisely. Since PANIC is based on a sum of univariate functions, we can inspect the individual functions to understand what the model has learned. Plotting the functions \(f_{n}^{c}\) over their domain explains the change in odds when the value \(x_{n}\) changes, as seen in Fig. 2. For image data, PANIC uses the similarity between the features extracted from the input image and the \(K\) class-specific prototypes. Global interpretability is achieved by visualizing the training image a prototype was mapped to, and its corresponding attention map (see Fig. 3).
**A machine learning model should help to improve the knowledge discovery process.** For AD, the precise cause of cognitive decline remains
elusive. Therefore, ML should help to identify biomarkers and relationships, and inform researchers studying the biological causes of AD. PANIC is a GAM, which means it provides full global interpretability. Therefore, the insights it learned from data are directly accessible, and provide unambiguous feedback to the knowledge discovery process. For instance, we can directly infer what PANIC learned about FDG-PET or a specific genetic mutation, as in Figs. 2 and 4. This establishes a feedback loop connecting ML researchers and researchers studying the biological causes of AD, which will ultimately make diagnosis more accurate.
**ML models must be easy to troubleshoot.** If an ML model produces a wrong or unexpected result, we must be able to easily troubleshoot the model. Since PANIC provides local and global interpretability, we can do this easily. We can use a local explanation (see Fig. 3) to precisely determine the deciding factor for the prediction and whether it agrees with our current understanding of AD. Once we have identified the culprit, we can inspect the function \(f_{n}^{c}\), in the case of a tabular feature, or the prototypes, in the case of image data: Suppose the age of the patient in Fig. 3 was falsely recorded as 30 instead of 71. The contribution of age \(f_{\text{age}}^{\text{AD}}\) would increase from \(-0.138\) to \(3.28\) and thus dominate the prediction by a large margin. Hence, the local explanation would reveal that something is amiss and prompt us to investigate the learned function \(f_{\text{age}}^{\text{AD}}\) (see Fig. 2), which is ill-defined for this age.
## 6 Conclusion
We proposed an inherently interpretable neural network for tabular and 3D image data, and showcased its use for AD classification. We used local and global interpretability properties of PANIC to verify that the decision-making of our model largely agrees with current knowledge about AD, and is easy to troubleshoot. Our model outperformed a state-of-the-art black-box model and satisfies a set of desirable desiderata that establish trustworthiness in PANIC.
#### Acknowledgements
This research was partially supported by the Bavarian State Ministry of Science and the Arts and coordinated by the bidt, the BMBF (DeepMentia, 031L0200A), the DFG and the LRZ. |
2308.13222 | Bayesian Reasoning for Physics Informed Neural Networks | We present the application of the physics-informed neural network (PINN) approach in Bayesian formulation. We have adopted the Bayesian neural network framework to obtain posterior densities from Laplace approximation. For each model or fit, the evidence is computed, which is a measure that classifies the hypothesis. The optimal solution is the one with the highest value of evidence. We have proposed a modification of the Bayesian algorithm to obtain hyperparameters of the model. We have shown that within the Bayesian framework, one can obtain the relative weights between the boundary and equation contributions to the total loss. Presented method leads to predictions comparable to those obtained by sampling from the posterior distribution within the Hybrid Monte Carlo algorithm (HMC). We have solved heat, wave, and Burger's equations, and the results obtained are in agreement with the exact solutions, demonstrating the effectiveness of our approach. In Burger's equation problem, we have demonstrated that the framework can combine information from differential equations and potential measurements. All solutions are provided with uncertainties (induced by the model's parameter dependence) computed within the Bayesian framework. | Krzysztof M. Graczyk, Kornel Witkowski | 2023-08-25T07:38:50Z | http://arxiv.org/abs/2308.13222v2 | # Bayesian Reasoning for Physics Informed Neural Networks
###### Abstract
The physics-informed neural network (PINN) approach in Bayesian formulation is presented. We adopt the Bayesian neural network framework formulated by MacKay [Neural Computation 4 (3) (1992) 448-472]. The posterior densities are obtained from the Laplace approximation. For each model (fit), the so-called evidence is computed; it is a measure that ranks the hypotheses. The most optimal solution has the maximal value of the evidence. The Bayesian framework allows us to control the impact of the boundary contribution to the total loss. Indeed, the relative weights of the loss components are fine-tuned by the Bayesian algorithm. We solve heat, wave, and Burger's equations. The obtained results are in good agreement with the exact solutions. All solutions are provided with uncertainties computed within the Bayesian framework.
## 1 Introduction
The deep learning (DL) revolution impacts almost every branch of life and science [1]. Deep learning models are also adapted and developed for solving problems in physics [2]. The statistical analysis of data is one of the popular and straightforward domains of application of neural networks in physics [3, 4]. Another is using DL-like models to optimize and modernize computational systems in physics, ranging from solvers for fluid flow [5, 6] to Monte Carlo generators in particle physics [7]. Deep neural networks are examples of artificial neural networks (NNs) that have been studied and utilized in various branches of life and science for years [8]. In this paper, we consider feed-forward neural networks with several layers of hidden units that are utilized to solve partial differential equations (PDEs).
Differential equations are the essence of physics. Indeed, they describe a system and its evolution in time in classical mechanics, electrodynamics, fluid physics, quantum mechanics, etc. Some can be solved analytically, but a vast class of problems can be solved only numerically.
One of the exciting ideas is to adapt the neural network framework to solve PDEs numerically. The problem of numerical integration then reduces to an optimization problem. Indeed, the approximate solution of a differential equation is parametrized by a feed-forward neural network that depends on its parameters (weights). The optimal solution minimizes a cost function defined for the given equation. One of the first successful formulations of this idea was provided by Lagaris et al. [9]. They applied the method to ordinary and partial differential equations. It was assumed that the feed-forward neural network, modified to satisfy the initial or boundary conditions, was a solution of the equation. A similar idea was exploited by Lagaris _et al._ for solving eigenvalue problems in quantum mechanics [10].
One of the problems of the Lagaris _et al._ approach was its relatively low universality, i.e., for each differential equation, one has to encode the initial or boundary conditions in the network parametrization. Although the final solution exactly satisfies the required conditions, the training of such a network can be difficult. Therefore, in modern applications, the boundary or initial conditions are introduced as additional contributions to the cost function. As a result, the boundary or initial constraints are satisfied by the obtained solution only at an approximate level. However, the optimization proceeds much more smoothly than in the Lagaris _et al._ approach. This new approach was formulated by Raissi _et al._ [11, 12, 13] and called _Physics Informed Neural Network_ (PINN). Two classes of problems were discussed: data-driven solutions and data-driven discovery of PDEs. The PINN method has become popular, and it has been exploited for forward and inverse types of problems [14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. An extensive literature review on PINNs is given by Cuomo _et al._ [24].
Karniadakis _et al._ [25] provide a comprehensive review of the PINN approach. In particular, they point out the major difficulties of the approach, namely, the problem of tuning the hyperparameters, fixing the relative weights between various terms in the loss function, and the convergence to the global minimum. The PINN idea can be naturally extended to a broader class of problems than PDEs, in which neural networks or machine learning systems are adapted to solve the problem or to optimize the system that solves it; see reviews by Faroughi _et al._ [26] (the physics guided, informed, and encoded neural network approaches), Meng _et al._ [27], and Hao _et al._ [28] (physics informed machine learning including PINNs).
Artificial neural networks have been studied for more than sixty years [29, 30, 31, 8]. Before the deep learning revolution, shallow feed-forward neural networks had been extensively studied, and algorithms for the treatment of such networks were developed. However, many of the achievements for shallow networks do not apply to deep neural networks because of the complexity of DL systems. Nevertheless, even for small NNs, there were several difficulties one had to face, namely, the overfitting problem, the choice of the proper network architecture and model hyperparameters, and the estimation of the uncertainties in the network predictions. Bayesian statistics offers systematic methods that allow one to overcome these difficulties. Hence, in the 1990s, Bayesian methods were discussed and adapted for artificial neural networks [32].
Usually, in data analysis, the methods of frequentist rather than Bayesian statistics are utilized. There are essential differences between the two methodologies [33]. The Bayesian approach seems more subjective in its formulation than the frequentist one. However, the way the Bayesian framework (BF) is defined allows a user to control the subjectivity of the assumptions. From that perspective, Bayesian statistics can be used to study the impact of the model assumptions on the analysis results [34].
In Bayesian statistics, one must evaluate the posterior probabilities. Probability is a measure of the degree of belief that an event will occur [33]. The initial (prior) assumptions are encoded in the prior probabilities. Bayes' rule is used to obtain the posterior probabilities. Note that the posterior probability can be continuously updated when new data or information arrives. Moreover, within the BF, it is possible to consider various types of uncertainties, namely, uncertainty due to the noise in the data, the lack of knowledge of the model parameters, and the model dependence of the assumptions.
Ultimately, it is possible to quantitatively and, in some sense, objectively compare various models and choose the one most favored by the data. Eventually, the BF embodies Occam's razor principle [34, 35]: models that are too complex (with too many parameters; in the case of a neural network, with too many units) are naturally penalized.
When PDEs are solved with the PINN method, initial or boundary conditions can naturally be understood as prior assumptions. Moreover, the BF allows one to infer from the data the parameters that parameterize the PDEs, or even the form of the PDEs directly. Eventually, the Bayesian PINN solution is provided with uncertainties. Therefore, the Bayesian approach to PINNs is one of the directions explored in recent years [36].
Indeed, there are many papers in which various algorithmic formulations of Bayesian PINNs are given [37, 38, 39]. The initial works concerned adapting the Gaussian process technique [40] for regression [41, 42] - an infinitely wide one-hidden-layer network can be understood as a Gaussian process [43]. In many applications, Hamiltonian Monte Carlo and variational inference are adapted to obtain the posterior distributions [44, 45, 46, 47]. The broad class of Bayesian algorithms utilized for training neural networks is discussed in a survey by Magris and Iosifidis [48].
In this paper, we follow MacKay's Bayesian framework for neural networks [49, 50, 51]; see also a pedagogical review by Bishop [32] (Chapter 10). We shall demonstrate that with this approach and the PINN framework, one can numerically solve PDEs such as the heat, wave, and Burger's equations. The Bayesian framework allows us to fine-tune the relative weights of the loss components and, as a result, control the impact of the boundary terms on the solution.
In MacKay's framework, all necessary probabilities have Gaussian form. The Laplace approximation is applied to evaluate the posterior distribution. As a result, all probability densities are obtained in analytic form, and the so-called evidence is obtained - a measure that ranks the models. The framework allows us to objectively choose the most optimal network architecture for a given problem, objectively compare distinct network models, estimate the uncertainties of the network predictions and the network weights, and fine-tune the model's hyperparameters. In the past, we adapted MacKay's framework to study the electroweak structure of nucleons [52, 53, 54, 4, 55, 3]. For that project, we developed our own C++ Bayesian neural network library. In contrast, in the present project, we utilize the PyTorch environment [56].
The paper is organized as follows: after the introduction, in Sect. 2 we briefly describe PINNs. The BF is reviewed in Sec. 3. In Sec. 4, we show the solutions of the heat (Sec. 4.1), wave (Sec. 4.2), and Burger's (Sec. 4.3) equations. In Sec. 5, we summarize.
## 2 Pinn
### Multilayer perceptron configuration
We shall consider the feed-forward neural network in the multilayer perceptron (MLP) configuration. The MLP, denoted by \(\mathcal{N}\), represents a nonlinear map from the input space of dimension \(d_{in}\) to the output space of dimension \(d_{out}\),
\[\mathcal{N}:\mathbb{R}^{d_{in}}\mapsto\mathbb{R}^{d_{out}}. \tag{1}\]
Note that \(\mathcal{N}=\mathcal{N}(\mathbf{x},\mathbf{w})\) is a function of input vector \(\mathbf{x}\in\mathbb{R}^{d_{in}}\) and its parameters called weights, denoted by the vector \(\mathbf{w}=(w_{1},\dots,w_{i},\dots,w_{W})\), where \(W\) is the total number of weights in a given network.
The MLP is represented by a graph consisting of several layers of connected units (nodes). Each edge of the graph corresponds to a function parameter, a weight, \(w_{i}\). A unit (node) in a given layer is a single-valued function (activation function). Note that every unit, except the bias unit, is connected with the nodes from the previous layer. In a graph representing a neural network, we distinguish an input (input units), hidden layers of units, and an output layer. Each layer (except the output layer) has a bias unit connected only with the following layer units. It is a single-valued constant function equal to one. An example of a graph of a network with three-dimensional input and two-dimensional output is given in Fig. 1. This network has two hidden layers with five and three hidden units. The properties of the network, \(\mathcal{N}\), are determined by its structure (architecture - topology of the connections) and the type of activation functions that settle in the neurons (units).

Figure 1: The above MLP contains two hidden unit layers. Empty squares denote the input, whereas filled circles represent the output; blue filled and open circles indicate hidden and bias units, respectively.
The universal approximation theorem [57, 58, 59] states that a feed-forward neural network with at least one layer of hidden units can approximate any given continuous function arbitrarily well. It is a fundamental property of neural networks used extensively in nonlinear regression and classification problems.
Note that increasing the number of units in a neural network broadens its adaptive abilities. However, if a network is large (contains many hidden units), there is a danger that it will overfit the data and, as a result, have poor generalization abilities. On the other hand, if the network is small (low number of hidden units), it tends to underfit the data. This dilemma is known as the bias-variance trade-off [60]. There are some rules of thumb for solving the problem of under/over-fitting. The most popular is to constrain the values of the weight parameters by adding to the loss (6) a penalty term of the form:
\[E_{w}=\frac{\alpha}{2}\sum_{i=1}^{W}w_{i}^{2}, \tag{2}\]
where \(\alpha\) is the decay width parameter. There are other popular methods; however, we shall utilize the one described above because of its Bayesian interpretation. Note that below we shall consider several decay width parameters, accommodated in the vector \(\mathbf{\alpha}\).
### PINN: a formulation
We formulate the PINN approach similarly to Ref. [61]. Let us consider the sets of equations:
\[Eq_{i}(\mathbf{x},\hat{f},\partial\hat{f},\partial\partial\hat{ f}) = 0,\quad i=1,2,\ldots,q,\ \mathbf{x}\in\mathcal{M}, \tag{3}\] \[\mathcal{B}_{j}(\mathbf{x},\hat{f},\partial\hat{f}) = 0,\quad j=1,2,\ldots,b,\ \mathbf{x}\in\partial\mathcal{M}, \tag{4}\]
where the \(Eq_{i}\)'s and \(\mathcal{B}_{j}\)'s are the differential and boundary operators, respectively; \(\mathcal{M}\subseteq\mathbb{R}^{d}\), \(d=1,2,\ldots\); and \(\hat{f}\) is the solution of the equations (3)-(4). Note that \(\partial\hat{f}\) and \(\partial\partial\hat{f}\) refer to any first-order and second-order partial derivatives with respect to the input variables, respectively.
In the simplest version of the PINN approach, it is assumed that an approximate solution of (3) with boundary conditions (4) is given by the neural network \(\mathcal{N}(\mathbf{x};\mathbf{w})\). The network solution is then obtained from
\[\mathbf{w}^{*}=\arg\min_{\mathbf{w}}E, \tag{5}\]
where
\[E = \frac{1}{2}\sum_{i=1}^{q}\beta_{q}^{i}\sum_{k=1}^{N_{q}}\left[Eq_{i}(\mathbf{x}_{k};\mathcal{N}(\mathbf{x}_{k},\mathbf{w}),\partial\mathcal{N}(\mathbf{x}_{k},\mathbf{w}),\partial\partial\mathcal{N}(\mathbf{x}_{k},\mathbf{w}))\right]^{2} \tag{6}\] \[+\frac{1}{2}\sum_{j=1}^{b}\beta_{b}^{j}\sum_{k=1}^{N_{b}}\left[\mathcal{B}_{j}(\mathbf{x}_{k};\mathcal{N}(\mathbf{x}_{k},\mathbf{w}),\partial\mathcal{N}(\mathbf{x}_{k},\mathbf{w}),\partial\partial\mathcal{N}(\mathbf{x}_{k},\mathbf{w}))\right]^{2}.\]
The parameters \(\beta_{q}^{i}\)'s and \(\beta_{b}^{j}\)'s control the relative weights of the components of the loss \(E\). To simplify the discussion, we shall consider only one \(\beta_{q}^{i}=\beta_{q}\) and one \(\beta_{b}^{j}=\beta_{b}\). We keep them together in the vector \(\mathbf{\beta}=(\beta_{q},\beta_{b})\). Moreover, in this paper, we consider the PDEs with \(q=1\). Hence the loss consists of
\[E_{D}^{q}=\frac{1}{2}\sum_{k=1}^{N_{q}}\left(Eq(\mathbf{x}_{k};\mathcal{N}( \mathbf{x}_{k},\mathbf{w}),\partial\mathcal{N}(\mathbf{x}_{k},\mathbf{w}), \partial\partial\mathcal{N}(\mathbf{x}_{k},\mathbf{w}))\right)^{2} \tag{7}\]
and
\[E_{D}^{b}=\frac{1}{2}\sum_{j=1}^{b}\sum_{k=1}^{N_{b}}\left(\mathcal{B}_{j}( \mathbf{x}_{k};\mathcal{N}(\mathbf{x}_{k},\mathbf{w}),\partial\mathcal{N}( \mathbf{x}_{k},\mathbf{w}),\partial\partial\mathcal{N}(\mathbf{x}_{k},\mathbf{ w}))\right)^{2}, \tag{8}\]
so that the total loss is
\[E=\beta_{q}E_{D}^{q}+\beta_{b}E_{D}^{b} \tag{9}\]
By \(N_{q/b}\), we denote the number of points in \(\mathcal{M}/\partial\mathcal{M}\) collected to optimize the network model. The total training dataset consists of points from \(\mathcal{M}\) and \(\partial\mathcal{M}\), namely, \(\mathcal{D}=\mathcal{D}_{q}\cup\mathcal{D}_{b}\).
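For concreteness, a minimal PyTorch sketch of the loss (9) for the heat equation of Sec. 4.1 is given below. The architecture, the numbers of collocation points, and the helper `grad` are illustrative assumptions, not the exact setup used in Sec. 4.

```python
# A minimal sketch (assumed architecture and point counts) of the PINN loss
# (9) for the heat equation u_t - kappa * u_xx = 0, built with PyTorch autograd.
import torch

torch.manual_seed(0)
kappa = 1.0
net = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)

def grad(outputs, inputs):
    # First derivatives of a column of scalar outputs w.r.t. the inputs.
    return torch.autograd.grad(outputs, inputs,
                               grad_outputs=torch.ones_like(outputs),
                               create_graph=True)[0]

# Interior collocation points (x, t) in (0,1) x (0,1): equation term E_D^q.
xt = torch.rand(100, 2, requires_grad=True)
u = net(xt)
du = grad(u, xt)                                  # columns: u_x, u_t
u_x, u_t = du[:, 0:1], du[:, 1:2]
u_xx = grad(u_x, xt)[:, 0:1]
E_q = 0.5 * ((u_t - kappa * u_xx) ** 2).sum()     # Eq. (7)

# Boundary and initial points: E_D^b for u(0,t)=u(1,t)=0, u(x,0)=sin(pi x).
t_b = torch.rand(25, 1)
x_0 = torch.rand(25, 1)
E_b = 0.5 * (net(torch.cat([torch.zeros_like(t_b), t_b], 1)) ** 2).sum() \
    + 0.5 * (net(torch.cat([torch.ones_like(t_b), t_b], 1)) ** 2).sum() \
    + 0.5 * ((net(torch.cat([x_0, torch.zeros_like(x_0)], 1))
              - torch.sin(torch.pi * x_0)) ** 2).sum()  # Eq. (8)

beta_q, beta_b = 1.0, 1.0
E = beta_q * E_q + beta_b * E_b                   # total loss, Eq. (9)
```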
## 3 Bayesian framework for neural networks
Solving a PDE in the PINN approach comes down to an optimization task. Namely, it is assumed that the neural network parameterizes the solution; then the loss, such as in Eq. (6), is formulated, and the model parameters are optimized. Solving the PDE can be understood as a Bayesian inference process. Indeed, we assume prior knowledge about the solution and its parameters and then search for the posterior realization. Another problem is the proper fine-tuning of the beta parameters, which control the impact of the boundary conditions on the solution. The choice of optimal network architecture is essential too. Finally, it is evident that the solution given by the neural network is only an approximation of the _true_ solution, or it can be treated as a statistical prediction of the desired solution. Therefore, the PINN solution should be provided with uncertainties. We shall argue in the next section that Bayesian statistical methods can help to establish the optimal structure of the network architecture; moreover, with the help of Bayesian reasoning, one may obtain the optimal values for hyperparameters such as \(\beta_{q}\), \(\beta_{b}\), and \(\alpha\) (a decay width), and compute uncertainties for the obtained fits.
### General foundations
The Bayesian Neural Network (BNN) framework [49, 50, 51, 32] we adapt aims to obtain the posterior densities for the network parameters, weights \(\mathbf{w}\), and hyperparameters \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\).
A choice of network architecture impacts the analysis results. Indeed, a model that is too simple, with a small number of neurons, cannot be flexible enough to describe the problem. On the other hand, a model that is too complex, with many neurons, tends to overfit the data. As mentioned above, the BF shall allow us to compare fits obtained for different networks and to get posterior densities for all necessary model parameters and hyperparameters.
In the BNN the posteriors for the model parameters and hyperparameters are obtained in the ladder approximation described below. Finally, the approach leads to the computation of the so-called evidence, a measure that ranks models.
The starting point of the procedure is to obtain the posterior density for the network weights. Assuming that the hyperparameters, \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\), and model architecture \(\mathcal{N}\) are fixed
\[P(\mathbf{w}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\mathcal{N },\mathcal{D}) = \frac{P(\mathcal{D}\mid\mathbf{w},\boldsymbol{\beta},\mathcal{N })P(\mathbf{w}\mid\boldsymbol{\alpha},\mathcal{N})}{P(\mathcal{D}\mid \boldsymbol{\alpha},\boldsymbol{\beta},\mathcal{N})} \tag{10}\] \[P(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta}, \mathcal{N}) = \int P(\mathcal{D}\mid\mathbf{w},\boldsymbol{\beta},\mathcal{N })P(\mathbf{w}\mid\boldsymbol{\alpha},\mathcal{N})d\mathbf{w} \tag{11}\]
In the above, we implicitly assumed that the hyperparameter \(\boldsymbol{\beta}\) appears in the likelihood \(P(\mathcal{D}\mid\mathbf{w},\boldsymbol{\beta},\mathcal{N})\), while the hyperparameter \(\boldsymbol{\alpha}\) appears only in the prior \(P(\mathbf{w}\mid\boldsymbol{\alpha},\mathcal{N})\).
In principle, the hyperparameters should be integrated out from the model's prior densities, but this is not easy to perform. In the BNN framework, it is assumed that the posterior density \(p(\boldsymbol{\alpha},\boldsymbol{\beta}\mid\mathcal{N},\mathcal{D})\) has a sharp peak at \(\boldsymbol{\alpha}_{MP}\), \(\boldsymbol{\beta}_{MP}\) (\(MP\) - maximum of the posterior); then the prior for the weights reads \(p(\mathbf{w}\mid\mathcal{N},\mathcal{D})\approx p(\mathbf{w}\mid\boldsymbol{\alpha}_{MP},\boldsymbol{\beta}_{MP},\mathcal{N},\mathcal{D})\). We see that to obtain the desired prior, one needs to find \(\boldsymbol{\alpha}_{MP}\), \(\boldsymbol{\beta}_{MP}\).
In the second step of the ladder, we use the likelihood for the hyperparameters, Eq. 11, to evaluate the posterior density for the hyperparameters, namely,
\[P(\boldsymbol{\alpha},\boldsymbol{\beta}\mid\mathcal{N}, \mathcal{D}) = \frac{P(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta}, \mathcal{N})P(\boldsymbol{\alpha},\boldsymbol{\beta}\mid\mathcal{N})}{P( \mathcal{D}\mid\mathcal{N})} \tag{12}\] \[P(\mathcal{D}\mid\mathcal{N}) = \int P(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta}, \mathcal{N})P(\boldsymbol{\alpha},\boldsymbol{\beta}\mid\mathcal{N})d \boldsymbol{\alpha}\,d\boldsymbol{\beta} \tag{13}\]
Note that the density (13) is called the model evidence. In practice, the prior for the hyperparameters factorizes into the product of two densities, namely,
\[P(\boldsymbol{\alpha},\boldsymbol{\beta}\mid\mathcal{N})=P(\boldsymbol{ \alpha}\mid\mathcal{N})\,P(\boldsymbol{\beta}\mid\mathcal{N}). \tag{14}\]
In the third step, we compute the posterior density \(P(\mathcal{N}\mid\mathcal{D})\). It is the quantity that ranks the models, i.e., the model most favored by the data has the highest value of \(P(\mathcal{N}\mid\mathcal{D})\).
\[P(\mathcal{N}\mid\mathcal{D}) = \frac{P(\mathcal{D}\mid\mathcal{N})P(\mathcal{N})}{P(\mathcal{D})}. \tag{15}\]
In practice, any model is equally likely at the beginning of the analysis, and \(P(\mathcal{N})\) is uninformative. Hence
\[P(\mathcal{N}\mid\mathcal{D}) \sim P(\mathcal{D}\mid\mathcal{N}) \tag{16}\]
and to rank the models, it is enough to compute the evidence.
### BNN framework
This section summarizes the main steps of the BNN framework. For more details, see chapter 10 of [32]. The main idea is to assume that all the densities should have a Gaussian-like shape.
#### Step 1
The prior density for the weights is constructed based on several observations. Namely, a network works most efficiently for weight parameters that take values around zero, and there are no sign preferences for the weights. Moreover, one can distinguish several classes of weights in the network. This is a result of the internal symmetries of a network model. For instance, the interchange of two units in the same hidden layer does not change the functional form of the network map. The weights which belong to the same class should have the same scaling property [32]. Accordingly, we assume the following Gaussian prior density for the weights:
\[p(\mathbf{w}\mid\boldsymbol{\alpha},\mathcal{N})=\frac{1}{Z_{W}}\exp\left(- \sum_{i=1}^{C}\alpha_{i}E_{w}^{i}\right)\equiv\frac{1}{Z_{W}}\exp\left(-E_{w} \right), \tag{17}\]
and
\[E_{w}^{i} = \frac{1}{2}\sum_{k=s_{i-1}}^{s_{i}}w_{k}^{2},\quad s_{i}=\sum_{l =1}^{i}W_{l},\;s_{0}=1 \tag{18}\] \[Z_{W}(\boldsymbol{\alpha}) = \prod_{i=1}^{C}\left(\frac{2\pi}{\alpha_{i}}\right)^{\frac{W_{i}} {2}}, \tag{19}\]
In the above, \(C\) denotes the number of weight classes; \(w_{j}\) refers to the \(j\)-th weight; \(W_{i}\) is the number of weights in the \(i\)-th class; \(\alpha_{i}\) is the decay width for the \(i\)-th class of weights. The vector \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{C})\) contains the decay widths.
Note that the \(\alpha_{i}\)'s play two roles. They control the underfitting/overfitting: if \(\alpha\) is too small, the network tends to overfit, whereas when \(\alpha\) is too large, the network does not adapt to the data. Moreover, the \(1/\sqrt{\alpha_{i}}\)'s define the widths of the Gaussian prior for the weights.
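For illustration, the class-wise prior term \(\sum_{i=1}^{C}\alpha_{i}E_{w}^{i}\) of Eq. (17) can be realized by assigning one decay width to each parameter tensor. The sketch below is one possible realization, with an assumed small network and log-parameterized \(\alpha_{i}\) to keep the decay widths positive.

```python
# A minimal sketch of the class-wise prior term sum_i alpha_i * E_w^i of
# Eq. (17): one decay width per parameter tensor (weights/biases per layer).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)
params = list(net.parameters())                       # C = 4 weight classes here
log_alpha = torch.zeros(len(params), requires_grad=True)  # alpha_i = exp(.) > 0

def prior_term():
    # E_w = sum_i alpha_i * (1/2) * sum of squared weights in class i.
    return sum(la.exp() * 0.5 * (p ** 2).sum()
               for la, p in zip(log_alpha, params))
```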
Now, we shall construct the likelihood so that it contains all the contributions of the PINN loss terms. Assume that we have one constraint coming from the differential equation and one determined by the boundary conditions; then we have two corresponding hyperparameters \(\beta_{q}\), \(\beta_{b}\), and the likelihood reads
\[p(\mathcal{D}\mid\mathbf{w},\boldsymbol{\beta})=\frac{1}{Z_{D}(\boldsymbol{ \beta})}\exp\left(-\beta_{q}E_{D}^{q}-\beta_{b}E_{D}^{b}\right), \tag{20}\]
where the normalization factor reads
\[Z_{D}(\boldsymbol{\beta})=Z_{D}(\beta_{q})Z_{D}(\beta_{b}) \tag{21}\]
and
\[Z_{D}(\beta_{q(b)})=\left(\frac{2\pi}{\beta_{q(b)}}\right)^{\frac{N_{q(b)}}{ 2}}. \tag{22}\]
Having the prior and the likelihood, we compute the posterior for the weights
\[p(\mathbf{w}\mid\mathcal{D},\boldsymbol{\alpha},\boldsymbol{\beta}) = \frac{p(\mathcal{D}\mid\mathbf{w},\boldsymbol{\beta})\,p(\mathbf{w}\mid\boldsymbol{\alpha})}{\int d\mathbf{w}\,p(\mathcal{D}\mid\mathbf{w},\boldsymbol{\beta})\,p(\mathbf{w}\mid\boldsymbol{\alpha})} \tag{24}\] \[= \frac{1}{Z_{T}(\boldsymbol{\alpha},\boldsymbol{\beta})}\exp\left(-\left\{\sum_{i=1}^{C}\alpha_{i}E_{w}^{i}+\beta_{q}E_{D}^{q}+\beta_{b}E_{D}^{b}\right\}\right),\]
where
\[E_{T} = \sum_{i=1}^{C}\alpha_{i}E_{w}^{i}+\beta_{q}E_{D}^{q}+\beta_{b}E_{D}^ {b} \tag{25}\] \[Z_{T}(\mathbf{\alpha},\mathbf{\beta}) = \int d\mathbf{w}\exp\left(-E_{T}(\mathbf{w})\right). \tag{26}\]
Now it is assumed that the posterior has a sharp local maximum. To obtain it in analytic form, one expands \(E_{T}\) around its minimum
\[E_{T}(\mathbf{w})\approx E_{T}(\mathbf{w}_{MP})+\frac{1}{2}\left(\mathbf{w}- \mathbf{w}_{MP}\right)^{T}\mathbf{A}\left(\mathbf{w}-\mathbf{w}_{MP}\right), \tag{27}\]
where \(\mathbf{A}\) is defined by
\[\mathbf{A}=\mathbf{H}+\mathbf{I}_{W}(\mathbf{\alpha}), \tag{28}\]
where
\[\mathbf{I}_{W}(\mathbf{\alpha})=\mathrm{diag}(\underbrace{\alpha_{1},\ldots, \alpha_{1}}_{W_{1}\;times},\ldots,\underbrace{\alpha_{C},\ldots,\alpha_{C}}_{W _{C}\;times}) \tag{29}\]
and
\[\mathbf{H}_{ij}=\beta_{q}\nabla_{i}\nabla_{j}E_{D}^{q}+\beta_{b}\nabla_{i} \nabla_{j}E_{D}^{b} \tag{30}\]
is the Hessian.
Within the above approximation, one gets:
\[p(\mathbf{w}\mid\mathcal{D},\mathbf{\alpha},\mathbf{\beta})=\frac{1}{Z_{E_{T}}^{*}} \exp\left(-E_{T}(\mathbf{w}_{MP})-\frac{1}{2}\left(\mathbf{w}-\mathbf{w}_{MP} \right)^{T}\mathbf{A}\left(\mathbf{w}-\mathbf{w}_{MP}\right)\right), \tag{31}\]
where \(Z_{E_{T}}\to Z_{E_{T}}^{*}\),
\[Z_{E_{T}}^{*}=\exp\left(-E_{T}(\mathbf{w}_{MP})\right)\sqrt{\frac{(2\pi)^{W}} {|\mathbf{A}|}}. \tag{32}\]
\(|\mathbf{A}|\) is the determinant of the matrix \(\mathbf{A}\). Finally, we obtain the posterior
\[p(\mathbf{w}\mid\mathcal{D},\mathbf{\alpha},\mathbf{\beta})=\sqrt{\frac{|\mathbf{A}| }{(2\pi)^{W}}}\exp\left(-\frac{1}{2}\left(\mathbf{w}-\mathbf{w}_{MP}\right)^ {T}\mathbf{A}\left(\mathbf{w}-\mathbf{w}_{MP}\right)\right). \tag{33}\]
Note that to find \(\mathbf{w}_{MP}\), one minimizes the loss (25).
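The matrix \(\mathbf{A}\) of Eq. (28) can be assembled with automatic differentiation. The sketch below assumes, for illustration, a single weight class, a scalar \(\beta\), and a quadratic data loss standing in for the PDE and boundary terms; it uses `torch.func.functional_call` (PyTorch 2.x) so that the Hessian is taken with respect to a flat weight vector.

```python
# A minimal sketch of Eq. (28): A = H + I_W(alpha) at w_MP, with the Hessian
# H of the beta-weighted data loss computed by autograd. A small regression
# loss stands in for the PDE/boundary terms; one weight class is assumed.
import torch
from torch.func import functional_call

net = torch.nn.Sequential(torch.nn.Linear(1, 8), torch.nn.Tanh(),
                          torch.nn.Linear(8, 1))
x = torch.linspace(-1, 1, 20).unsqueeze(1)
y = torch.sin(torch.pi * x)
alpha, beta = 0.1, 1.0

names = [n for n, _ in net.named_parameters()]
shapes = [p.shape for _, p in net.named_parameters()]

def unflatten(wvec):
    # Slice the flat weight vector back into named parameter tensors.
    out, i = {}, 0
    for n, s in zip(names, shapes):
        out[n] = wvec[i:i + s.numel()].view(s)
        i += s.numel()
    return out

def data_loss(wvec):
    pred = functional_call(net, unflatten(wvec), (x,))
    return beta * 0.5 * ((pred - y) ** 2).sum()

# Assume the weights are already at (or near) w_MP after Step 1.
w_mp = torch.nn.utils.parameters_to_vector(net.parameters()).detach()
H = torch.autograd.functional.hessian(data_loss, w_mp)   # Eq. (30)
A = H + alpha * torch.eye(w_mp.numel())                  # Eq. (28)
```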
#### Step 2
We search for the hyperparameters \(\boldsymbol{\alpha}_{MP}\), \(\boldsymbol{\beta}_{MP}\) that maximize the posterior \(p(\boldsymbol{\alpha},\boldsymbol{\beta}\mid\mathcal{N},\mathcal{D})\). Because there is no prior knowledge about them, non-informative priors are usually discussed [62]. Hence the density \(p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\mathcal{N})\) should have its peak in the same position as \(p(\boldsymbol{\alpha},\boldsymbol{\beta}\mid\mathcal{N},\mathcal{D})\). Then we compute
\[p(\mathcal{D}\mid\mathbf{\alpha},\mathbf{\beta},\mathcal{N})=\int d\mathbf{w}p( \mathcal{D}\mid\mathbf{w},\mathbf{\beta})p(\mathbf{w}\mid\mathbf{\alpha},\mathcal{N})= \frac{Z_{E_{T}}^{*}}{Z_{D}(\mathbf{\beta})Z_{W}(\mathbf{\alpha})}, \tag{34}\]
where it was assumed that the prior depends only on \(\boldsymbol{\alpha}\), whereas the likelihood depends only on \(\boldsymbol{\beta}\). Then one gets
\[\ln p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\mathcal{N}) \approx -\sum_{i=1}^{C}\alpha_{i}E_{w}^{i}(\mathbf{w}_{MP})-\beta_{q}E_{D}^{q}(\mathbf{w}_{MP})-\beta_{b}E_{D}^{b}(\mathbf{w}_{MP}) \tag{35}\] \[-\frac{1}{2}\ln|\mathbf{A}|-\sum_{i=1}^{C}\frac{W_{i}}{2}\ln\left(\frac{2\pi}{\alpha_{i}}\right)-\frac{N_{q}}{2}\ln\left(\frac{2\pi}{\beta_{q}}\right)-\frac{N_{b}}{2}\ln\left(\frac{2\pi}{\beta_{b}}\right).\]
We search for the maximum of \(p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\mathcal{N})\). In the original MacKay framework, the hyperparameters are iterated according to the necessary condition for the maximum, namely from the equations:
\[\frac{\partial}{\partial\mathbf{\alpha}}p(\mathcal{D}\mid\mathbf{\alpha},\mathbf{\beta}, \mathcal{N})=0,\quad\frac{\partial}{\partial\mathbf{\beta}}p(\mathcal{D}\mid\mathbf{ \alpha},\mathbf{\beta},\mathcal{N})=0.\]
In our approach, we propose to consider an additional loss given by \(E_{hyp}=-\ln p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta})\) and optimize it with respect to the hyperparameters. The loss, after neglecting constant terms, has the form
\[E_{hyp} = \sum_{i=1}^{C}\alpha_{i}E_{w}^{i}(\mathbf{w}_{MP})+\beta_{q}E_{D} ^{q}(\mathbf{w}_{MP})+\beta_{b}E_{D}^{b}(\mathbf{w}_{MP})+\frac{1}{2}\ln| \mathbf{A}| \tag{36}\] \[-\sum_{i=1}^{C}\frac{W_{i}}{2}\ln\alpha_{i}-\frac{N_{q}}{2}\ln \beta_{q}-\frac{N_{b}}{2}\ln\beta_{b}.\]
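A possible implementation of Step 2 is to take optimizer steps on \(E_{hyp}\) at fixed \(\mathbf{w}_{MP}\). The sketch below assumes a single \(\alpha\) and a single \(\beta\), and placeholder values for \(E_{w}\), \(E_{D}\), and the Hessian of the unweighted data loss; log-parameterization keeps the hyperparameters positive.

```python
# A minimal sketch of minimizing the hyperparameter loss (36) for a single
# alpha and beta (the simplified setup used later for the heat equation).
# E_w, E_D, H_D, W, N are placeholders assumed to come from Step 1 at w_MP.
import torch

W, N = 25, 150                                      # weights / data points
E_w = torch.tensor(3.0)                             # placeholder E_w(w_MP)
E_D = torch.tensor(0.5)                             # placeholder E_D(w_MP)
H_D = torch.eye(W)                                  # placeholder data Hessian

log_ab = torch.zeros(2, requires_grad=True)         # (log alpha, log beta)
opt = torch.optim.AdamW([log_ab], lr=1e-2)

for _ in range(200):
    alpha, beta = log_ab.exp()
    A = beta * H_D + alpha * torch.eye(W)           # Eq. (28), one class
    E_hyp = (alpha * E_w + beta * E_D
             + 0.5 * torch.logdet(A)
             - 0.5 * W * torch.log(alpha)
             - 0.5 * N * torch.log(beta))           # Eq. (36)
    opt.zero_grad()
    E_hyp.backward()
    opt.step()
```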
#### Step 3
To compute the evidence, one has to integrate out the hyperparameters from \(p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\mathcal{N})\), using similar arguments as for the prior for the weights, namely, that the posterior for the hyperparameters has a sharp peak in these parameters. As argued in [32, 62], \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) are scaling parameters that are always positive. Therefore, instead of considering \(p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta},\mathcal{N})\,d\boldsymbol{\alpha}\,d\boldsymbol{\beta}\), one discusses \(p(\mathcal{D}\mid\ln\boldsymbol{\alpha},\ln\boldsymbol{\beta},\mathcal{N})\,d\ln\boldsymbol{\alpha}\,d\ln\boldsymbol{\beta}\). Then it is assumed that the prior for the log of the hyperparameters is flat. As a result, the log of the evidence reads
\[\ln p(\mathcal{D}\mid\mathcal{N})=\ln p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta})+\sum_{i=1}^{C}\ln\sigma_{\ln\alpha_{i}}+\ln\sigma_{\ln\beta_{q}}+\ln\sigma_{\ln\beta_{b}}+comb, \tag{37}\]
where
\[\ln p(\mathcal{D}\mid\boldsymbol{\alpha},\boldsymbol{\beta}) = -\sum_{i=1}^{C}\alpha_{i}E_{w}^{i}(\mathbf{w}_{MP})-\beta_{q}E_{D}^{q}(\mathbf{w}_{MP})-\beta_{b}E_{D}^{b}(\mathbf{w}_{MP}) \tag{38}\] \[-\frac{1}{2}\ln|\mathbf{A}|+\sum_{i=1}^{C}\frac{W_{i}}{2}\ln\alpha_{i}+\frac{N_{q}}{2}\ln\beta_{q}+\frac{N_{b}}{2}\ln\beta_{b}\]
and
\[\ln\sigma_{\ln\alpha_{i}} = -\frac{1}{2}\ln\left[-\alpha_{i}^{2}\frac{\partial^{2}}{\partial \alpha_{i}^{2}}\ln p(\mathcal{D}\mid\mathbf{\alpha},\mathbf{\beta})\right] \tag{39}\] \[\ln\sigma_{\ln\beta_{i}} = -\frac{1}{2}\ln\left[-\beta_{i}^{2}\frac{\partial^{2}}{\partial \beta_{i}^{2}}\ln p(\mathcal{D}\mid\mathbf{\alpha},\mathbf{\beta})\right]. \tag{40}\]
The expression (37) contains the symmetry contribution, as discussed in Ref. [63], namely,
\[comb=\sum_{i=1}^{L}\left(\ln M_{i}!+M_{i}\ln 2\right), \tag{41}\]
where \(M_{i}\) denotes the number of units in the \(i\)-th hidden layer and \(L\) denotes the number of hidden layers.
#### Uncertainty for the network predictions
We compute the uncertainty for network predictions in the covariance matrix approximation. The \(1\sigma\) uncertainty due to the variation of the weights reads [64]
\[\Delta^{2}\mathcal{N}(\mathbf{x}_{q(b)},\mathbf{w}_{MP})=\sum_{i,j}\nabla_{i} \mathcal{N}(\mathbf{x}_{q(b)},\mathbf{w}_{MP})\;(\mathbf{A}^{-1})_{ij}\nabla_ {j}\mathcal{N}(\mathbf{x}_{q(b)},\mathbf{w}_{MP}), \tag{42}\]
where \(\mathbf{x}_{q(b)}\) refers to point from \(\mathcal{M}(\partial\mathcal{M})\); \(\nabla_{i}\equiv\partial/\partial w_{i}\).
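In code, Eq. (42) amounts to one gradient of the network output with respect to the weights and one linear solve with \(\mathbf{A}\). The sketch below assumes a `net` and a matrix `A` as in the earlier Laplace-approximation sketch.

```python
# A minimal sketch of the 1-sigma prediction uncertainty (42):
# Delta^2 N(x) = grad_w N(x)^T A^{-1} grad_w N(x). `net` and `A` are assumed
# to be available from the earlier Laplace-approximation sketch.
import torch

def prediction_variance(net, A, x_query):
    out = net(x_query.unsqueeze(0)).squeeze()        # scalar N(x, w_MP)
    params = [p for p in net.parameters()]
    g = torch.autograd.grad(out, params)             # nabla_w N
    g = torch.cat([gi.reshape(-1) for gi in g])
    return g @ torch.linalg.solve(A, g)              # Eq. (42)

# Usage: sigma = prediction_variance(net, A, x_query).sqrt() gives the band.
```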
### Training algorithm
Here we briefly summarize the training algorithm.
* Set the network architecture and initial values for the weights.
* Set initial values of \(\alpha_{i}=\alpha_{i}^{(0)}\) and \(\beta_{q/b}=\beta_{q/b}^{(0)}\).
* To get \(\mathbf{w}^{*}\), minimize the loss (25).
* To get the hyperparameters \(\alpha\) and \(\beta\), minimize the loss (36).
* The best network has the highest evidence value computed from (37).
Note that during the training process, one has to update the weights and the values of the hyperparameters simultaneously. However, it turned out that to maintain the convergence of the process, we had to start optimizing the \(\alpha\)'s and \(\beta\)'s several epochs after we started optimizing the weights.
## 4 Numerical experiments
We shall solve three types of PDEs: the heat, wave, and Burger's equations. For the first two, the analytic solutions are known; for the last one, there is no analytical solution. Our approach is implemented using the PyTorch framework [56]. We collected 100 solutions for each PDE to choose the best one according to the evidence.
### Heat equation
Let us consider the heat equation of the form
\[\frac{\partial u}{\partial t}-\kappa\frac{\partial^{2}u}{\partial x ^{2}} = 0,\quad x\in[0,1],\,\,\,t\in[0,1], \tag{43}\] \[u(0,t)=u(1,t) = 0,\] (44) \[u(x,0) = \sin(\pi x) \tag{45}\]
For the above case the analytic solution reads \(u_{\mathrm{true}}(x,t)=\sin(\pi x)\exp(-\kappa\pi^{2}t)\).
To solve the heat equation (43) with boundary conditions (44-45), we consider a network with three layers of units. Each hidden layer has 16 neurons with hyperbolic tangent activation functions; the output layer has linear activation functions.
Figure 2: In the left panel: the training data, the blue/red points correspond to the equation/boundary contribution to the loss for the heat equation. In the right panel: the histogram of the relative log of evidence (the difference between a given log of evidence and the highest log of evidence).
The data on the boundary, Eqs. (44-45), were randomly drawn from the uniform distribution (50 points). Similarly, we obtained data in \((0,1)\times(0,1)\) (100 points). The data considered in the training are shown in Fig. 2 (left panel). The model was optimized with the AdamW algorithm run for about 800 epochs. AdamW was also utilized to optimize \(\alpha\) and \(\beta\). The Bayesian training was performed in the simplest fashion. Namely, we kept one \(\alpha\) parameter for all weight classes (as in Ref. [3]) and one \(\beta=\beta_{q}=\beta_{b}\) for the equation and boundary contributions.
We started optimizing the hyperparameters after 400 epochs to maintain the procedure's convergence. Then, every 25 epochs, we updated the \(\alpha\) and \(\beta\) parameters to minimize the error \(E_{hyp}\). In Fig. 3, we show one of the registered training examples. The method leads to the convergence of both discussed losses: \(E_{T}\) (with respect to the weights) and \(E_{hyp}\) (with respect to the hyperparameters). We started the optimization from a low value of \(\alpha\) (minor relevance of the penalty term). During the training, \(\alpha\) changes by an order of magnitude. The \(\beta\) parameter varies slowly during the optimization process.
For every fit, we compute the evidence. In Fig. 2 (right panel), we show the histogram of the evidence values for 100 models. The best model has the highest evidence. In Fig. 4, we show our best fit together with the exact analytic solution. The fit is plotted together with its uncertainty. Note that we show the \(x\)-dependence of the solution at five time steps.
Figure 4: The exact (\(u_{true}\)) and PINN (\(u_{pred}\)) solutions for the heat equation.
Figure 3: An example of the evolution of the loss \(E_{T}\) (first panel from the left), the \(\beta\) hyperparameter (third panel from the right), the \(\alpha\) hyperparameter (second from the right), and the loss \(E_{hyp}\) (last panel) during the training of the network that solves the heat equation.
### Wave equation
Let us consider the following wave equation
\[\frac{\partial^{2}u}{\partial t^{2}}-\frac{\partial^{2}u}{\partial x^{2}} = 0,\quad x\in[0,1],\ t\in[0,1] \tag{46}\] \[u(0,t)=u(1,t) = 0\] (47) \[\frac{\partial u}{\partial t}(x,0) = 0 \tag{48}\]
with
\[u(x,0)=\sin(\pi x)+\frac{1}{2}\sin(2\pi x). \tag{49}\]
or
\[u(x,0)=\sin(\pi x) \tag{50}\]
For Eqs. (46), (47), (48) and either (49) or (50), the analytic solution reads either \(u_{\rm true}(x,t)=\sin(\pi x)\cos(\pi t)+\frac{1}{2}\sin(2\pi x)\cos(2\pi t)\) or \(u_{\rm true}(x,t)=\sin(\pi x)\cos(\pi t)\), respectively.
As demonstrated in the previous subsection, the BNN framework works well for the heat equation. Indeed, to find the solution, it was enough to consider a rather small network, and the optimization took a relatively small number of epochs. Solving the wave equation with the BNN turned out to be more challenging. In this case, we consider a network with four hidden layers. Each layer has 16 hidden units, with the hyperbolic tangent as the activation function. To obtain successful fits, we generated a larger data set than for the heat equation case. Namely, we drew from a uniform distribution 500 and 100 points for \(\mathcal{M}\) and \(\partial\mathcal{M}\), respectively. We show the distribution of the dataset in the left panel of Fig. 5.
The wave equation with initial condition (49) was solved in the simplified framework with only one \(\alpha\) and one \(\beta\) hyperparameter. For this case, the learning process lasted significantly longer than for the heat equation and took more than 3000 epochs. We started the optimization of the hyperparameters in the 1500th epoch. However, because the computation of the Hessian took quite a long time, we started taking into account the covariance matrix term in \(E_{hyp}\) above 2500 epochs, when the algorithm was close to the minimum of the loss (25). A training example for this case is given in Fig. 6.
Figure 5: In the left panel: the training data, the blue/red points correspond to equation/boundary contribution to the loss for the wave equation. In the right panel: the histogram of the relative log of evidence (the difference between a given log of evidence and the highest log of evidence).
Similarly as for the heat equation, we show the histogram of the relative log of the evidence values in the right panel of Fig. 5. Our best fit agrees with the true solution; see Fig. 7. However, one can notice a slight difference between the exact and network model solutions of the wave equation. To visualize this feature, we plot \(u_{true}(x,t)-u_{pred}(x,t)\) together with the uncertainty in Fig. 8.
Figure 6: An example of the parameters' evolution during the wave equation training.
Figure 7: The exact (\(u_{true}\)) and PINN (\(u_{pred}\)) solutions for the wave equation with initial condition (49).
Keeping more than one \(\alpha\) and \(\beta\) parameter complicates the numerical computation, making it more challenging to obtain convergence. However, for the wave equation with the simpler initial condition (50), we performed the complete analysis, keeping one \(\alpha\) for the ordinary weights of every layer of units and a separate \(\alpha\) for the bias weights. We also kept two betas, namely \(\beta_{q}\) for the equation term and \(\beta_{b}\) for the boundary term.
To perform the analysis, we prepared a dataset containing 50 points for \(\mathcal{M}\) and 50 points for \(\partial\mathcal{M}\). The training lasted 1500 epochs; the optimization of the hyperparameters started in the 750th epoch. The neural network contained three hidden layers with 16 neurons each, so we had 8 \(\alpha\) parameters (two for each layer: one for the ordinary weights, and one for the bias weights). We obtained a fit that agrees with the true solution; see Fig. 9.
### Burger's equation
For the last example of PDEs, we consider Burger's equation with Dirichlet boundary conditions.
\[\frac{\partial u}{\partial t}+u\,\frac{\partial u}{\partial x}-\frac{1}{100\pi}\frac{\partial^{2}u}{\partial x^{2}} = 0,\quad x\in[-1,1],\ \ t\in[0,1] \tag{51}\] \[u(x,0) = -\sin(\pi x)\] (52) \[u(-1,t) = u(1,t)=0 \tag{53}\]
Figure 8: Comparison of the obtained uncertainties and real error for the solution of the wave equation with boundary condition (49). The \(y=0\) axes correspond to the exact agreement between ”true” and network solutions, and it is included, for practically all \(x\), in the confidence interval marked by the grey area.
Figure 9: The exact (\(u_{true}\)) and PINN (\(u_{pred}\)) solutions for the wave equation with boundary conditions (50).
It is the same equation as discussed in Ref. [11].
Burger's equation is the most difficult to solve among the three PDEs discussed in this paper. To find the solution, we consider a network with four hidden layers of units, each with 20 hidden neurons. The dataset contains 5000 and 1000 points for \(\mathcal{M}\) and \(\partial\mathcal{M}\), respectively. We show the data distribution in Fig. 10 (left panel). The network is trained in a mini-batch configuration (10 batches for each epoch). Similarly to the two previous PDE examples, we collected 100 fits. The histogram of the log of the evidence for the collected fits is given in Fig. 10 (right panel). In Fig. 11, we compare our best fit with the numerical solution obtained in Ref. [11]. A good agreement between the predictions of both approaches is seen.
Figure 11: The ”true” solution (\(u_{true}\)) - the numerical solution from [11]), and our best model predictions (\(u_{pred}\)) for Burger’s equation.
Figure 10: In the left panel: the training data, the blue/red points correspond to the equation/boundary contribution to the loss for Burger’s equation. In the right panel: the histogram of the relative log of evidence (the difference between a given and highest log of evidence).
## 5 Summary
We adopted the Bayesian framework for the PINN to solve the heat, wave, and Burger's equations. The network solutions agree with the "true" ones. The method allowed us to compute the \(1\sigma\) uncertainty due to the variation of the network parameters. We have demonstrated (for the wave equation problem) that the relative weights of the loss components, such as the boundary condition loss term, can be treated as the model's hyperparameters and fine-tuned within the Bayesian algorithm. Indeed, the network weights and the hyperparameters were optimized in parallel during the training process. One of the merits of the Bayesian framework is that there is no need for a validation data set; the entire collected data can be utilized in the analysis. However, when the network is defined by a large number of weights, the numerical evaluation of the Hessian takes a long time and is imprecise, and in this case the method might work ineffectively. For small-size networks, however, the discussed Bayesian approach works well.
## Acknowledgments
K.M.G was supported by the program "Excellence initiative--research university" (2020-2026 University of Wroclaw).
|
2302.09235 | Generalization and Stability of Interpolating Neural Networks with
Minimal Width | We investigate the generalization and optimization properties of shallow
neural-network classifiers trained by gradient descent in the interpolating
regime. Specifically, in a realizable scenario where model weights can achieve
arbitrarily small training error $\epsilon$ and their distance from
initialization is $g(\epsilon)$, we demonstrate that gradient descent with $n$
training data achieves training error $O(g(1/T)^2 /T)$ and generalization error
$O(g(1/T)^2 /n)$ at iteration $T$, provided there are at least
$m=\Omega(g(1/T)^4)$ hidden neurons. We then show that our realizable setting
encompasses a special case where data are separable by the model's neural
tangent kernel. For this and logistic-loss minimization, we prove the training
loss decays at a rate of $\tilde O(1/ T)$ given polylogarithmic number of
neurons $m=\Omega(\log^4 (T))$. Moreover, with $m=\Omega(\log^{4} (n))$ neurons
and $T\approx n$ iterations, we bound the test loss by $\tilde{O}(1/n)$. Our
results differ from existing generalization outcomes using the
algorithmic-stability framework, which necessitate polynomial width and yield
suboptimal generalization rates. Central to our analysis is the use of a new
self-bounded weak-convexity property, which leads to a generalized local
quasi-convexity property for sufficiently parameterized neural-network
classifiers. Eventually, despite the objective's non-convexity, this leads to
convergence and generalization-gap bounds that resemble those found in the
convex setting of linear logistic regression. | Hossein Taheri, Christos Thrampoulidis | 2023-02-18T05:06:15Z | http://arxiv.org/abs/2302.09235v2 | # Generalization and Stability of Interpolating Neural Networks with Minimal Width
###### Abstract
We investigate the generalization and optimization properties of shallow neural-network classifiers trained by gradient descent in the interpolating regime. Specifically, in a realizable scenario where model weights can achieve arbitrarily small training error \(\epsilon\) and their distance from initialization is \(g(\epsilon)\), we demonstrate that gradient descent with \(n\) training data achieves training error \(O(g(1/T)^{2}/T)\) and generalization error \(O(g(1/T)^{2}/n)\) at iteration \(T\), provided there are at least \(m=\Omega(g(1/T)^{4})\) hidden neurons. We then show that our realizable setting encompasses a special case where data are separable by the model's neural tangent kernel. For this and logistic-loss minimization, we prove the training loss decays at a rate of \(\tilde{O}(1/T)\) given polylogarithmic number of neurons \(m=\Omega(\log^{4}(T))\). Moreover, with \(m=\Omega(\log^{4}(n))\) neurons and \(T\approx n\) iterations, we bound the test loss by \(\tilde{O}(1/n)\). Our results differ from existing generalization outcomes using the algorithmic-stability framework, which necessitate polynomial width and yield suboptimal generalization rates. Central to our analysis is the use of a new self-bounded weak-convexity property, which leads to a generalized local quasi-convexity property for sufficiently parameterized neural-network classifiers. Eventually, despite the objective's non-convexity, this leads to convergence and generalization-gap bounds that resemble those found in the convex setting of linear logistic regression.
## 1 Introduction
Neural networks have remarkable expressive capabilities and can memorize a complete dataset even with mild overparameterization. In practice, using gradient descent (GD) on neural networks with logistic or cross-entropy loss can result in the objective reaching zero training error and close to zero training loss. Zero training error, often referred to as "interpolating" the data, indicates perfect classification of the dataset. Despite their strong memorization ability, these networks also exhibit remarkable generalization capabilities to new data. This has motivated a surge of studies in recent years exploring the optimization and generalization properties of first-order gradient methods in overparameterized neural networks, with a specific focus on the so-called Neural Tangent Kernel (NTK) regime. In the NTK regime, the model operates as the first-order approximation of the network at a sufficiently large initialization or at the large-width limit (Jacot et al., 2018; Chizat et al., 2019). Prior works on this topic mostly focused on quadratic-loss minimization and their optimization/generalization guarantees required network widths that increased polynomially with the sample size \(n\). This, however, is not in line with practical experience. Improved results were obtained more recently by (Ji and Telgarsky, 2020; Chen et al., 2020) who have investigated the optimization and generalization of ReLU neural networks with logistic loss, which is more suitable for classification tasks. Assuming that the NTK with respect to the model can interpolate the data (i.e. separate them with positive margin \(\gamma\)), they showed through a Rademacher complexity analysis that GD on neural networks with polylogarithmic width can achieve generalization guarantees that decrease with the sample size \(n\) at a rate of \(\tilde{O}(\frac{1}{\sqrt{n}})\).
In this paper, we provide rate-optimal optimization and generalization analyses of GD for shallow neural networks of minimal width assuming that the model itself can interpolate the data. We focus on two-layer networks with smooth activations that can almost surely separate \(n\) training samples from the data distribution. Concretely, we consider a realizability condition where data and initialization are such that model weights can achieve arbitrarily small training error \(\varepsilon\) while their distance from initialization is \(g(\varepsilon)\) for some function \(g:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\). Under this condition, we demonstrate generalization guarantees of order \(O(\frac{g(\frac{1}{T})^{2}}{n}).\) More generally, for any iteration \(T\) of GD and assuming network width \(m=\Omega(g(\frac{1}{T})^{4})\), we obtain an expected test-loss rate \(O(\frac{g(\frac{1}{T})^{2}}{T}+\frac{g(\frac{1}{T})^{2}}{n})\). In addition to the generalization bounds, we provide optimization guarantees under the same setting by showing that the training loss approaches zero at rate \(O(\frac{g(\frac{1}{T})^{2}}{T})\). We note that these results are derived without NTK-type analyses. For demonstration and also for connection to prior works on neural-tangent data models, we specialize our generalization and optimization results to the class of NTK-separable data. We show this is possible because the NTK-data separability assumption implies our realizability condition holds. Thus, for logistic-loss minimization on NTK-separable data, we show that the expected test loss of GD is \(\tilde{O}(\frac{1}{T}+\frac{1}{n})\) provided polylogarithmic number of neurons \(m=\Omega(\log^{4}(T))\). This further suggests that a network of width \(m=\Omega(\log^{4}(n))\), attains expected test loss \(\tilde{O}(\frac{1}{n})\) after \(T\approx n\) iterations.
In contrast to prior optimization and generalization analyses that often depend on the NTK framework, which requires the first-order approximation of the model, we build on the algorithmic stability approach (Bousquet and Elisseeff, 2002) for shallow neural-network models of finite width. Although the stability analysis has been utilized in previous studies to derive generalization bounds for (stochastic) gradient descent in various models, most results that are rate-optimal heavily rely on the convexity assumption. Specifically, the stability-analysis framework has been successful in achieving optimal generalization bounds for convex objectives in (Lei and Ying, 2020; Bassily et al., 2020; Schliserman and Koren, 2022). On the other hand, previous studies on non-convex objectives either resulted in suboptimal bounds or relied on assumptions that are not in line with the actual practices of neural network training. For instance, (Hardt et al., 2016) derived a generalization bound of \(O(\frac{T^{\beta c/(\beta c+1)}}{n})\) for general \(\beta\)-smooth and non-convex objectives, but this required a time-decaying step-size \(\eta_{t}\leq c/t\), which can degrade the training performance. More recently, (Richards and Rabbat, 2021) explored the use of the stability approach specifically for logistic-loss minimization of a two-layer network. By refining the model-stability analysis framework introduced by (Lei and Ying, 2020), they derived generalization-error bounds provided the hidden width increases polynomially with the sample size. In comparison, our analysis leads to improved generalization and optimization rates and under standard separability conditions such as NTK-separability, only requires a polylogarithmic width for both global convergence and generalization.
### Notation
We denote \([n]:=\{1,2,\cdots,n\}\). We use the standard notation \(O(\cdot),\Omega(\cdot)\) and use \(\tilde{O}(\cdot),\tilde{\Omega}(\cdot)\) to hide polylogarithmic factors. Occasionally we use \(\lesssim\) to hide numerical constants. The Gradient and Hessian of a function \(\Phi:\mathbb{R}^{d_{1}\times d_{2}}\rightarrow\mathbb{R}\) with respect to the \(i\)th input (\(i=1,2\)) are denoted by \(\nabla_{i}\Phi\) and \(\nabla_{i}^{2}\Phi\), respectively. All logarithms are in base \(e\). We use \(\left\|\cdot\right\|\) for the \(\ell_{2}\) norm of vectors and the operator norm of matrices. We denote \([w_{1},w_{2}]:=\{w\;:\;w=\alpha w_{1}+(1-\alpha)w_{2},\alpha\in[0,1]\}\) the line segment between \(w_{1},w_{2}\in\mathbb{R}^{d^{\prime}}\).
## 2 Problem Setup
Given \(n\) i.i.d. samples \((x_{i},y_{i})\sim\mathcal{D},i\in[n]\) from data distribution \(\mathcal{D}\), we study unconstrained empirical risk minimization with objective \(\widehat{F}:\mathbb{R}^{d^{\prime}}\rightarrow\mathbb{R}\):
\[\min_{w\in\mathbb{R}^{d^{\prime}}}\Big{\{}\widehat{F}(w):=\frac{1}{n}\sum_{i=1 }^{n}\widehat{F}_{i}(w)=\frac{1}{n}\sum_{i=1}^{n}f\left(y_{i}\Phi\left(w,x_{i} \right)\right)\Big{\}}. \tag{1}\]
This serves as a proxy for minimizing the _test loss_\(F:\mathbb{R}^{d^{\prime}}\to\mathbb{R}\):
\[F(w):=\mathbb{E}_{(x,y)\sim\mathcal{D}}\left[f\left(y\Phi(w,x)\right)\right]. \tag{2}\]
We introduce our assumptions on the data \((x,y)\), the model \(\Phi(\cdot,x)\), and the loss function \(f(\cdot)\), below. We start by imposing the following mild assumption on the data distribution.
**Assumption 1** (Bounded features).: _Assume any \((x,y)\sim\mathcal{D}\) has almost surely bounded features, i.e. \(\|x\|\leq R\), and binary label \(y\in\{\pm 1\}\)._
The model \(\Phi:\mathbb{R}^{d^{\prime}}\times\mathbb{R}^{d}\to\mathbb{R}\) is parameterized by trainable weights \(w\in\mathbb{R}^{d^{\prime}}\) and takes input \(x\in\mathbb{R}^{d}\). For our main results, we assume \(\Phi\) is a one-hidden layer neural-net of \(m\) neurons, i.e.
\[\Phi(w,x):=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}a_{j}\,\sigma(\langle w_{j},x\rangle), \tag{3}\]
where \(\sigma:\mathbb{R}\to\mathbb{R}\) is the activation function, \(w_{j}\in\mathbb{R}^{d}\) denotes the weight vector of the \(j\)th hidden neuron and \(\frac{a_{j}}{\sqrt{m}},j\in[m]\) are the second-layer weights. The second-layer weights are fixed during training, taking values \(a_{j}\in\{\pm 1\}\): for half of them \(a_{j}=1\), and for the other half \(a_{j}=-1\). On the other hand, all the first-layer weights are updated during training. Thus, the total number of trainable parameters is \(d^{\prime}=md\) and we denote \(w=[w_{1};w_{2};\ldots;w_{m}]\in\mathbb{R}^{d^{\prime}}\) the vector of trainable weights. Throughout, we make the following assumptions on the activation function.
**Assumption 2** (Lipschitz and smooth activation).: _The activation function \(\sigma:\mathbb{R}\to\mathbb{R}\) satisfies the following for non-negative constants \(\ell,L\):_
\[|\sigma^{\prime}(u)|\leq\ell,\;\;\;|\sigma^{\prime\prime}(u)|\leq L,\qquad \forall u\in\mathbb{R}.\]
We note that the smoothness assumption which is required by our framework excludes the use of ReLU. Examples of activation functions that satisfy the smoothness condition include Softplus \(\sigma(u)=\log(1+e^{u})\), Gaussian error linear unit (GELU) \(\sigma(u)=\frac{1}{2}u(1+\operatorname{erf}(\frac{u}{\sqrt{2}}))\), and Hyperbolic-Tangent where \(\sigma(u)=\frac{e^{u}-e^{-u}}{e^{u}+e^{-u}}\). On the other hand, the Lipschitz assumption is rather mild, since it is possible to restrict the parameter space to a bounded domain.
Next, we discuss conditions on the loss function. Of primal interest is the commonly used logistic loss function \(f(u)=\log(1+e^{-u})\). However, our results hold for a broader class of convex, non-negative and monotonically decreasing functions (\(\lim_{u\to\infty}f(u)=0\)) that satisfy the following:
**Assumption 3** (Lipschitz and smooth loss).: _The convex loss function \(f:\mathbb{R}\to\mathbb{R}_{+}\) satisfies for all \(u\in\mathbb{R}\)_
_3.A:_ _Lipschitzness:_ \(|f^{\prime}(u)|\leq G_{f}\)_._
_3.B:_ _Smoothness:_ \(f^{\prime\prime}(u)\leq L_{f}\)_._
**Assumption 4** (Self-bounded loss).: _The convex loss function \(f:\mathbb{R}\to\mathbb{R}_{+}\) is self-bounded with some constant \(\beta_{f}>0\), i.e., \(|f^{\prime}(u)|\leq\beta_{f}f(u),\forall u\in\mathbb{R}\)._
The self-boundedness Assumption 4 is the key property of the loss that drives our analysis and justifies the polylogarithmic width requirement, as will become evident. Note that the logistic loss naturally satisfies Assumptions 3.A and 3.B (with \(G_{f}=1,L_{f}=1/4\)), as well as, Assumption 4 with \(\beta_{f}=1\). Other interesting examples of loss functions satisfying those assumptions include polynomial losses, with the tail behavior \(f(u)=1/u^{\beta}\) for \(\beta>0\), which we discuss in Remark 2. To lighten the notation and without loss of generality, we set \(G_{f}=L_{f}=\beta_{f}=1\) for the rest of the paper. We remark that our training-loss results also hold for the exponential loss \(e^{-u}\). The exponential loss is self-bounded and while it is not Lipschitz or smooth it satisfies a second-order self-bounded property \(f^{\prime\prime}(u)\leq f(u)\), which we can leverage instead; see Appendix A for details.
Main Results
We present bounds on the train loss and generalization gap of gradient-descent (GD) under the setting of Section 2. Formally, GD with step-size \(\eta>0\) optimizes (1) by performing the following updates starting from an initialization \(w_{0}\):
\[\forall t\geq 0\,:\ w_{t+1}=w_{t}-\eta\nabla\widehat{F}(w_{t}).\]
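The setting can be made concrete with a short simulation: a width-\(m\) network (3) with fixed \(\pm 1\) second layer and a smooth activation, trained by full-batch GD on the logistic loss. The data, width, and step size below are illustrative assumptions.

```python
# A minimal sketch (synthetic data assumed) of full-batch GD on the
# objective (1): two-layer net (3) with fixed +/-1 second layer, softplus
# activation, and logistic loss f(u) = log(1 + exp(-u)).
import torch

torch.manual_seed(0)
n, d, m, eta, T = 200, 10, 64, 0.5, 500
X = torch.randn(n, d)
X = X / X.norm(dim=1, keepdim=True)                  # ||x_i|| <= R = 1
y = torch.sign(X[:, 0] + 0.1)                        # toy binary labels
a = torch.cat([torch.ones(m // 2), -torch.ones(m // 2)])  # fixed second layer
W = 0.1 * torch.randn(m, d, requires_grad=True)      # trainable weights, w_0

def train_loss(W):
    phi = torch.nn.functional.softplus(X @ W.T) @ a / m ** 0.5  # Eq. (3)
    return torch.nn.functional.softplus(-y * phi).mean()        # Eq. (1)

for t in range(T):
    loss = train_loss(W)
    g, = torch.autograd.grad(loss, W)
    with torch.no_grad():
        W -= eta * g                                 # w_{t+1} = w_t - eta*grad
```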
### Key properties
The key challenge in both the optimization and generalization analysis is the non-convexity of \(f(y\Phi(\cdot,x))\), and consequently of the train loss \(\widehat{F}(\cdot)\). Despite non-convexity, we derive bounds analogous to the convex setting, e.g. corresponding bounds on linear logistic regression in (Ji and Telgarsky, 2018; Shamir, 2021; Schliserman and Koren, 2022). We show this is possible provided the loss satisfies the following key property, which we call _self-bounded weak convexity_.
**Definition 1** (Self-bounded weak convexity).: _We say a function \(\widehat{F}:\mathbb{R}^{d^{\prime}}\to\mathbb{R}\) is self-bounded weakly convex if there exists constant \(\kappa>0\) such that for all \(w\),_
\[\lambda_{\min}\left(\nabla^{2}\widehat{F}(w)\right)\geq-\kappa\, \widehat{F}(w)\,. \tag{4}\]
Recall a function \(G:\mathbb{R}^{d^{\prime}}\to\mathbb{R}\) is weakly convex if \(\exists\kappa\geq 0\) such that uniformly over all \(w\in\mathbb{R}^{d^{\prime}}\), \(\lambda_{\min}\left(\nabla^{2}G(w)\right)\geq-\kappa.\) If \(\kappa=0\), the function is convex. Instead, property (4) lower bounds the curvature by \(-\kappa\,G(w)\) that changes proportionally with the function value \(G(w)\). We explain below how this is exploited in our setting.
To begin with, the following lemma shows that property (4) holds for the train loss under the setting of Section 2: training of a two-layer net with smooth activation and self-bounded loss. The lemma also shows that the gradient of the train loss is self bounded. Those two properties together summarize the key ingredients for which our analysis applies.
**Lemma 1** (Key self-boundedness properties).: _Consider the setup of Section 2 and let Assumptions 1-2 hold. Further assume the loss is self-bounded as per Assumption 4. Then, the objective satisfies the following self-boundedness properties for its Gradient and Hessian:_
1. _Self-bounded gradient:_ \(\left\|\nabla\widehat{F}_{i}(w)\right\|\leq\ell R\,\widehat{F}_{i}(w),\ \ \forall i\in[n]\)_._
2. _Self-bounded weak convexity:_ \(\lambda_{\min}\left(\nabla^{2}\widehat{F}(w)\right)\geq-\frac{LR^{2}}{\sqrt{ m}}\widehat{F}(w)\)_._
Both of these properties follow from the self-boundedness of the convex loss \(f\) combined with the Lipschitzness and smoothness of \(\sigma\). The self-boundedness of the gradient is used for the generalization analysis and in particular in obtaining the model stability bound. The self-bounded weak convexity plays an even more critical role for our optimization and generalization results. In particular, the wider the network the closer the loss to having convex-like properties. Moreover, the "self-bounded" feature of this property provides another mechanism that favors convex-like optimization properties of the loss. To see this, consider the minimum Hessian eigenvalue \(\lambda_{\min}(\nabla^{2}\widehat{F}(w_{t}))\) at gradient descent iterates \(\{w_{t}\}_{t\geq 1}\): As training progresses, the train loss \(\widehat{F}(w_{t})\) decreases, and thanks to the self-bounded weak convexity property, the gap to convexity also decreases. We elaborate on the role of self-bounded weak convexity in our proofs in Section 5.
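The self-bounded gradient property of Lemma 1 is easy to verify numerically. The sketch below checks \(\|\nabla\widehat{F}_{i}(w)\|\leq\ell R\,\widehat{F}_{i}(w)\) for a random sample with \(\tanh\) activation (\(\ell=1\)), \(R=1\), logistic loss (\(\beta_{f}=1\)), and assumed random weights.

```python
# A minimal numerical check of Lemma 1(a): ||grad F_i(w)|| <= ell*R*F_i(w)
# with tanh activation (ell = 1), ||x|| = R = 1, and logistic loss.
import torch

torch.manual_seed(1)
d, m = 5, 32
x = torch.randn(d)
x = x / x.norm()                                     # R = 1
y = 1.0
a = torch.cat([torch.ones(m // 2), -torch.ones(m // 2)])
W = torch.randn(m, d, requires_grad=True)            # an arbitrary w

phi = (a * torch.tanh(W @ x)).sum() / m ** 0.5       # Eq. (3)
F_i = torch.nn.functional.softplus(-y * phi)         # sample loss F_i(w)
g, = torch.autograd.grad(F_i, W)
assert g.norm() <= F_i.item() + 1e-6                 # ell * R = 1
```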
### Training loss
We begin with a general bound on the training loss and the parameter's norm, which is also required for our generalization analysis.
**Theorem 2** (Training loss - General bound).: _Suppose Assumptions 1-4 hold. Fix any training horizon \(T\geq 0\) and any step-size \(\eta\leq 1/L_{\widehat{F}}\) where \(L_{\widehat{F}}\) is the objective's smoothness parameter. Assume any \(w\in\mathbb{R}^{d^{\prime}}\) and hidden-layer width \(m\) such that \(\|w-w_{0}\|^{2}\geq\max\{\eta T\widehat{F}(w),\eta\widehat{F}(w_{0})\}\) and \(m\geq 18^{2}L^{2}R^{4}\|w-w_{0}\|^{4}\). Then, the training loss and the parameters' norm satisfy_
\[\widehat{F}(w_{T}) \;\leq\;\frac{1}{T}\sum_{t=1}^{T}\widehat{F}(w_{t})\;\leq\;2 \widehat{F}(w)+\frac{5\|w-w_{0}\|^{2}}{2\eta T}, \tag{5}\] \[\forall t\in[T] :\;\;\|w_{t}-w_{0}\|\;\leq\;4\|w-w_{0}\|.\]
A few remarks are in place regarding the theorem. First, Eq. (5) upper bounds the running average (also known as regret) of the train loss for iterations \(1,\ldots,T\) by the value, at an arbitrarily chosen point \(w\), of a ridge-regularized objective with regularization parameter inversely proportional to \(\eta T\). Because of the smoothness and Lipschitz Assumption 3 on \(f\), it turns out that the training objective is \(L_{\widehat{F}}\)-smooth. Hence, by the descent lemma of GD for smooth functions, the same upper bound holds in Eq. (5) for the value of the loss at time \(T\), as well. Moreover, the theorem provides a uniform upper bound on the norm of all GD iterates in terms of \(\|w-w_{0}\|\). Notably, and despite the non-convexity in our setting, our bounds are the same, up to constants, as analogous bounds for logistic linear regression in (Shamir, 2021; Schliserman and Koren, 2022). As discussed in Sec. 3.1, this is possible thanks to the self-bounded weak convexity property.
The condition \(m\gtrsim\|w-w_{0}\|^{4}\) on the norm of the weights controls the maximum deviations of weights \(w\) from initialization (with respect to network width) required for our results to guarantee arbitrarily small train loss. Specifically, to get the most out of Theorem 2 we need to choose appropriate \(w\) that satisfies both the condition \(m\gtrsim\|w-w_{0}\|^{4}\) and keeps the associated ridge-regularized loss \(\widehat{F}(w)+\|w-w_{0}\|^{2}/(\eta T)\) small. This combined requirement is formalized in the neural-net realizability Assumption 5 below. As we will discuss later in Section 4, this assumption translates into an assumption on the underlying data distribution that ultimately enables the application of Theorem 2 to achieve vanishing training error.
**Assumption 5** (Nn-Realizability).: _There exists a decreasing function \(g:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) which measures the norm of deviations from initialization of models that achieve arbitrarily small training error._
_Formally, for almost surely all \(n\) training samples and for any sufficiently small \(\varepsilon>0\) there exists \(w^{(\varepsilon)}\in\mathbb{R}^{d^{\prime}}\) such that_
\[\widehat{F}(w^{(\varepsilon)})\leq\varepsilon,\;\;\;\text{and} \;\;\;g(\varepsilon)=\left\|w^{(\varepsilon)}-w_{0}\right\|.\]
Since Assumption 5 holds for arbitrarily small \(\varepsilon\), it guarantees that the model has enough capacity to interpolate the data, i.e., attain train error that is arbitrarily small (\(\varepsilon\)). Additionally, this is accomplished for model weights whose distance from initialization is managed by the function \(g(\varepsilon)\). By using these model weights to select \(w\) in Theorem 2 we obtain train loss bounds for interpolating models.
**Theorem 3** (Training loss under interpolation).: _Let Assumptions 1-5 hold. Let \(\eta\leq\min\{\frac{1}{L_{\widehat{F}}},g(1)^{2},\frac{g(1)^{2}}{\widehat{F}( w_{0})}\}\) and assume the width satisfies \(m\geq 18^{2}L^{2}R^{4}\,g(\frac{1}{T})^{4}\) for a fixed training horizon \(T\). Then,_
\[\widehat{F}(w_{T})\;\leq\frac{2}{T}+\frac{5\,g(\frac{1}{T})^{2}} {2\eta T}, \tag{6}\] \[\forall t\in[T]\;:\;\;\|w_{t}-w_{0}\|\;\leq\;4\,g(\frac{1}{T}).\]
To interpret the theorem's conclusions suppose that the function \(g(\cdot)\) of Assumption 5 is at most logarithmic; i.e., \(g(\frac{1}{T})=O(\log(T))\). Then, Theorem 3 implies that \(m=\Omega(\log^{4}(T))\) neurons suffice to achieve train loss \(\tilde{O}(\frac{1}{T})\) while GD iterates at all iterations satisfy \(\|w_{t}-w_{0}\|=O(\log(T))\). In Section 4 (see also Remark 1), we will give examples of data separability conditions that guarantee the desired logarithmic growth of \(g(\cdot)\) for logistic loss minimization, which in turn imply the favorable convergence guarantees described above. Under the same conditions we will show that the step-size requirement simplifies to
\(\min\{3,1/L_{\widehat{F}}\}\) (see Corollary 6.1). Finally, we remark that Theorem 3 provides sufficient parameterization conditions under which GD with \(T=\tilde{\Omega}(n)\) iterations finds weights \(w_{T}\) that yield an interpolating classifier and thus, achieve zero training error. To see this, assume logistic loss and observe setting \(T\gtrsim n\) in Eq. (6) gives \(\widehat{F}(w_{T})\leq\log(2)/n\). This in turn implies that every sample loss satisfies \(\widehat{F}_{i}(w_{T})\leq\log(2)\), equivalently \(y_{i}=\operatorname{sign}\left(\Phi(w_{T},x_{i})\right)\).
### Generalization
Our main result below bounds the generalization gap of GD for training two-layer nets with self-bounded loss functions. We remark that all expectations that appear below are over the training set.
**Theorem 4** (Generalization gap - General bound).: _Suppose Assumptions 1-4 hold. Fix any time horizon \(T\geq 1\) and any step size \(\eta\leq 1/L_{\widehat{F}}\) where \(L_{\widehat{F}}\) is the objective's smoothness parameter. Let any \(w\in\mathbb{R}^{d^{\prime}}\) such that \(\|w-w_{0}\|^{2}\geq\max\{\eta T\,\widehat{F}(w),\eta\widehat{F}(w_{0})\}.\) Suppose hidden-layer width \(m\) satisfies \(m\geq 64^{2}L^{2}R^{4}\|w-w_{0}\|^{4}\). Then, the generalization gap of GD at iteration \(T\) is bounded as_
\[\mathbb{E}\Big{[}F(w_{T})-\widehat{F}(w_{T})\Big{]}\leq\frac{8\ell^{2}R^{2}} {n}\,\mathbb{E}\left[\eta T\,\widehat{F}(w)+2\|w-w_{0}\|^{2}\right].\]
A few remarks regarding the theorem are in place. The theorem's assumptions are similar to those in Theorem 2, which bounds the training loss. The condition \(\|w-w_{0}\|^{2}\geq\max\{\eta T\widehat{F}(w),\eta\widehat{F}(w_{0})\}\) needs to hold almost surely over the training data, which is non-restrictive, as in later applications of the theorem, the choice of \(w\) arises from Assumption 5. The condition \(m\geq 64^{2}L^{2}R^{4}\|w-w_{0}\|^{4}\) on the width of the network, is also the same as that of Theorem 2 but with a larger constant. This means that the last-iterate train loss bound from Theorem 2 (Eq. (5)) holds under the setting of Theorem 4. Hence, it applies to the expected train loss \(\mathbb{E}[\widehat{F}(w_{T})]\) and, combined with the generalization-gap bound, yields a bound on the expected test loss \(\mathbb{E}[F(w_{T})]\).
To optimize the bound, a proper \(w\) must be selected by minimizing the population version of a ridge-regularized training objective. In interpolation settings, the procedure for selecting \(w\) follows the same guidelines as in Assumption 5 and in a similar style as obtaining Theorem 3.
**Theorem 5** (Generalization gap under interpolation).: _Let Assumptions 1-5 hold. Fix \(T\geq 1\) and let \(m\geq 64^{2}L^{2}R^{4}\,g(\frac{1}{T})^{4}\). Then, for any \(\eta\leq\min\{\frac{1}{L_{F}},g(1)^{2},\frac{g(1)^{2}}{\widehat{F}(w_{0})}\}\) the expected generalization gap at iteration \(T\) satisfies_
\[\mathbb{E}\Big{[}F(w_{T})-\widehat{F}(w_{T})\Big{]}\leq\frac{24\ell^{2}R^{2} \,g(\frac{1}{T})^{2}}{n}\,. \tag{7}\]
Note the width condition is similar in order to that of Theorem 3. Thus, provided \(g(\frac{1}{T})\lesssim\log(T)\) (see Remark 1 and Section 4 for examples), we obtain a generalization gap of order \(\tilde{O}(\frac{1}{n})\) with \(m=\Omega(\log^{4}(T))\) neurons. Combined with the training loss guarantees from Theorem 3, we have test loss rate \(\tilde{O}(\frac{1}{T}+\frac{1}{n})\). This further implies that with \(m\approx\log^{4}(n)\) neurons and \(T=n\) iterations, the test loss reaches the optimal rate of \(\tilde{O}(\frac{1}{n})\). On the other hand, previous stability-based generalization bounds (e.g., (Richards and Rabbat, 2021)) required polynomial width \(m\gtrsim T^{2}\) and eventually obtained sub-optimal generalization rates of order \(O(\frac{T}{n})\). We further discuss the technical novelties resulting in these improvements in Section 5.
**Remark 1** (Example: Linearly-separable data).: _Consider logistic-loss minimization, \(\tanh\) activation \(\sigma(u)=\frac{e^{u}-e^{-u}}{e^{u}+e^{-u}}\) and data distribution that is linearly separable with margin \(\gamma\), i.e., for almost surely all \(n\) samples there exists unit-norm vector \(v^{\star}\in\mathbb{R}^{d}\) such that \(\forall i\in[n]:y_{i}\langle v^{\star},x_{i}\rangle\geq\gamma\). We initialize the weights to zero, i.e., \(w_{0}=0\), and show that the realizability Assumption 5 naturally holds in this setting. To see this, for any fixed \(\varepsilon>0,\) set \(\alpha=\frac{2(\log(1/\varepsilon))}{\gamma\sqrt{m}}\) and assume \(m\geq 4\log^{2}(1/\varepsilon)\). With this choice, select weights \(w_{j}^{(\varepsilon)}:=\alpha v^{\star},a_{j}=\frac{1}{\sqrt{m}}\) for \(j\in\{1,\cdots,\frac{m}{2}\}\) and \(w_{j}^{(\varepsilon)}:=-\alpha v^{\star},a_{j}=\frac{-1}{\sqrt{m}}\) for \(j\in\{\frac{m}{2}+1,\cdots,m\}\). Then, the
model output for any sample \((x_{i},y_{i})\) satisfies_
\[y_{i}\Phi(w^{(\varepsilon)},x_{i})=\frac{y_{i}\sqrt{m}}{2}\left(\sigma(\alpha \langle v^{\star},x_{i}\rangle)-\sigma(-\alpha\langle v^{\star},x_{i}\rangle) \right)=y_{i}\sqrt{m}\sigma(\alpha\langle v^{\star},x_{i}\rangle)\geq\sqrt{m} \sigma(\alpha\gamma)\geq\frac{\sqrt{m}}{2}\alpha\gamma=\log(\frac{1}{ \varepsilon})\]
_where the first equality uses the fact that \(\tanh\) is odd, the first inequality follows by the increasing nature of \(\tanh\) and data separability, and the last inequality follows since \(\alpha\gamma\leq 1\) and \(\sigma(u)\geq u/2\) for all \(u\in[0,1]\). Thus, the loss satisfies \(\widehat{F}(w^{(\varepsilon)})\leq\varepsilon\) since the logistic function satisfies \(\log(1+e^{-u})\leq e^{-u}.\) Moreover, our choice of \(\alpha\) implies \(g(\varepsilon)=\|w^{(\varepsilon)}-w_{0}\|=\|w^{(\varepsilon)}\|=\alpha\sqrt {m}=2\log(1/\varepsilon)/\gamma\). To conclude, the NN-Realizability Assumption 5 holds with \(g(\varepsilon)=2\log(1/\varepsilon)/\gamma\) and thus applying Theorems 3, 5 shows that with \(m=\Omega(\log^{4}(T))\) neurons, the training loss and generalization gap are bounded by \(\tilde{O}(\frac{1}{\gamma^{2}T})\) and \(\tilde{O}(\frac{1}{\gamma^{2}n})\), respectively. We note that the same conclusion as above holds for other smooth activations such as Softmax or GELU._
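To make the construction concrete, the following sketch (our own illustration; the sampling scheme and the values of \(d\), \(\gamma\), \(\varepsilon\) are assumptions) instantiates \(w^{(\varepsilon)}\) as prescribed above and verifies numerically that every margin \(y_{i}\Phi(w^{(\varepsilon)},x_{i})\) exceeds \(\log(1/\varepsilon)\), so that \(\widehat{F}(w^{(\varepsilon)})\leq\varepsilon\).

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma, eps = 10, 0.2, 1e-3

# Unit-norm data (R = 1) that are linearly separable with margin gamma along v_star.
v_star = np.zeros(d); v_star[0] = 1.0
X = rng.normal(size=(800, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
s = X @ v_star
mask = np.abs(s) >= gamma                       # keep only points with margin >= gamma
X, y = X[mask], np.sign(s[mask])

m = 4 * int(np.ceil(np.log(1 / eps) ** 2))      # width condition m >= 4 log^2(1/eps)
m += m % 2                                      # make the width even
alpha = 2 * np.log(1 / eps) / (gamma * np.sqrt(m))
W = np.vstack([np.tile(+alpha * v_star, (m // 2, 1)),    # w_j = +alpha v*, j <= m/2
               np.tile(-alpha * v_star, (m // 2, 1))])   # w_j = -alpha v*, j >  m/2
a = np.concatenate([np.ones(m // 2), -np.ones(m // 2)]) / np.sqrt(m)

margins = y * (np.tanh(X @ W.T) @ a)            # y_i * Phi(w^(eps), x_i)
train_loss = np.mean(np.log1p(np.exp(-margins)))   # logistic loss
print(margins.min() >= np.log(1 / eps), train_loss <= eps)   # True True
```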
## 4 On Realizability of NTK-Separable Data
In this section, we interpret our results for NTK-separable data by showing that our realizability condition holds for this class. We recall the definition of NTK-separability below (Nitanda et al., 2019; Chen et al., 2020; Cao and Gu, 2020).
**Assumption 6** (Separability by NTK).: _For almost surely all \(n\) training samples from the data distribution there exists \(w^{\star}\in\mathbb{R}^{d^{\prime}}\) and \(\gamma>0\) such that \(\|w^{\star}\|=1\) and for all \(i\in[n]\),_
\[y_{i}\Big{\langle}\nabla_{1}\Phi(w_{0},x_{i}),w^{\star}\Big{\rangle}\geq\gamma. \tag{8}\]
We also assume a bound on the model's output at initialization. Similar assumptions, but for the value of the loss, also appear in prior works that study generalization using the algorithmic stability framework (Richards and Kuzborskij, 2021; Lei et al., 2022).
**Assumption 7** (Initialization bound).: _There exists a parameter \(C\) such that, almost surely over the \(n\) training samples from the data distribution, \(|\Phi(w_{0},x_{i})|\leq C\) for all \(i\in[n]\)._
The next proposition relates the NTK-separability assumption to our realizability assumption. The proofs for this section are given in Appendix C.
**Proposition 6** (Realizability of NTK-separable data).: _Let Assumptions 1-2,6-7 hold. Assume \(f(\cdot)\) to be the logistic loss. Fix \(\varepsilon>0\) and let \(m\geq\frac{L^{2}R^{4}}{4\gamma^{4}C^{2}}(2C+\log(1/\varepsilon))^{4}\). Then the realizability Assumption 5 holds with \(g(\varepsilon)=\frac{1}{\gamma}(2C+\log(1/\varepsilon))\). In other words, there exists \(w^{(\varepsilon)}\) such that_
\[\widehat{F}(w^{(\varepsilon)})\leq\varepsilon,\ \ \text{and}\ \ \ \left\|w^{( \varepsilon)}-w_{0}\right\|=\frac{1}{\gamma}\left(2C+\log(1/\varepsilon) \right). \tag{9}\]
Having established realizability, the following is an immediate corollary of the general results presented in the last section.
**Corollary 6.1** (Results under NTK-separability).: _Let Assumptions 1-2,6-7 hold and assume logistic loss. Suppose \(m\geq\frac{64^{2}L^{2}R^{4}}{\gamma^{4}}(2C+\log(T))^{4}\) for a fixed training horizon \(T\). Then for any \(\eta\leq\min\{3,\frac{1}{L_{\widehat{F}}}\}\), the training loss and generalization gap are bounded as follows:_
\[\widehat{F}(w_{T})\leq\frac{5(2C+\log(T))^{2}}{\gamma^{2}\eta T},\] \[\mathbb{E}\left[F(w_{T})-\widehat{F}(w_{T})\right]\leq\frac{24 \ell^{2}R^{2}}{\gamma^{2}n}(2C+\log(T))^{2}.\]
A few remarks are in order regarding the corollary. By Corollary 6.1, we can conclude that the expected generalization rate of GD on logistic loss and NTK-separable data as per Assumption 6 is \(\tilde{O}(\frac{1}{n})\) provided
width \(m=\Omega(\log^{4}(T))\). Moreover, the expected training loss is \(\mathbb{E}[\widehat{F}(w_{T})]=\tilde{O}(\frac{1}{T})\). Thus, the expected test loss after \(T\) steps is \(\tilde{O}(\frac{1}{T}+\frac{1}{n})\). In particular, for \(T=\Omega(n)\), the expected test loss becomes \(\tilde{O}(\frac{1}{n})\). This rate is optimal with respect to sample size and only requires polylogarithmic hidden width with respect to \(n\), specifically, \(m=\Omega(\log^{4}(n))\). Notably, it represents an improvement over prior stability results, e.g., (Richards and Rabbat, 2021), which required polynomial width and yielded suboptimal generalization rates of order \(O(T/n)\). It is worth noting that the test loss bound's dependence on the margin, particularly the \(\frac{1}{\gamma^{2}n}\)-rate obtained in our analysis, bears similarity to the corresponding results in the convex setting of linearly separable data recently established in (Shamir, 2021; Schliserman and Koren, 2022). Additionally, our results improve upon corresponding bounds for neural networks obtained via Rademacher complexity analysis (Ji and Telgarsky, 2020; Chen et al., 2020) which yield generalization rates \(\tilde{O}(\frac{1}{\sqrt{n}})\). Moreover, these works have a \(\gamma^{-8}\) dependence on margin for the minimum network width, whereas in Corollary 6.1 this is reduced to \(\gamma^{-4}\). We also note that in general, both \(\gamma\) and \(C\) may depend on the data distribution, the data dimension, or the nature of initialization. This is demonstrated in the next section where we apply the corollary above to the noisy XOR data distribution and Gaussian initialization.
**Remark 2** (Benefits of exponential tail).: _We have stated Corollary 6.1 for the logistic loss, which has an exponential tail behavior. For general self-bounded loss functions and by following the same steps, we can show a bound on generalization gap of order \(O(\frac{1}{n}(f^{-1}(\frac{1}{T}))^{2})\) provided \(m=\Omega((f^{-1}(\frac{1}{T}))^{4})\). Hence, the tail behavior of \(f\) controls both the generalization gap and minimum width requirement. In particular, under Assumption 6, polynomial losses with tail behavior \(f(u)\sim 1/u^{\beta}\) result in generalization gap \(O(T^{2/\beta}/n)\) for \(m=\Omega(T^{4/\beta})\). Thus, increasing the rate of decay \(\beta\) of the loss improves both the generalization and the width bounds. This suggests the benefits of self-bounded fast-decaying losses such as exponentially-tailed loss functions for which the dependence on \(T\) is indeed only logarithmic._
### Example: Noisy XOR data
Next, we specialize the results of the last section to the noisy XOR data distribution (Wei et al., 2019) and derive the corresponding margin and test-loss bounds. Consider the following \(2^{d}\) points,
\[x_{i}=(x_{i}^{1},x_{i}^{2},\cdots,x_{i}^{d})\in\{(1,0),(0,1),(-1,0),(0,-1)\} \times\{-1,1\}^{d-2},\]
where \(\times\) denotes the Cartesian product and the labels are determined as \(y_{i}=-1\) if \(x_{i}^{1}=0\) and \(y_{i}=1\) if \(x_{i}^{1}=\pm 1\). Moreover, consider normalization \(\overline{x}_{i}=\frac{1}{\sqrt{d-1}}x_{i}\) so that \(R=1.\) The noisy XOR data distribution is the uniform distribution over the set with elements \((\overline{x}_{i},y_{i})\). For this dataset and Gaussian initialization, (Ji and Telgarsky, 2020) have shown for ReLU activation that the NTK-separability assumption holds with margin \(\gamma=\Omega(1/d)\). In the next result, we compute the margin for activation functions that are convex, Lipschitz and locally strongly convex.
**Proposition 7** (Margin).: _Consider the noisy XOR data \((\overline{x}_{i},y_{i})\in\mathbb{R}^{d}\times\{\pm 1\}\). Assume the activation function is convex, \(\ell\)-Lipschitz and \(\mu\)-strongly convex in the interval \([-2,2]\) for some \(\mu>0\), i.e., \(\min_{t\in[-2,2]}\sigma^{\prime\prime}(t)\geq\mu\). Moreover, assume Gaussian initialization \(w_{0}\in\mathbb{R}^{d^{\prime}}\) with entries iid \(N(0,1)\). If \(m\geq\frac{80^{2}d^{3}\ell^{2}}{2\mu^{2}}\log(2/\delta)\), then with probability at least \(1-\delta\) over the initialization, the NTK-separability Assumption 6 is satisfied with margin \(\gamma=\frac{\mu}{80d}\)._
An interesting example of an activation function that satisfies the mentioned assumptions is the Softplus activation where \(\sigma(u)=\log(1+e^{u})\). This activation function has \(\mu=0.1\) and \(\ell=1\), and it is also smooth with \(L=1/4\). Therefore, the results on generalization and training loss presented in Corollary 6.1 hold for it. For noisy XOR data, Proposition 7 shows the margin in Assumption 6 is \(\gamma\gtrsim 1/d\). Additionally, for standard Gaussian initialization we have by Lemma C.5 that with high probability the initialization bound in Assumption 7 satisfies \(C\lesssim\sqrt{d}\). Putting these together and applying Corollary 6.1 shows that GD with \(n\) training samples reaches test loss rate \(\tilde{O}(\frac{d^{3}}{n})\) after \(T\approx n\) iterations and given \(m=\tilde{\Omega}(d^{6})\) neurons. It is worth noting that the number of training samples can be exponentially large with respect to \(d\). In this case the minimum width requirement is only polylogarithmic in \(n\).
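For concreteness, the noisy XOR distribution defined above can be sampled as follows (a minimal sketch; the function name `noisy_xor` and the parameter values are ours).

```python
import numpy as np

def noisy_xor(d, n, seed=0):
    """Sample n points from the noisy XOR distribution in R^d (d >= 3)."""
    rng = np.random.default_rng(seed)
    heads = np.array([(1, 0), (0, 1), (-1, 0), (0, -1)])   # first two coordinates
    h = heads[rng.integers(0, 4, size=n)]                  # pick one of the 4 heads
    tail = rng.choice([-1, 1], size=(n, d - 2))            # remaining +-1 coordinates
    x = np.hstack([h, tail]).astype(float)
    y = np.where(x[:, 0] == 0, -1, 1)                      # label rule: y = -1 iff x^1 = 0
    x /= np.sqrt(d - 1)                                    # normalize so that R = 1
    return x, y

x, y = noisy_xor(d=8, n=5)
print(np.linalg.norm(x, axis=1))   # all norms equal 1
print(y)
```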
## 5 Proof Sketches
We discuss here high-level proof ideas for both optimization and generalization bounds of Theorems 2 and 4. Formal proofs are deferred to Appendices A and B.
### Training loss
As already discussed in Section 3.1, the key insight we use to obtain bounds that are analogous to results for optimizing convex objectives is to exploit the self-bounded weak convexity property of the objective in Eq. (4). Thanks to this property, the Hessian minimum eigenvalue \(\lambda_{\min}(\nabla^{2}\widehat{F}(w_{t}))\) becomes less negative at the same rate at which the train loss \(\widehat{F}(w_{t})\) decreases.
The technical challenge in formalizing this intuition arises as follows. Controlling the rate at which \(\widehat{F}(w_{t})\) converges to \(\widehat{F}(w)\) for the theorem's \(w\) requires controlling the Hessian at _all_ intermediate points \(w_{\alpha t}:=\alpha w_{t}+(1-\alpha)w,\alpha\in[0,1]\) between \(w\) and GD iterates \(w_{t}\). This is due to Taylor's theorem used to relate \(\widehat{F}(w_{t})\) to the target value \(\widehat{F}(w)\) as follows:
\[\widehat{F}(w)\geq\widehat{F}(w_{t})+\left\langle\nabla\widehat{F}(w_{t}),w-w _{t}\right\rangle+\frac{1}{2}\,\lambda_{\min}\left(\nabla^{2}\widehat{F}(w_{ \alpha t})\right)\left\|w-w_{t}\right\|^{2}.\]
Thus from self-bounded weak convexity, to control the last term above we need to control \(\widehat{F}(w_{\alpha t})\) for any intermediate point \(w_{\alpha t}\) along the GD trajectory. This is made possible by establishing the following generalized local quasi-convexity property.
**Proposition 8** (Generalized Local Quasi-Convexity).: _Suppose \(\widehat{F}:\mathbb{R}^{d^{\prime}}\to\mathbb{R}\) satisfies the self-bounded weak convexity property in Eq. (4) with parameter \(\kappa\). Let \(w_{1},w_{2}\in\mathbb{R}^{d^{\prime}}\) be two arbitrary points with distance \(\|w_{1}-w_{2}\|\leq D<\sqrt{2/\kappa}\). Set \(\tau:=\left(1-\kappa D^{2}/2\right)^{-1}\). Then,_
\[\max_{v\in[w_{1},w_{2}]}\widehat{F}(v)\leq\tau\cdot\max\{\widehat{F}(w_{1}), \widehat{F}(w_{2})\}. \tag{10}\]
Recall that quasi-convex functions satisfy Eq. (10) with \(\tau=1\) and arbitrarily large \(D\). Proposition 8 indicates that our neural-net objective function is approximately quasi-convex (since \(\tau>1\)) and that this property holds locally, i.e., provided that \(w_{1},w_{2}\) are sufficiently close.
Applying (10) for \(w_{1}=w_{t},w_{2}=w\) allows controlling \(\widehat{F}(w_{\alpha t})\) in terms of the train loss \(\widehat{F}(w_{t})\) and the target loss \(\widehat{F}(w)\). The only additional requirement in Proposition 8 for this to hold is that
\[1/\kappa\propto\sqrt{m}\gtrsim\|w_{t}-w\|^{2}. \tag{11}\]
This condition exactly determines the required neural-net width. Formally, we have the following.
**Corollary 8.1** (GLQC of sufficiently wide neural nets).: _Let Assumptions 1, 2, and 4 hold. Fix arbitrary \(w_{1},w_{2}\in\mathbb{R}^{d^{\prime}}\), any constant \(\lambda>1\), and \(m\) large enough such that \(\sqrt{m}\geq\lambda\frac{LR^{2}}{2}\|w_{1}-w_{2}\|^{2}\). Then,_
\[\max_{v\in[w_{1},w_{2}]}\widehat{F}(v)\leq\left(1-1/\lambda\right)^{-1}\cdot \max\{\widehat{F}(w_{1}),\widehat{F}(w_{2})\}. \tag{12}\]
To conclude, using Corollary 8.1, we can show the regret bound in Eq. (5) provided (by (11)) that \(\sqrt{m}\gtrsim\|w_{t}-w\|^{2}\) is true for all \(t\in[T].\) To make the width requirement independent of \(w_{t}\), we then use a recursive argument to prove that \(\|w_{t}-w\|\leq 3\|w-w_{0}\|\). Put together, these lead to the parameter bound \(\|w_{t}-w_{0}\|\leq 4\|w-w_{0}\|\) and the width requirement \(\sqrt{m}\gtrsim\|w-w_{0}\|^{2}\) in the theorem's statement. We note that the GLQC property is also crucially required for the generalization analysis which we discuss next.
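The GLQC property is also easy to probe numerically. The sketch below (our own toy setup: unit-norm random data, \(\tanh\) activation with \(|\sigma^{\prime\prime}|\leq 0.77\), fixed \(\pm 1/\sqrt{m}\) output layer) scans the logistic loss along a segment \([w_{1},w_{2}]\) and checks the inequality of Corollary 8.1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 100, 5, 400
X = rng.normal(size=(n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)  # R = 1
y = rng.choice([-1.0, 1.0], size=n)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)       # fixed second layer

def loss(w):                                            # w has shape (m, d)
    phi = np.tanh(X @ w.T) @ a                          # Phi(w, x_i) for all i
    return np.mean(np.log1p(np.exp(-y * phi)))          # logistic empirical loss

L_act, R = 0.77, 1.0                                    # max |tanh''| <= 0.77, unit-norm data
w1 = rng.normal(size=(m, d))
w2 = w1 + 0.5 * rng.normal(size=(m, d)) / np.sqrt(m * d)   # keep ||w1 - w2|| small
D = np.linalg.norm(w1 - w2)
kappa = L_act * R**2 / np.sqrt(m)                       # self-bounded weak-convexity parameter
tau = 1.0 / (1.0 - kappa * D**2 / 2.0)                  # valid since kappa * D^2 < 2 here

seg = [loss((1 - t) * w1 + t * w2) for t in np.linspace(0, 1, 101)]
print(max(seg) <= tau * max(loss(w1), loss(w2)))        # GLQC check: prints True
```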
### Generalization gap
We bound the generalization gap using stability analysis (Bousquet and Elisseeff, 2002; Hardt et al., 2016). In particular, we use (Lei and Ying, 2020, Thm. 2) that relates the generalization gap to the "on average model stability". Formally, let \(w_{t}^{\neg i}\) denote the \(t\)-th iterate of GD on the leave-one-out loss \(\widehat{F}^{\neg i}(w):=\frac{1}{n}\sum_{j\neq i}\widehat{F}_{j}(w)\). As before, \(w_{t}\) denotes the GD output on full-batch loss \(\widehat{F}\). We will use the fact (see Corollary D.2.1) that \(f(y\Phi(\cdot,x))\) is \(G_{\widehat{F}}\)-Lipschitz with \(G_{\widehat{F}}=\ell R\) under Assumptions 2 and 3.A. Then, using (Lei and Ying, 2020, Thm. 2(a)) (cf. Lemma B.3) it holds that
\[\mathbb{E}\Big{[}F(w_{T})-\widehat{F}(w_{T})\Big{]}\leq 2G_{\widehat{F}}\ \mathbb{E}\Big{[}\frac{1}{n}\sum_{i=1}^{n}\lVert w_{T}-w_{T}^{\neg i}\rVert \Big{]}. \tag{13}\]
In order to bound the on-average model-stability term on the right-hand side above we need to control the degree of expansiveness of GD. Recall that for convex objectives GD is non-expansive (e.g. (Hardt et al., 2016)), that is \(\lVert(w-\eta\nabla\widehat{F}(w))-(w^{\prime}-\eta\nabla\widehat{F}(w^{ \prime}))\rVert\leq\lVert w-w^{\prime}\rVert\) for any \(w,w^{\prime}\). For the non-convex objective in our setting, the lemma below establishes a generalized non-expansiveness property via leveraging the structure of the objective's Hessian for the two-layer net.
**Lemma 9** (GD-Expansiveness).: _Let Assumptions 1 and 2 hold. For any \(w,w^{\prime}\in\mathbb{R}^{d^{\prime}}\), any step-size \(\eta>0\), and \(w_{\alpha}:=\alpha w+(1-\alpha)w^{\prime}\) it holds for \(H(w):=\eta\frac{LR^{2}}{\sqrt{m}}\widehat{F}^{\prime}(w)+\max\Big{\{}1,\eta \ell^{2}R^{2}\widehat{F}^{\prime\prime}(w)\Big{\}}\) that_
\[\Big{\lVert}\Big{(}w-\eta\nabla\widehat{F}(w)\Big{)}-\Big{(}w^{\prime}-\eta \nabla\widehat{F}(w^{\prime})\Big{)}\Big{\rVert}\leq\max_{\alpha\in[0,1]}H(w _{\alpha})\ \lVert w-w^{\prime}\rVert\,,\]
_where we define \(\widehat{F}^{\prime}(w):=\frac{1}{n}\sum_{i=1}^{n}\lvert f^{\prime}(y_{i}\Phi (w,x_{i}))\rvert\) and \(\widehat{F}^{\prime\prime}(w):=\frac{1}{n}\sum_{i=1}^{n}f^{\prime\prime}(y_{i }\Phi(w,x_{i}))\)._
This lemma can be further simplified for the class of self-bounded loss functions. Specifically, using \(\lvert f^{\prime}(u)\rvert\leq f(u)\) and \(f^{\prime\prime}(u)\leq 1\) from Assumptions 4 and 3.B, we immediately deduce the following.
**Corollary 9.1** (Expansiveness for self-bounded losses).: _In the setting of Lemma 9, further assume the loss satisfies Assumptions 3.B and 4. Provided \(\eta\leq 1/(\ell^{2}R^{2})\), it holds for all \(w,w^{\prime}\in\mathbb{R}^{d^{\prime}}\) that_
\[\Big{\lVert}\left(w-\eta\nabla\widehat{F}(w)\right)-\left(w^{\prime}-\eta \nabla\widehat{F}(w^{\prime})\right)\Big{\rVert}\leq\left(1+\eta\frac{LR^{2} }{\sqrt{m}}\max_{\alpha\in[0,1]}\widehat{F}(w_{\alpha})\right)\Big{\lVert}w- w^{\prime}\Big{\rVert}\,. \tag{14}\]
In Eq. (14) the expansiveness is weaker than in a convex scenario, where the coefficient would be \(1\) instead of \(1+\frac{\eta LR^{2}}{\sqrt{m}}\max_{\alpha\in[0,1]}\widehat{F}(w_{\alpha})\). However, for self-bounded losses (i.e. \(\lvert f^{\prime}(u)\rvert\leq f(u)\)) the "gap to convexity" \(\frac{\eta LR^{2}}{\sqrt{m}}\max_{\alpha\in[0,1]}\widehat{F}(w_{\alpha})\) in Corollary 9.1 is better than the gap from Lemma 9 for \(1\)-Lipschitz losses (i.e. \(\lvert f^{\prime}(u)\rvert\leq 1\)), which would be \(\frac{\eta LR^{2}}{\sqrt{m}}\). Indeed, after unrolling the GD iterates, the latter eventually leads to polynomial width requirements (Richards and Rabbat, 2021).
Instead, to obtain a polylogarithmic width, we use the expansiveness bound in Eq. (14) for self-bounded losses together with the generalized-local quasi-convexity property in Corollary 8.1 as follows. From Corollary 8.1, if \(m\) is large enough such that
\[\sqrt{m}\geq LR^{2}\lVert w_{t}-w_{t}^{\neg i}\rVert^{2},\ \ \ \ \forall t\in[T],\ \forall i\in[n],\]
then Eq. (12) holds on the GD path. This further simplifies the result of Corollary 9.1 applied for \(w=w_{t},w^{\prime}=w_{t}^{\neg i}\) into
\[\Big{\lVert}(w_{t}-\eta\nabla\widehat{F}^{\neg i}(w_{t}))-(w_{t}^{\neg i}- \eta\nabla\widehat{F}^{\neg i}(w_{t}^{\neg i}))\Big{\rVert}\leq\widetilde{H }_{t}^{\,i}\left\lVert w_{t}-w_{t}^{\neg i}\right\rVert,\]
where \(\widetilde{H}_{t}^{\,i}:=1+\frac{2\eta LR^{2}}{\sqrt{m}}\max\{\widehat{F}^{ \neg i}(w_{t}),\widehat{F}^{\neg i}(w_{t}^{\neg i})\}.\) Now from the optimization analyses in Sec. 5.1, we know intuitively that \(\widehat{F}^{\neg i}(w_{t})\leq\widehat{F}(w_{t})\) decays at rate \(\tilde{O}(1/t)\); thus, so does \(\widehat{F}^{\neg i}(w_{t}^{\neg i})\). Therefore, for all \(i\in[n]\)
the expansivity coefficient \(\widetilde{H}_{t}^{\,i}\) in the above display decays to \(1\) as GD progresses.
To formalize all these and connect them to the model-stability term in (13), note using the triangle inequality and the Gradient Self-boundedness property of Lemma 1 that
\[\left\|w_{t}-w_{t}^{\neg i}\right\|\leq\left\|(w_{t}-\eta\nabla\widehat{F}^{ \neg i}(w_{t}))-(w_{t}^{\neg i}-\eta\nabla\widehat{F}^{\neg i}(w_{t}^{\neg i})) \right\|+\frac{\eta\ell R}{n}\widehat{F}_{i}(w_{t})\,.\]
Unrolling this display over \(t\in[T]\), averaging over \(i\in[n]\), and using our expansiveness bound above we show in Appendix B the following bound for the model stability term
\[\frac{1}{n}\sum_{i=1}^{n}\left\|w_{T}-w_{T}^{\neg i}\right\|\leq\frac{\eta \ell Re^{\beta}}{n}\sum_{t=0}^{T-1}\widehat{F}(w_{t})\,, \tag{15}\]
where \(\beta\lesssim\left(\sum_{t=1}^{T}\widehat{F}(w_{t})+\sum_{t=1}^{T}\widehat{F} ^{\neg i}(w_{t}^{\neg i})\right)/\sqrt{m}\,.\) But we know from training-loss bounds in Theorem 2 that \(\sum_{t=1}^{T}\widehat{F}(w_{t})\lesssim\|w-w_{0}\|^{2}\) (and similarly for \(\sum_{t=1}^{T}\widehat{F}^{\neg i}(w_{t}^{\neg i})\)). Thus, \(\beta\lesssim\|w-w_{0}\|^{2}/\sqrt{m}\). At this point, the theorem's conditions guarantee \(\sqrt{m}\gtrsim\|w-w_{0}\|^{2}\), so that \(\beta=O(1)\). Plugging this back into (15), we conclude with the following stability bound: \(\frac{1}{n}\sum_{i=1}^{n}\|w_{T}-w_{T}^{\neg i}\|\lesssim\sum_{t=0}^{T} \widehat{F}(w_{t})/n.\) Applying the train-loss bounds of Theorem 2 once more completes the proof.
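The on-average model stability \(\frac{1}{n}\sum_{i}\|w_{T}-w_{T}^{\neg i}\|\) entering Eq. (13) can also be measured directly in a small toy problem, by running GD once on \(\widehat{F}\) and once on each leave-one-out loss \(\widehat{F}^{\neg i}\). A minimal sketch (all sizes and hyperparameters here are illustrative assumptions; note that, matching the definition above, the leave-one-out loss is still normalized by \(n\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m, T, eta = 30, 5, 100, 200, 0.2
X = rng.normal(size=(n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.where(X[:, 0] >= 0, 1.0, -1.0)                  # labels from a linear rule
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)       # fixed second layer

def grad(w, idx):
    """Gradient of (1/n) * sum_{i in idx} f(y_i Phi(w, x_i)), logistic f."""
    Z = X[idx] @ w.T                                   # pre-activations, shape (|idx|, m)
    phi = np.tanh(Z) @ a
    gp = -y[idx] / (1.0 + np.exp(y[idx] * phi))        # f'(y Phi) * y per sample
    return (gp[:, None] * (1 - np.tanh(Z) ** 2) * a).T @ X[idx] / n

def run_gd(idx, w0):
    w = w0.copy()
    for _ in range(T):
        w = w - eta * grad(w, idx)
    return w

w0 = rng.normal(size=(m, d))
w_full = run_gd(np.arange(n), w0)
stability = np.mean([np.linalg.norm(w_full - run_gd(np.delete(np.arange(n), i), w0))
                     for i in range(n)])
print("on-average model stability:", stability)        # the quantity entering Eq. (13)
```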
## 6 Prior Works
The theoretical study of generalization properties of neural networks (NN) is more than two decades old (Bartlett, 1996; Bartlett et al., 1998). Recently, there has been an increased interest in understanding and improving generalization of SGD/GD on over-parameterized neural networks, e.g. (Allen-Zhu et al., 2019; Oymak and Soltanolkotabi, 2020; Javanmard et al., 2020; Richards and Rabbat, 2021). These results, however, typically require very large width, \(m=\text{poly}(n)\). We discuss the most closely related works below.
**Quadratic loss.** For quadratic loss, (Li and Liang, 2018; Soltanolkotabi et al., 2018; Allen-Zhu et al., 2019; Oymak and Soltanolkotabi, 2020; Liu et al., 2022) showed that sufficiently over-parameterized neural networks of polynomial width satisfy a local Polyak-Lojasiewicz (PL) condition \(\|\nabla\widehat{F}(w)\|^{2}\geq 2\mu(\widehat{F}(w)-\widehat{F}^{\star})\), where \(\mu\) is at least the smallest eigenvalue of the neural tangent kernel matrix. The PL property in this case implies that the training loss converges linearly with the rate \(\widehat{F}(w_{t})=O((1-\eta\mu)^{t})\) if the GD iterates remain in the PL region. Moreover, (Charles and Papailiopoulos, 2018; Lei and Ying, 2020) have used the PL condition to further characterize stability properties of corresponding non-convex models. Notably, (Lei and Ying, 2020) derived order-optimal rates \(O(\frac{1}{\mu n})\) for the generalization loss. However these rates only apply to quadratic loss. Models trained with logistic or exponential loss on separable data do _not_ satisfy the PL condition even for simple interpolating linear models. Aside from the PL condition-related results, but again for quadratic loss, (Oymak et al., 2019) showed, under specific assumptions on the data translating to a low-rank NTK, that logarithmic width is sufficient to obtain classification error of order \(O(n^{-1/4})\). In general, they achieve error rate \(O(n^{-1/2})\), but for \(m=\tilde{\Omega}(n^{2})\).
**Logistic-loss minimization with linear models.** Logistic-loss minimization is more appropriate for classification and rate-optimal generalization bounds for GD have been obtained recently in the linear setting, where the training objective is convex. In particular, for linear logistic regression on data that are linearly separable with margin \(\gamma>0\), (Shamir, 2021) proved a finite-time test-error bound \(O(\frac{\log^{2}T}{\gamma^{2}T}+\frac{\log^{2}T}{\gamma^{2}n})\). Ignoring log factors, this is order-optimal with the sample size \(n\) and training horizon \(T\). Their proof uses exponential-decaying properties of the logistic loss to control the norm of gradient iterates, which it cleverly combines with Markov's inequality to bound the fraction of well-separated datapoints at any iteration. This in turn translates to a test-error bound by standard margin-based generalization bounds. More recently, (Schliserman and Koren, 2022) used algorithmic-stability analysis proving the same rates (up to log factors) for the test loss. Their results hold for general convex, smooth, self-bounded and decreasing objectives under a realizability assumption suited for convex objectives (analogous to Assumption 5). Specifically,
this includes linear logistic regression with linearly separable data. Here, we show that analogous rates on the test loss hold true for more complicated nonconvex settings where data are separable by shallow neural networks.
**Stability of GD in NN.** State-of-the-art generalization bounds on shallow neural networks via the stability-analysis framework have appeared very recently in (Richards and Rabbat, 2021; Richards and Kuzborskij, 2021; Lei et al., 2022). For Lipschitz losses, (Richards and Rabbat, 2021) shows that the empirical risk is weakly convex with a weak-convexity parameter that improves as the neural-network width \(m\) increases. Leveraging this observation, they establish stability bounds for GD iterates at time \(T\) provided sufficient parameterization \(m=\tilde{\Omega}(T^{2})\). Since the logistic loss is Lipschitz, these bounds also apply to our setting. Nevertheless, our work improves upon (Richards and Rabbat, 2021) in that: (i) we require significantly smaller width, poly-logarithmic rather than polynomial, and (ii) we show \(\tilde{O}(1/n)\) test loss bounds in the realizable setting, while their bounds are \(O(T/n)\). Central to our improvements is a substantially refined analysis of the curvature of the loss via identifying and proving a generalized quasi-convexity property for neural networks of polylogarithmic width trained with self-bounded losses (see Section 5 for details). Our results also improve upon the other two works (Richards and Kuzborskij, 2021; Lei et al., 2022), which both require polynomial widths. However, we note that these results are not directly comparable since (Richards and Kuzborskij, 2021; Lei et al., 2022) focus on quadratic-loss minimization. See also Appendix E.
**Uniform convergence in NN.** Uniform bounds on the generalization loss have been derived in the literature via Rademacher complexity analysis (Bartlett and Mendelson, 2002); see for example (Neyshabur et al., 2015; Arora et al., 2019; Golowich et al., 2020; Vardi et al., 2022; Frei et al., 2022) for a few results in this direction. These works typically obtain bounds of order \(O(\frac{\mathcal{R}}{\sqrt{n}})\), where \(\mathcal{R}\) depends on the Rademacher complexity of the hypothesis space. Recent works (Ji and Telgarsky, 2020; Chen et al., 2020) also utilized Rademacher complexity analysis to obtain test loss rates of \(O(1/\sqrt{n})\) under an NTK separability assumption (see also (Nitanda et al., 2019)) with polylogarithmic width requirement for shallow and deep networks, respectively. Instead, while maintaining minimal width requirements, we obtain test-loss rates \(\tilde{O}(1/n)\), which are order-optimal. Our approach, which is based on algorithmic-stability, is also different and uncovers new properties of the optimization landscape, including a generalized local quasi-convexity property. On the other hand, the analysis of (Ji and Telgarsky, 2020; Chen et al., 2020) applies to ReLU activation and bounds the test loss with high-probability over the sampling of the training set. Instead, we require smooth activations similar to other studies such as (Oymak et al., 2019; Chatterji et al., 2021; Bai and Lee, 2020; Nitanda et al., 2019; Richards and Rabbat, 2021; Richards and Kuzborskij, 2021; Lei et al., 2022) and we bound the test loss in expectation over the training set. Finally, we also note that data-specific generalization bounds for two-layer nets have also appeared recently in (Cao et al., 2022; Frei et al., 2022). However, those results require that data are nearly orthogonal.
**Convergence/implicit bias of GD.** Convergence and implicit bias of GD for logistic/exponential loss functions on linear models and neural networks have been investigated in (Ji and Telgarsky, 2018; Soudry et al., 2018; Nacson et al., 2019; Lyu and Li, 2020; Chizat and Bach, 2020; Chatterji et al., 2021). In particular, (Lyu and Li, 2020; Ji and Telgarsky, 2020) have shown for homogeneous neural-networks that GD converges in direction to a max-margin solution. While certainly powerful, this implicit-bias convergence characterization becomes relevant only when the number \(T\) of GD iterations is exponentially large. Instead, our convergence bounds apply for finite \(T\) (on the order of sample size), thus are more practically relevant. Moreover, their results assume a GD iterate \(t_{0}\) such that \(\widehat{F}(w_{t_{0}})\leq\log(2)/n\). A similar assumption appears in (Chatterji et al., 2021), which requires initialization \(\widehat{F}(w_{0})\leq 1/n^{1+C}\) for constant \(C>0\). Our approach is entirely different: we prove that sufficient parameterization benefits the loss curvature and suffices for GD steps to find an interpolating model and attain near-zero training loss, provided data satisfy an appropriate realizability condition.
## 7 Conclusions
In this paper we study smooth shallow neural networks trained with self-bounded loss functions, such as logistic loss. Under interpolation, we provide minimal sufficient parameterization conditions to achieve rate-optimal generalization and optimization bounds. These bounds improve upon prior results which require substantially large over-parameterization or obtain sub-optimal generalization rates. Specifically, we significantly improve previous stability-based analyses in terms of both relaxing the parameterization requirements and obtaining improved rates. Although our focus was on binary classification with shallow networks, our approach can be extended to multi-class settings and deep networks, which will be explored in future studies. Extending our results to the stochastic case by analyzing SGD is another important future direction. Moreover, while our current treatment relies on smoothness of the activation function to exploit properties of the curvature of the training objective, we aim to examine the potential of our results to extend to non-smooth activations. Finally, our generalization analysis bounds the expectation of the test loss (over data sampling), and extending these guarantees to a high-probability setting is an important future direction.
# Convolutional neural networks for large-scale dynamical modeling of itinerant magnets

Xinlun Cheng, Sheng Zhang, Phong C. H. Nguyen, Shahab Azarfar, Gia-Wei Chern, Stephen S. Baek

arXiv:2306.11833v1 (2023-06-20), http://arxiv.org/abs/2306.11833v1
###### Abstract
Complex spin textures in itinerant electron magnets hold promises for next-generation memory and information technology. The long-ranged and often frustrated electron-mediated spin interactions in these materials give rise to intriguing localized spin structures such as skyrmions. Yet, simulations of magnetization dynamics for such itinerant magnets are computationally difficult due to the need for repeated solutions to the electronic structure problems. We present a convolutional neural network (CNN) model to accurately and efficiently predict the electron-induced magnetic torques acting on local spins. Importantly, as the convolutional operations with a fixed kernel (receptive field) size naturally take advantage of the locality principle for many-electron systems, CNN offers a scalable machine learning approach to spin dynamics. We apply our approach to enable large-scale dynamical simulations of skyrmion phases in itinerant spin systems. By incorporating the CNN model into Landau-Lifshitz-Gilbert dynamics, our simulations successfully reproduce the relaxation process of the skyrmion phase and stabilize a skyrmion lattice in larger systems. The CNN model also allows us to compute the effective receptive fields, thus providing a systematic and unbiased method for determining the locality of the original electron models.
## I Introduction
Itinerant frustrated magnets with electron-mediated spin-spin interactions often exhibit complex non-collinear or non-coplanar spin textures. Of particular interest are particle-like objects such as magnetic vortices and skyrmions, which are not only of fundamental interest in magnetism but also have important technological implications in the emerging field of _spintronics_[1, 2, 3, 4, 5, 6, 7]. These nanometer-sized localized spin textures are characterized by nontrivial topological invariants and are rather stable objects with long lifetimes. In itinerant electron magnets, skyrmions can be moved, created, and annihilated not only by magnetic fields but also by electrical currents thanks to electron-spin interactions. The presence of such complex textures could also give rise to intriguing electronic and transport properties, such as the topological Hall effects and topological Nernst effects [8, 9, 10, 7], due to a nontrivial Berry phase acquired by electrons when traversing closed loops of non-coplanar spins [11].
Dynamical modeling of complex textures in itinerant spin systems, however, is a computationally challenging task. While magnetic moments in most metallic skyrmion materials can be well approximated as classical spin vectors, the local effective magnetic fields, analogous to forces in molecular dynamics, originate from exchange interactions with itinerant electrons and must be computed quantum mechanically. Dynamics simulations of such itinerant magnets thus require solving an electronic structure problem associated with the instantaneous spin configuration at every time step. Repeated quantum calculations would be prohibitively expensive for large-scale simulations. Consequently, empirical classical spin Hamiltonians, from which the local fields can be explicitly calculated, are often employed in large-scale dynamical simulations of skyrmion magnets [12, 13]. Yet, such classical spin models often cannot capture the intricate long-range spin-spin interactions mediated by electrons.
The computational complexity of the above quantum approaches to spin dynamics is similar to the so-called quantum or _ab initio_ molecular dynamics (MD) methods. Contrary to classical MD methods that are based on empirical force fields, the atomic forces in quantum MD are computed by integrating out electrons on-the-fly as the atomic trajectories are generated [14]. Various many-body methods, notably the density functional theory, have been used for the force calculation of quantum MD. However, the computational cost of repeated electronic structure solutions significantly restricts the accessible scales of atomic simulations. To overcome this computational difficulty, machine learning (ML) methods have been exploited to develop force-field models by accurately emulating the time-consuming many-body calculations, thus enabling large-scale MD simulations with the desired quantum accuracy.
Crucial to the remarkable scalability of ML-based force-field models is the divide-and-conquer approach proposed in the pioneering works of Behler and Parrinello [15], and Bartok _et al._[16]. In this approach, the total energy of the system is partitioned into local contributions \(E=\sum_{i}\epsilon_{i}\), where \(\epsilon_{i}\) is called the atomic energy and only depends on the local environment of the \(i\)-th atom [15, 16]. The atomic forces are then obtained from the derivatives of the predicted energy: \(\mathbf{F}_{i}=-\partial E/\partial\mathbf{r}_{i}\), where \(\mathbf{r}_{i}\) is the atomic position vector. Crucially, the complicated dependence of atomic energy \(\epsilon_{i}\) on its local neighborhood is approximated by the ML model, which
is trained on the condition that both the predicted individual forces \(\mathbf{F}_{i}\) as well as the total energy \(E\) agree with the quantum calculations [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. It is worth noting that physically the principle of locality, or the so-called near-sightedness of electronic matters, lies at the heart of this approach [27; 28].
The tremendous success of ML methods in quantum MD simulations has spurred similar approaches to multiscale dynamical modeling of other functional electronic systems in condensed matter physics [29; 30; 31; 32; 33; 34; 35]. In particular, the Behler-Parrinello (BP) ML scheme [15; 16] was generalized to build effective magnetic energy or torque-field models with the accuracy of quantum calculations for itinerant electron magnets [33; 34; 36; 37]. Notably, large-scale dynamical simulations enabled by such ML models uncovered intriguing phase separation dynamics that results from the nontrivial interplay between electrons and local spins. While the conventional BP scheme can only represent conservative forces, a generalized potential theory for the Landau-Lifshitz equation allows one to extend the BP scheme to describe non-conserved spin torques that are crucial to the dynamical modeling of out-of-equilibrium itinerant spin systems [35].
In this paper, we present an ML torque model for itinerant magnets based on convolutional neural networks (CNN). CNN is a class of neural networks that can be characterized by its local connectivity, implemented via finite-sized convolution kernels. Importantly, the convolution operation with a finite-sized kernel naturally incorporates the locality principle into the ML structure, thus offering an efficient implementation of the ML torque model that can be straightforwardly scaled to larger systems. Our CNN model is designed to directly predict the vector torque field at every site without the need for the introduction of local energies as in the BP scheme. Data augmentation techniques are employed to incorporate the spin-rotational symmetry and the lattice symmetry into the CNN spin-torque model. We demonstrate our approach on an itinerant spin model which exhibits a skyrmion crystal phase at an intermediate magnetic field. We show that dynamical simulations with magnetic torques computed from the trained CNN model faithfully reproduce the relaxation process of the itinerant spin systems. Moreover, the CNN model, while trained by datasets from small systems, is capable of stabilizing a skyrmion lattice on larger systems, thus demonstrating the transferability and scalability of our ML approach.
The rest of the paper is organized as follows. In Section II, we discuss the methods for simulating the spin dynamics of itinerant electron magnets. A triangular-lattice s-d model, a well-studied itinerant spin system, is used as a concrete example to highlight the complexity of the dynamical simulations. We also briefly review BP-type ML approaches, where a neural network is trained to approximate a local energy function. Section III presents the CNN structure used for the prediction of spin-torque. Details of the data augmentation for incorporating symmetries and how the ML model can be scaled to larger systems are also discussed. Using the s-d model as an example, a benchmark of the CNN models and simulation results based on the trained models are presented in Section IV. We also ascertain the scalability and symmetry of the proposed CNN method, as well as its compliance with the locality principle. Finally, we summarize our work and discuss future directions in Sec. V.
## II Magnetization dynamics of the itinerant magnets
The magnetization dynamics in spin systems is governed by the Landau-Lifshitz-Gilbert (LLG) equation [38]
\[\frac{d\mathbf{S}_{i}}{dt}=\mathbf{T}_{i}-\alpha\,\mathbf{S}_{i}\times\mathbf{ T}_{i}+\mathbf{\tau}_{i}, \tag{1}\]
where \(\mathbf{T}_{i}\) is the magnetic torque defined as
\[\mathbf{T}_{i}=\gamma\mathbf{S}_{i}\times\mathbf{H}_{i}. \tag{2}\]
Here \(\gamma\) is the gyromagnetic ratio, \(\mathbf{H}_{i}\) is an effective exchange field acting on spin-\(i\), \(\alpha\) is the damping coefficient, and \(\mathbf{\tau}_{i}(t)=\mathbf{S}_{i}\times\mathbf{\eta}_{i}(t)\) is a fluctuating torque generated by a random local field \(\mathbf{\eta}_{i}\) of zero mean. The stochastic field \(\mathbf{\eta}_{i}\) is assumed to be a Gaussian random variable with the variance determined from \(\alpha\) and temperature \(T\) through the fluctuation-dissipation theorem. LLG simulations are widely used to study dynamical phenomena in a wide range of magnetic systems, including spin waves in unusual magnetic phases and dynamical behaviors of skyrmions and other spin textures.
For adiabatic spin dynamics, the local exchange field is given by the derivative of the system energy \(E=E(\mathbf{S}_{i})\):
\[\mathbf{H}_{i}=-\frac{\partial E}{\partial\mathbf{S}_{i}}. \tag{3}\]
For magnetic insulators, interactions between spins are often short-ranged. The resultant magnetic energy has the form of bilinear interactions between a few nearest-neighbor spins on the lattice, e.g. \(E=\sum_{ij}(J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}+\mathbf{D}_{ij}\cdot \mathbf{S}_{i}\times\mathbf{S}_{j})\), where \(J_{ij}\) denotes the isotropic Heisenberg exchange interaction and \(\mathbf{D}_{ij}\) represents the anisotropic exchange, also known as the Dzyaloshinskii-Moriya interaction [12; 13]. The exchange field of such models is explicitly given by \(\mathbf{H}_{i}=-\sum_{j}(J_{ij}\mathbf{S}_{j}+\mathbf{D}_{ij}\times\mathbf{ S}_{j})\), where the summation is restricted to a few nearest neighbors, and can be very efficiently computed for large-scale LLG simulations.
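As a point of reference, for such short-ranged models the exchange field and a step of the damped LLG dynamics, Eqs. (1)-(3), can be written in a few lines. The sketch below is a minimal illustration on a square lattice with a single bond-independent DM vector and arbitrary parameter values (real DM vectors are bond-dependent, and the simulations in this paper use a triangular lattice); the stochastic torque is omitted.

```python
import numpy as np

def exchange_field(S, J=1.0, D=(0.0, 0.0, 0.1), B=(0.0, 0.0, 0.05)):
    """H_i = -sum_j (J S_j + D x S_j) for nearest neighbors on a periodic square
    lattice, plus a Zeeman field B. S has shape (Lx, Ly, 3)."""
    D = np.asarray(D, dtype=float)
    H = np.zeros_like(S)
    for axis in (0, 1):
        for shift in (+1, -1):
            Sj = np.roll(S, shift, axis=axis)          # neighboring spins
            H += -J * Sj - np.cross(D, Sj)
    return H + np.asarray(B, dtype=float)

def llg_step(S, dt=0.01, gamma=1.0, alpha=0.05):
    """One explicit step of Eq. (1) without the stochastic torque."""
    H = exchange_field(S)
    T = gamma * np.cross(S, H)                         # precession torque, Eq. (2)
    dS = T - alpha * np.cross(S, T)                    # add Gilbert damping
    S = S + dt * dS                                    # simple projected Euler step;
    return S / np.linalg.norm(S, axis=-1, keepdims=True)   # renormalize to |S_i| = 1

S = np.random.default_rng(3).normal(size=(32, 32, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)
for _ in range(100):
    S = llg_step(S)
```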
On the other hand, the exchange fields in a metallic magnet come from interactions between local spins and itinerant electrons. Here we consider spin dynamics in the adiabatic approximation, which is analogous to the Born-Oppenheimer approximation in quantum molecular dynamics [14]. In the adiabatic limit, electron relaxation is assumed to be much faster than the time scale of local magnetic moments. As a result, the magnetic energy \(E\) in
Eq. (3) can be obtained by freezing the spin configuration and integrating out the electrons. The resultant spin-dependent energy function, \(E=E(\mathbf{S}_{i})\), can be viewed as a potential energy surface (PES) in the high-dimensional spin space, similar to the PES in Born-Oppenheimer MD simulations. In practice, the calculation of this magnetic PES requires solving the electron structure problem that depends on the instantaneous spin structure \(\{\mathbf{S}_{i}(t)\}\).
For concreteness, here we consider a generic single-band s-d model for such itinerant magnets. The s-d model describes the interaction between itinerant \(s\)-band electrons and magnetic moments \(\mathbf{S}_{i}\) of localized \(d\)-electrons. Its Hamiltonian reads
\[\mathcal{H}=\sum_{ij}\sum_{\alpha=\uparrow,\downarrow}t_{ij}c_{i\alpha}^{ \dagger}c_{j\alpha}-J\sum_{i}\sum_{\alpha,\beta=\uparrow,\downarrow}\mathbf{S }_{i}\cdot c_{i\alpha}^{\dagger}\mathbf{\sigma}_{\alpha\beta}c_{i\beta}, \tag{4}\]
where \(c_{i\alpha}^{\dagger}/c_{i\alpha}\) are creation/annihilation operators of an electron with spin \(\alpha=\uparrow,\downarrow\) at site \(i\), \(t_{ij}\) is the electron hopping constant between a pair of sites \((i,j)\), and \(J\) denotes the strength of local Hund's rule coupling between electron spin and magnetic moment \(\mathbf{S}_{i}\) of localized \(d\)-electrons. For most skyrmion magnets, these local magnetic moments can be well approximated as classical spins of fixed length \(|\mathbf{S}_{i}|=S\).
For small Hund's coupling \(J\ll t_{ij}\), the effective energy of spins can be obtained by integrating out electrons via a second-order perturbation calculation, giving rise to a long-ranged oscillatory interaction, similar to the so-called Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction (Ruderman and Kittel, 1954; Kasuya, 1954; Yosida, 1954). However, for intermediate and large Hund's coupling, the effective energy to be used for the force calculation in Eq. (3) has to be obtained by integrating out electrons on the fly:
\[E=\langle\mathcal{H}\rangle=\mathrm{Tr}(\rho\mathcal{H}), \tag{5}\]
where \(\rho=\exp(-\mathcal{H}/k_{B}T)\) is the density matrix of the equilibrium electron liquid within the adiabatic approximation. The calculation of the density matrix, in the absence of electron-electron interaction, amounts to solving a disordered tight-binding Hamiltonian for a given spin configuration. The standard method for solving tight-binding models is based on exact diagonalization, whose complexity scales cubically with the system size. As a result, for large-scale LLG simulations of the s-d model, repeated ED calculations of the electron density matrix can be overwhelmingly time-consuming.
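A minimal exact-diagonalization sketch of this procedure is given below for a small periodic chain (our simplification; the paper's setup is a triangular lattice with hoppings \(t_{1}\), \(t_{3}\) solved with the kernel polynomial method, and the values of \(\mu\) and \(k_{B}T\) here are illustrative).

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sd_hamiltonian(S, t1=1.0, J=1.0):
    """Single-band s-d Hamiltonian, Eq. (4), on a periodic N-site chain.
    S: (N, 3) classical spins. Returns the 2N x 2N matrix in site x spin basis."""
    N = S.shape[0]
    hop = np.zeros((N, N))
    idx = np.arange(N)
    hop[idx, (idx + 1) % N] = -t1
    hop = hop + hop.T
    H = np.kron(hop, np.eye(2)).astype(complex)
    for i in range(N):                                  # local Hund coupling -J S_i . sigma
        H[2*i:2*i+2, 2*i:2*i+2] += -J * (S[i, 0]*sx + S[i, 1]*sy + S[i, 2]*sz)
    return H

def density_matrix(H, mu=0.0, kT=0.05):
    """rho = f(H) with Fermi-Dirac occupations at chemical potential mu;
    this is the rho entering E = Tr(rho H), Eq. (5)."""
    E, U = np.linalg.eigh(H)
    f = 0.5 * (1.0 - np.tanh((E - mu) / (2 * kT)))      # numerically stable Fermi function
    return (U * f) @ U.conj().T

rng = np.random.default_rng(4)
S = rng.normal(size=(16, 3)); S /= np.linalg.norm(S, axis=1, keepdims=True)
H = sd_hamiltonian(S)
rho = density_matrix(H, mu=-0.5)
print("E =", np.trace(rho @ H).real)                    # effective spin energy, Eq. (5)
```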
As discussed in Sec. I, the BP scheme has been generalized to develop ML-based models for the effective spin energy \(E(\{\mathbf{S}_{i}\})\) of itinerant magnets [33; 34; 36; 37]. In this approach, the total energy is partitioned into local contributions
\[E=\sum_{i}\epsilon_{i}=\sum_{i}\varepsilon(\mathcal{C}_{i}), \tag{6}\]
where the energy \(\epsilon_{i}=\varepsilon(\mathcal{C}_{i})\) is associated with the \(i\)-th lattice site and is assumed to depend only on spin configuration \(\mathcal{C}_{i}=\{\mathbf{S}_{j}\mid\|\mathbf{r}_{j}-\mathbf{r}_{i}\|<r_{c}\}\) in its neighborhood. This local energy function \(\varepsilon(\mathcal{C}_{i})\) can be viewed as the building block of the magnetic PES. Importantly, the complicated dependence of the PES on the neighborhood spins is to be approximated by fully connected neural networks [33; 34; 36; 37]. To preserve the SO(3) spin rotation symmetry, the inner product between spin pairs \(b_{jk}=\mathbf{S}_{j}\cdot\mathbf{S}_{k}\) and scalar product between spin triplets \(\chi_{jkl}=\mathbf{S}_{j}\cdot\mathbf{S}_{k}\times\mathbf{S}_{l}\) within the neighborhood are used as building blocks to construct feature variables that are input to the neural network. Finally, exchange fields \(\mathbf{H}_{i}\) acting on spins are obtained by applying automatic differentiation to the ML energy model.
## III CNN spin torque model
The BP-type schemes described above essentially provide an energy-based ML model for force-field calculations. A crucial step is the partitioning of the total energy into local contributions \(\epsilon_{i}\), which cannot be directly computed from the electronic structure methods that are used to generate the training dataset. As a result, the loss function \(L\) cannot be directly determined from the predicted energies \(\epsilon_{i}\). Instead, it is constructed from, _e.g.,_ the mean square error (MSE) of the "forces" (in our case, the spin-torque fields), and only implicitly depends on the predicted energy through automatic differentiation. However, the uncertainties due to the introduction of such intermediate local energies often complicate the training of BP-type models. Moreover, while one advantage of the BP-type scheme is the explicit inclusion of the physical constraint of conservative forces, such energy-based ML approaches are also restricted to representing only conservative forces. In this Section, we present an alternative ML approach that directly predicts the vector forces without going through an intermediate energy.
### Convolutional Neural Networks
The fact that spins in metallic magnets are defined on well-known lattices suggests that spin configurations can be treated as generalized "images," which can then be processed using powerful image-processing techniques developed in recent years, such as CNNs. Below, we present a CNN model for the direct prediction of torques \(\mathbf{T}_{i}\) that drive the spin dynamics. As illustrated in Figure 1, the proposed network takes the spin configuration \(\{\mathbf{S}_{i}\}\) on the lattice as input and returns the torques \(\{\mathbf{T}_{i}\}\) as output. The model is comprised of multiple convolution layers \(f_{m}\) with associated activation (nonlinearity) layers \(\sigma_{m}\) to model the complex nonlinear relationship between \(\{\mathbf{S}_{i}\}\) and \(\{\mathbf{T}_{i}\}\) as a composition of such layers: \(f_{\mathrm{CNN}}=(\sigma_{L}\circ f_{L})\circ\cdots\circ(\sigma_{1}\circ f_{1})\), where \(L\) is the number of layers or the _depth_ of the CNN model.
Given an input vector field \(V\in C^{\infty}(\mathbb{R}^{2},\mathbb{R}^{d})\), each convolution layer \(f_{m}\) maps the vector field onto an output vector field \(W\in C^{\infty}(\mathbb{R}^{2},\mathbb{R}^{k})\) by convolving a _kernel_ ten
sor field \(h_{m}(X):=h(X;\theta_{m})\) with trainable parameters \(\theta_{m}\), via the convolution operation:
\[W(\mathbf{r}):=\int_{\mathbb{R}^{2}}V(\mathbf{q})h_{m}(\mathbf{r}-\mathbf{q})d \mathbf{q}. \tag{7}\]
Each vector element of the vector field \(W\) then undergoes the activation function \(\sigma_{m}:\mathbb{R}\rightarrow\mathbb{R}\) to produce the output vector field \(A\in C^{\infty}(\mathbb{R}^{2},\mathbb{R}^{k})\), called _activation maps_. A variety of activation functions can be used in CNNs. In the current work, we use the _rectified linear unit_, or _ReLU_[42] as an activation function:
\[\sigma_{m}(x):=\max(0,x). \tag{8}\]
for \(m=1,\ldots,L-1\). Note that the final layer \(f_{L}\) has no activation function associated with it, or technically, \(\sigma_{L}(x)=x\).
Typically in CNNs, the support of a kernel \(\text{supp}(h_{m})\), _i.e.,_ the region where \(h_{m}\) has nonzero values (also known as the _receptive field_ of the kernel), is limited to a small region (_e.g.,_\(5\times 5\) lattice sites) such that the activation response \(W(\mathbf{r})\), and thereby \(A(\mathbf{r})\), at position \(\mathbf{r}\) is limited to the patterns of \(V\) only within the close proximity of \(\mathbf{r}\). It is worth noting that the physical justification of employing such finite-size kernels is the principle of locality: _viz.,_ local physical quantities, such as local spin torque \(\mathbf{T}_{i}=\mathbf{T}(\mathbf{r}_{i})\), are predominantly determined by the local environment of site-\(i\):
\[\mathbf{T}_{i}=\mathbf{\mathcal{T}}(\mathcal{C}_{i}), \tag{9}\]
where \(\mathcal{C}_{i}\) is the magnetic environment in the vincinity of site-\(i\), and the vector function \(\mathbf{\mathcal{T}}(\cdot)\) is to be modeled by the CNN. The range of the neighborhood \(\mathcal{C}_{i}\) is determined by the sizes of kernels and the number of convolution layers.
Note that Eqs. (7) & (8) imply that the output activation \(A(\mathbf{r})\) at position \(\mathbf{r}\) will be of a large, positive magnitude, only if the input vector field \(V(\mathbf{r})\) is closely correlated to the (shifted) kernel \(h_{m}(\mathbf{q}-\mathbf{r})\). Therefore, the goal of training the CNN is to find the unknown kernel parameters \(\theta_{m=1,\ldots,L}\) that determines the function shapes of \(\{h_{m}\}\), in order to produce the adequate activation values such that the final output of the model \(f_{\text{CNN}}(\{\mathbf{S}_{i}\})\) can reasonably approximate the ground truth spin torque \(\{\mathbf{T}_{i}\}\) in the training data.
Meanwhile, the composition of convolution layers enables hierarchical modeling of the spin-torque relationship. That is, while an individual kernel limited to a small region may only represent rather simplistic patterns (_e.g.,_ small blobs), the composition of such kernels across layers alongside the nonlinear ReLU activation can produce fairly complex, nonlinear patterns. Furthermore, the composition of convolution layers \((\sigma_{n}\circ f_{n})\circ(\sigma_{m}\circ f_{m})\) in effect produces a larger receptive field area, equal to the Minkowski sum1 of the receptive fields of the individual layers \(\text{supp}(h_{n}\otimes h_{m})=\text{supp}(h_{n})\oplus\text{supp}(h_{m})\). Therefore, stacked convolution layers produce a natural hierarchy, in which earlier layers represent local, primitive patterns in a small proximity while latter (deeper) layers represent more global, sophisticated patterns in a relatively larger periphery.
Footnote 1: Note that the definition of receptive fields using the notion of Minkowski sum may only be applicable to our particular setting, in which we are considering both the input and output vector fields over the same domain \(\mathbb{R}^{2}\).
Finally, a purely convolutional CNN, without any conventional fully connected (dense) layers, can restrict the overall receptive field size of the entire model \(\text{supp}(h_{L}\otimes\cdots\otimes h_{1})\) to a predetermined lattice size, presenting a distinct advantage of _built-in locality_. Further, with a purely convolutional design, since the sizes of kernels in a CNN are fixed for a given itinerant model, the successfully trained CNN model can then be used in much larger lattice systems without the need to rebuild or retrain a new neural network. The CNN structure thus provides a natural approach to implementing scalable ML models based on the locality principle.

Figure 1: Schematic diagram of the CNN-based ML model for spin-torque prediction of itinerant electron magnets. The spin configuration \(\{\mathbf{S}_{i}\}\) on a lattice is first flattened to give three arrays, corresponding to the three components of spins, which are input to a series of ResNet blocks. Details of the ResNet are presented in Fig. 2. The output of the ResNet blocks is then processed through additional convolution layers. The final output consists of three arrays which, after deflattening, correspond to the torques \(\{\mathbf{T}_{i}\}\) that drive the spin dynamics.
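The receptive-field composition described above is easy to verify empirically: differentiating a single output site with respect to the input reveals exactly which input sites can influence it. A small illustrative sketch (our own; the nonlinearity is omitted since it does not alter the receptive-field geometry):

```python
import torch
import torch.nn as nn

# Two stacked 5x5 convolutions compose into a 9x9 overall receptive field,
# the Minkowski sum of the two kernel supports.
net = nn.Sequential(nn.Conv2d(3, 8, kernel_size=5, padding=2),
                    nn.Conv2d(8, 3, kernel_size=5, padding=2))

x = torch.randn(1, 3, 48, 48, requires_grad=True)
net(x)[0, 0, 24, 24].backward()                 # one output site near the center
mask = x.grad.abs().sum(dim=1)[0] > 0           # input sites influencing that output
rows, cols = torch.nonzero(mask, as_tuple=True)
print((rows.max() - rows.min() + 1).item(),
      (cols.max() - cols.min() + 1).item())     # prints: 9 9
```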
### Model Architecture
Figure 1 shows a schematic diagram of our CNN architecture. The input to the CNN is the spin vector field \(\{\mathbf{S}_{i}\}\) transformed from the triangular lattice to a square grid ("flattening"), as most convolution operations expect a square input. We employed four ResNet blocks, inspired by He _et al._[43], as our backbone, which processes the input into an activation map of 512 features. These features then undergo two additional convolution layers, which in the end produce the torques as the output of the network. In our model, these torque vector outputs are obtained in a normalized range of values with a mean magnitude of 1. Such normalized outputs are then scaled by the mean magnitude of the torque vectors in the training data set. Finally, the predicted torques on the square grid are "unflattened" onto the original triangular lattice.
Compared to fully connected (dense) layers, in which each neuron aggregates values across the entire domain into a single scalar value via the weighted sum, convolution layers preserve the spatial structure of the input domain. Therefore, with the proper boundary condition (_e.g._, 'padding' in the machine learning jargon), the output lattice is guaranteed to have the same size and resolution as the input lattice. Therefore, a CNN comprised of purely convolution layers, without any fully connected layers, can be scaled to an arbitrary lattice of practically any size, as long as the lattice element has locally the same geometric and topological structures.
Meanwhile, the architecture of the ResNet block in the backbone is described in Figure 2. Similar to the original ResNet, the input goes through two separate pathways. One pathway (_right_ path in Figure 2) is comprised of two convolution layers with ReLU activation [42], stacked on top of each other, in order to develop a feature vector characterizing patterns of the input vector field in each local neighborhood on the pixel grid. The other pathway (_left_ path in Figure 2) is either the skip connection, in which the input values are directly copied without any transformation, or a 1\(\times\)1 convolution, in which the input feature vector at each grid location undergoes a dimensionality reduction. The outputs of the two pathways are then added together to produce the overall output of the ResNet block. Note that we do not employ batch normalization, a technique used in the original ResNet model to avoid the vanishing gradient problem. Empirically, we found that batch normalization overly regularized the network, causing severe underfitting of the spin torques and deteriorating the overall prediction performance. Moreover, since the input spin vectors are already normalized to unit length, batch normalization is not necessary.
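To make this concrete, the following PyTorch sketch assembles such ResNet blocks into a purely convolutional model. Only the 512 backbone features and the two final convolution layers are specified above; the intermediate channel widths (64/128/256, and the 64 channels between the last two convolutions) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResNetBlock(nn.Module):
    """Residual block sketch: two 5x5 convolution + ReLU layers (no batch
    normalization); the skip path is the identity when channel counts
    match and a 1x1 convolution otherwise."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum of the two pathways gives the block output.
        return self.body(x) + self.skip(x)

# Purely convolutional model: 3 spin components in, 3 torque components out.
model = nn.Sequential(
    ResNetBlock(3, 64), ResNetBlock(64, 128),
    ResNetBlock(128, 256), ResNetBlock(256, 512),  # backbone -> 512 features
    nn.Conv2d(512, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=5, padding=2),    # two extra conv layers
)
torques = model(torch.randn(1, 3, 48, 48))         # output shape (1, 3, 48, 48)
```

Because no layer ever references the lattice size, the same `model` object accepts, say, a 96\(\times\)96 input without modification, which is the scalability property exploited later.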
### Training
Our training and testing sets are drawn from 60 independent spin dynamics simulations performed on a 48\(\times\)48 triangular lattice. The following parameters are used for the s-d Hamiltonian Eq. (4): the nearest-neighbor hopping is set to \(t_{1}=1\), which also provides the reference unit of energy. A third-neighbor hopping \(t_{3}=-0.85\) is included in order to stabilize the triple-\(Q\) magnetic order that underlies the skyrmion lattice (SkL) phase [44]. The electron-spin coupling constant is set at \(J=1\). An electron chemical potential \(\mu=-3.5\) is used, and an external magnetic field \(H_{\text{ext}}=0.005\) is included to explicitly break the time-reversal symmetry and induce the SkL [44]. As discussed in Section II, the exchange fields \(\mathbf{H}_{i}\) acting on the spins are obtained by solving the electron Hamiltonian. Specifically, using Eq. (3) and the s-d Hamiltonian Eq. (4), the exchange fields are given by
\[\mathbf{H}_{i}=J\sum_{\alpha,\beta=\uparrow,\downarrow}\boldsymbol{\sigma}_{ \alpha\beta}\,\rho_{i\beta,i\alpha}, \tag{10}\]
where \(\rho_{i\alpha,j\beta}:=\langle c_{j\beta}^{\dagger}c_{i\alpha}\rangle\) is the electron correlation function, or single-electron density matrix.
Figure 2: Diagram of a ResNet block. The input to a ResNet block goes through two different pathways: the skip connection, where no operation is performed if the input and output have the same number of channels, or a \(1\times 1\) convolution layer otherwise; and the normal connection, where two \(5\times 5\) convolution-ReLU blocks are stacked on top of each other. The results of the two pathways are then added together to form the output.
The kernel polynomial method (KPM) [45; 46] was used to compute the electron density matrix for generating the training dataset. The KPM is more efficient compared with exact diagonalization, yet is considered numerically exact when a large number of Chebyshev polynomials and random vectors are used.
The time-scale of the precession dynamics of the LLG equation (1) is given by \(t_{0}=(\gamma JS)^{-1}\), where \(\gamma\) is the gyromagnetic ratio, \(J\) is the electron-spin coupling, and \(S\) is the length of the localized magnetic moments. The damping term introduces another time-scale \(t_{\text{damping}}=t_{0}/\alpha\) which characterizes the rate of energy dissipation, where \(\alpha\) is a dimensionless coefficient. In the following, the simulation time is measured in terms of \(t_{0}\), and a damping coefficient \(\alpha=0.05\) is used.
The initial conditions of the simulations are divided into two types. The first is a perturbed SkL, where a periodic array of skyrmions is baked into the initial condition but random noise is added to the spins. The other type is random initialization, where the spins are generated completely at random. For each type of initial condition, a total of 30 simulations were generated, each comprising 5,000 time steps. For a given initial condition, a semi-implicit second-order scheme [47], which preserves the spin length, was employed to integrate the LLG equation (1) with a time step \(\Delta t=0.1\).
The spins and their corresponding exchange fields at all lattice sites were collected every 10 steps of the simulation. We focused on learning the electron-induced exchange field, so the external constant field \(H_{\text{ext}}=0.005\) in the \(z\) direction was removed. The field \(\mathbf{H}_{i}\) was then decomposed into components parallel and perpendicular to the spin, and only the perpendicular component, which is equivalent to the torque \(\mathbf{T}_{i}\), was kept: the parallel component has no effect on the evolution of the spin configuration, yet is around two orders of magnitude larger than the perpendicular component. The perpendicular fields were then normalized to have a mean magnitude of 1 over the entire dataset. Then 70% of the entire dataset was used for training, while the rest was set aside for validation. The split of the dataset is stratified so that the training and testing sets have the same proportion of the two types of simulations.
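A minimal sketch of this preprocessing step, assuming the fields and spins are stored as arrays of shape (..., 3) with unit-length spins:

```python
import numpy as np

def perpendicular_torque(H: np.ndarray, S: np.ndarray) -> np.ndarray:
    """Keep only the exchange-field component perpendicular to the spin,
    T_i = H_i - (H_i . S_i) S_i, assuming unit-length spins S."""
    return H - (H * S).sum(axis=-1, keepdims=True) * S

def normalize_torques(T: np.ndarray) -> np.ndarray:
    """Rescale so the mean torque magnitude over the dataset equals 1."""
    return T / np.linalg.norm(T, axis=-1).mean()
```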
The triangular-lattice s-d Hamiltonian in Eq. (4) is invariant under two independent symmetry groups: the SO(3)/SU(2) rotation of spins and the D\({}_{6}\) point group of the triangular lattice. Here the rotation symmetry refers to the global rotation of local magnetic moments \(\mathbf{S}_{i}\rightarrow\mathcal{R}\cdot\mathbf{S}_{i}\) (treated as classical vectors), and a simultaneous unitary transformation of the electron spinor \(\hat{c}_{i\alpha}\rightarrow\hat{U}_{\alpha\beta}\hat{c}_{i\beta}\), where \(\mathcal{R}\) is an orthogonal \(3\times 3\) matrix and \(\hat{U}=\hat{U}(\mathcal{R})\) is the corresponding \(2\times 2\) unitary rotation operator. The ML model, corresponding to an effective force-field model by integrating out electrons, needs to preserve the SO(3) rotation symmetry of spin, which means under uniform rotation \(\mathcal{R}\) of all spins in the neighborhood, the ML predicted spin torques should undergo the same rotation transformation \(\mathbf{T}_{i}\rightarrow\mathcal{R}\cdot\mathbf{T}_{i}\). On the other hand, under a symmetry operation \(g\) of the D\({}_{6}\) point group centered at some lattice point, both spins and torques transform under the D\({}_{6}\) point group as: \(\mathbf{S}_{i}\rightarrow\mathbf{S}_{j}\) and \(\mathbf{T}_{i}\rightarrow\mathbf{T}_{j}\), where the lattice points \(\mathbf{r}_{j}=\mathcal{R}(g)\cdot\mathbf{r}_{i}\), and \(\mathcal{R}(g)\) denotes the \(3\times 3\) matrix corresponding to \(g\).
To incorporate both symmetries into the CNN model, we introduce data augmentation during the training phase. Specifically, for each input spin configuration and the corresponding torque field, a random SO(3) rotation is applied to the spins \(\{\mathbf{S}_{i}\}\) and a random D\({}_{6}\) symmetry operation is applied to the lattice points. The same symmetry operations, both in spin space and on the real-space lattice, are also applied to the torque fields \(\{\mathbf{T}_{i}\}\). These additional symmetry-generated input/output configurations are included, along with the original ones, in the dataset for supervised training. We note that, contrary to previous ML models where the symmetry is explicitly included through descriptors, the symmetry of the itinerant electron Hamiltonian is here enforced on the ML vector model statistically, in a data-driven manner.
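A minimal sketch of the spin-space part of this augmentation is given below; the D\({}_{6}\) lattice operation amounts to an additional permutation of grid indices and is omitted for brevity. The QR-based sampler is a standard way to draw approximately uniform rotations and is our illustrative choice, not necessarily the scheme used in training.

```python
import torch

def random_so3() -> torch.Tensor:
    """Draw a random 3x3 rotation via QR decomposition of a Gaussian."""
    q, r = torch.linalg.qr(torch.randn(3, 3))
    q = q * torch.sign(torch.diagonal(r))  # fix the sign ambiguity of QR
    if torch.det(q) < 0:                   # enforce det = +1 (proper rotation)
        q[:, 0] = -q[:, 0]
    return q

def augment(spins: torch.Tensor, torques: torch.Tensor):
    """Apply the same global spin-space rotation to a (3, H, W) spin field
    and its torque field, as required by SO(3) equivariance."""
    rot = random_so3()
    return (torch.einsum("ab,bhw->ahw", rot, spins),
            torch.einsum("ab,bhw->ahw", rot, torques))
```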
Since the torque magnitudes in the dataset can differ by as much as an order of magnitude, we find that the usual mean absolute error or mean squared error loss functions do not perform well. Instead, we adopt a mean percentage absolute loss:
\[L=\frac{1}{N}\sum_{i=1}^{N}\frac{|T_{i}^{x}-\hat{T}_{i}^{x}|+|T_{i}^{y}-\hat{ T}_{i}^{y}|+|T_{i}^{z}-\hat{T}_{i}^{z}|}{|\hat{\mathbf{T}}_{i}|}, \tag{11}\]
where \(N\) is the total number of lattice sites within each batch, summed across all lattices; \(\hat{\mathbf{T}}_{i}\) is the ground-truth field vector at the \(i\)-th lattice site; and \(\mathbf{T}_{i}=(T_{i}^{x},T_{i}^{y},T_{i}^{z})\) is the predicted field vector with its three components.
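For concreteness, a minimal PyTorch sketch of this loss, assuming the predicted and ground-truth torques are arranged as (N, 3) tensors with the lattice sites flattened into the batch dimension (an assumed layout for illustration):

```python
import torch

def mean_percentage_absolute_loss(pred: torch.Tensor,
                                  target: torch.Tensor) -> torch.Tensor:
    """Eq. (11): per-site sum of absolute component errors, divided by
    the ground-truth torque magnitude, averaged over all sites."""
    numerator = (pred - target).abs().sum(dim=-1)
    denominator = target.norm(dim=-1)
    return (numerator / denominator).mean()
```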
An Adam [48] optimizer with an initial learning rate of \(10^{-3}\) was used for training. The learning rate was later reduced to \(10^{-6}\) upon a plateau of the loss on the testing set. We did not use any regularization methods, such as dropout or weight decay, and there is no evidence of overfitting when comparing training and testing loss values. The model and its training process are implemented in PyTorch [49], and training was performed on one Nvidia A100 for roughly 72 hours.
## IV Results
Here we present benchmarks of the CNN models by comparing spin torque predictions and small-scale dynamical simulations against exact methods. We further demonstrate the restoration and stability of a skyrmion lattice in large-scale LLG simulations, highlighting the scalability and transferability of our ML approach.
### Benchmark of Spin Torque Prediction
The spin torques \(\mathbf{T}_{i}\) predicted by the trained CNN model are compared against the ground truth in Figure 3, using configurations from the test dataset. Two types of testing data are used for this benchmark: LLG simulations of an initially perturbed SkL state, and LLG simulations starting from random spins. In both cases, the predicted torque components closely follow the ground truth with roughly equal variance across the entire range. Note that the values of the torque components in the random-spin case span a range nearly twice as large as that of the SkL case. As can be expected, the ML model performs better in the case of the SkL simulations, since the spin configurations there correspond to a rather small and special subset of the whole state space. Yet, a fairly good agreement is obtained even for the testing dataset with completely random initial spins.
We further examine the magnitude of the predicted torques versus the ground truth, as well as the angle between the predicted and ground-truth field vectors, in Figure 4. Again, an overall satisfactory agreement is obtained, with the majority of the predictions close to, or symmetrically distributed around, the ground-truth values. Note that due to the distortion of the logarithmic scale, the same deviation from the ground truth at large and small magnitudes will look asymmetric and "biased" towards smaller values. Therefore, two red dotted lines with constant deviations of \(10^{-2}\) (outer) and \(10^{-3}\) (inner) have been added in Figure 4(a). Even at large magnitudes, where the error of the ML model is largest, the difference in field vector magnitude is almost guaranteed to be smaller than \(10^{-2}\). At small magnitudes, the difference in field vector magnitude is most likely smaller than \(4\times 10^{-3}\) and is typically around \(10^{-3}\). We did not notice any systematic bias in our ML prediction results. The ML-predicted vectors are also very closely aligned with the ground-truth field vectors. As shown in Figure 4(b), most vectors have an angle smaller than \(10^{\circ}\) from their ground-truth counterparts, and it is almost impossible to find a predicted vector deviating by more than 30 degrees.

Figure 3: Predicted spin torque components \((T_{x},T_{y},T_{z})\) versus ground-truth components from the testing set. The red-dotted diagonal lines indicate perfect prediction. The top row shows prediction results based on spin configurations obtained from LLG simulations of a perturbed SkL. Results from LLG simulations with random initial states are shown in the bottom row.

Figure 4: (a) Comparison of the predicted field vector magnitude against the ground truth. The red line indicates prediction equal to ground truth; the outer red dotted line represents a \(10^{-2}\) deviation from the ground-truth magnitude, while the inner one represents a deviation of \(10^{-3}\). The color denotes the log density. (b) Angular difference between the ground-truth field vector and the predicted field vector.
### Dynamical Benchmark
In addition to accurate predictions of spin torques, another important benchmark is whether the trained ML model can also faithfully capture the dynamical evolution of the itinerant spin model. To this end, we integrated the trained CNN model into LLG dynamics simulations and compared the results with LLG simulations based on the KPM [45; 46]. We consider simulations of a thermal quench process, where an initially random magnet is suddenly quenched to nearly zero temperature at time \(t=0\). While our trained CNN model produces fairly accurate spin torques, small prediction errors still persist, as discussed in the previous section. Statistically, these prediction errors resemble the stochastic noise \(\mathbf{\tau}_{i}(t)\) in the LLG equation (1): they act as site-dependent fluctuating random torques, much like the thermal forces in Langevin dynamics, which are physically due to thermal fluctuations through coupling to a thermal bath at a fixed temperature. As a result, while the temperature of the ML-LLG simulations was set to exactly zero, a very low yet nonzero temperature \(T=0.001\) was introduced in the exact LLG dynamics to mimic the prediction error.
The model parameters of the s-d Hamiltonian are chosen to stabilize a spontaneous SkL ground state. Importantly, the emergence of skyrmion crystal not only breaks the spin-rotation symmetry but also breaks the lattice translational symmetry. The periods of this spatial modulation, _i.e._, the lattice constant of the skyrmion lattice, are determined by the underlying electron Fermi surface. Indeed, while an SkL state can be intuitively thought of as a periodic array of particle-like spin-textures, physically SkL phases often result from an instability caused by quasi-nesting of the electron Fermi surface that gives rise to a multiple-\(Q\) magnetic order [44; 50; 51].
In our case, the geometry of the Fermi surface at the chemical potential \(\mu=-3.5\) allows significant segments to be connected by three wave vectors \(\mathbf{Q}_{1}=(\pi/3a,0)\) and \(\mathbf{Q}_{2,3}=\mathcal{R}_{\pm 2\pi/3}\cdot\mathbf{Q}_{1}\), related to each other by symmetry operations of the \(\mathrm{D}_{6}\) group. Here \(a\) is the lattice constant of the underlying triangular lattice. This means that maximum energy gain through electron-spin coupling is realized by spin helical orders with one of the above three wave vectors. Further analysis shows that the electron energy is further lowered by the simultaneous ordering of all three wave vectors, giving rise to an emergent triangular lattice of skyrmions.
The relaxation of the magnet after the thermal quench is dominated by the formation of the triangular SkL. A perfect SkL is distinguished by six Bragg peaks at \(\mathbf{q}=\pm\mathbf{Q}_{1}\), \(\pm\mathbf{Q}_{2}\), and \(\pm\mathbf{Q}_{3}\) in momentum space. Yet, since the spin interactions are local in nature, the crystallization of skyrmions is inherently an incoherent process. Small crystallites of skyrmions are nucleated at random, separated by large domains of disjointed structures. To quantitatively characterize this crystallization process, we compute the time-dependent spin structure factor, defined as the square of the Fourier transform of the spin field
\[\mathcal{S}(\mathbf{q},t)=\bigg{\langle}\bigg{|}\frac{1}{N}\sum_{i=1}^{N} \mathbf{S}_{i}(t)\exp(i\mathbf{q}\cdot\mathbf{r}_{i})\bigg{|}^{2}\bigg{\rangle}, \tag{12}\]
where the bracket \(\langle\cdots\rangle\) indicates averaging over the thermal ensemble as well as over initial conditions. The structure factor is itself the Fourier transform of the real-space spin-spin correlation function and can be directly measured in neutron scattering experiments. The spin structure factors at various times after the quench, obtained from LLG simulations based on both the KPM and the ML model, are shown in Figure 5. Due to the stochastic nature of such simulations, the results are obtained by averaging 30 independent runs. Overall, the results from LLG simulations with the trained CNN model agree very well with those based on the numerically exact KPM.

Figure 5: A comparison of spin structure factors obtained by averaging 30 independent LLG simulations based on the KPM (left) and the ML model (right). The same set of random initial conditions on a \(48\times 48\) triangular lattice was used in both simulations. The red dashed lines indicate the first Brillouin zone of momentum space.
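In practice, \(\mathcal{S}(\mathbf{q},t)\) can be evaluated efficiently with a fast Fourier transform. A minimal sketch for a single snapshot on the flattened square grid follows; mapping back to the proper triangular-lattice reciprocal vectors requires an additional change of basis, which we omit here.

```python
import torch

def structure_factor(spins: torch.Tensor) -> torch.Tensor:
    """Sketch of Eq. (12) for one (3, H, W) spin snapshot; averaging over
    independent runs and the thermal ensemble is left to the caller."""
    n_sites = spins.shape[-2] * spins.shape[-1]
    ft = torch.fft.fft2(spins) / n_sites   # (1/N) sum_i S_i exp(i q . r_i)
    return (ft.abs() ** 2).sum(dim=0)      # |.|^2 summed over x, y, z
```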
Both simulations show that a ring-like structure quickly emerges in the structure factor after the quench. The radius of the ring is close to the length of the three nesting wave vectors \(\mathbf{Q}_{\eta}\), indicating the initial formation of skyrmions. As the system relaxes towards equilibrium, the ring-like structure becomes sharper. Moreover, the spectral weight starts to accumulate at the six spots corresponding to the \(\pm\mathbf{Q}_{\eta}\) wave vectors. Physically, the emergence of the six broad segments corresponds to the growth of domains of the skyrmion lattice. The size of these intermediate skyrmion crystallites can be inferred from the width of the six spots. However, both simulations found that even at a late stage of the equilibration, the structure factor exhibits only six diffusive peaks at the nesting wave vectors, instead of the sharp Bragg peaks expected for a perfect SkL. The broad peaks at a late stage of the phase ordering thus indicate an arrested growth of SkL domains in real space. An example of the real-space spin configuration at \(t=10^{4}\) after the quench is shown in Fig. 6. The snapshot shows rather small triangular clusters of skyrmions coexisting with stripe-like structures of different orientations. These stripes, or helical spin structures, correspond to the single-\(Q\) magnetic orders, which are metastable states of the s-d model.
This intriguing freezing phenomenon can be partly attributed to the frustrated electron-mediated spin interactions. Another important source is the degeneracy between skyrmions of opposite vorticity, i.e., circulation of the in-plane spins. The two opposite circulations correspond to topological winding numbers \(w=\pm 1\) for the skyrmions. As discussed above, the spin-rotation symmetry is decoupled from the lattice in the s-d Hamiltonian (4), which provides a minimal model for centrosymmetric itinerant magnets without spin-orbit coupling. As a result, skyrmions with clockwise circulation are energetically degenerate with counter-clockwise ones. This also means that SkL domains of the two opposite circulations are nucleated with roughly equal probability after the thermal quench. The subsequent annihilation of skyrmions with opposite vorticity thus prohibits the growth of a large coherent SkL.
### Scalability and Large-scale Simulation
As discussed in Sec. III.2, thanks to the locality property and the fixed-size kernels, the CNN model can be directly scaled to larger lattice systems without retraining, thus enabling large-scale dynamical simulations that are beyond conventional approaches. Here we demonstrate the scalability of the CNN spin-torque model by applying it to LLG simulations of large-scale SkL phases. Specifically, we perform LLG simulations of a perturbed SkL state on a \(96\times 96\) lattice using a CNN model trained from simulations of a \(48\times 48\) lattice. As discussed above, the triangular skyrmion lattice, characterized by the three nesting wave vectors, can be viewed as a superposition of three helical spin orders. Explicitly, a perfect SkL can be approximated by the following ansatz [44; 51]
\[\begin{split}\mathbf{S}_{i}&\sim\left(\cos\mathcal{Q}_{1i}-\frac{1}{2}\cos\mathcal{Q}_{2i}-\frac{1}{2}\cos\mathcal{Q}_{3i}\right)\hat{\mathbf{e}}_{1}\\ &\quad+\left(\frac{\sqrt{3}}{2}\cos\mathcal{Q}_{2i}-\frac{\sqrt{3}}{2}\cos\mathcal{Q}_{3i}\right)\hat{\mathbf{e}}_{2}\\ &\quad+\left[A\left(\sin\mathcal{Q}_{1i}^{\prime}+\sin\mathcal{Q}_{2i}^{\prime}+\sin\mathcal{Q}_{3i}^{\prime}\right)+M\right]\hat{\mathbf{e}}_{3},\end{split} \tag{13}\]
where \(\hat{\mathbf{e}}_{1,2,3}\) are three orthogonal unit vectors, \(\mathcal{Q}_{\eta i}=\mathbf{Q}_{\eta}\cdot\mathbf{r}_{i}\) and \(\mathcal{Q}_{\eta i}^{\prime}=\mathcal{Q}_{\eta i}+\phi\) are the phase factors of the three helical orders, and \(\phi\), \(A\), and \(M\) are fitting parameters. To demonstrate that the ML model can indeed stabilize the SkL, which is the ground state of our chosen s-d Hamiltonian, we initialize the system with a perturbed array of skyrmions, as shown in Figure 7(a). The randomness in the initial state is introduced by allowing randomly generated site-dependent parameters \(\phi_{i}\), \(A_{i}\), and \(M_{i}\) in the SkL ansatz (13). Contrary to the completely random initial spins in the previous dynamical benchmark, this initial state preserves a coherent structure of skyrmion winding numbers. As these topological numbers are conserved, the relaxation of the system is free of random annihilation of skyrmions. As shown in Figure 7, our ML-based LLG simulations indeed find that a nearly perfect SkL is restored and stabilized over a long period of simulation time.
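For illustration, a NumPy sketch of this ansatz is given below, taking \(\hat{\mathbf{e}}_{1,2,3}\) as the Cartesian axes and normalizing the spins to unit length (both illustrative assumptions); the site-dependent randomization of \(\phi\), \(A\), and \(M\) used for the perturbed initial state is a straightforward extension.

```python
import numpy as np

def skl_ansatz(L: int, q: float = np.pi / 3, A: float = 1.0,
               M: float = 0.0, phi: float = 0.0) -> np.ndarray:
    """Sketch of the SkL ansatz, Eq. (13), on an L x L triangular lattice.
    |Q_1| = q (in units of 1/a); Q_2, Q_3 follow by 120-degree rotations.
    Returns unit spins of shape (L, L, 3)."""
    a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
    n1, n2 = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    r = n1[..., None] * a1 + n2[..., None] * a2          # site positions r_i
    angles = (0.0, 2 * np.pi / 3, -2 * np.pi / 3)
    Q1, Q2, Q3 = (r @ (q * np.array([np.cos(t), np.sin(t)])) for t in angles)
    S = np.stack([
        np.cos(Q1) - 0.5 * np.cos(Q2) - 0.5 * np.cos(Q3),
        (np.sqrt(3) / 2) * (np.cos(Q2) - np.cos(Q3)),
        A * (np.sin(Q1 + phi) + np.sin(Q2 + phi) + np.sin(Q3 + phi)) + M,
    ], axis=-1)
    norms = np.linalg.norm(S, axis=-1, keepdims=True)
    return S / np.maximum(norms, 1e-12)                  # normalize safely
```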
Figure 6: Snapshot of the spin configuration at the end of the LLG simulation with random initial conditions on a 48\(\times\)48 triangular lattice.

We further investigate the scalability in the time domain by running our ML-based LLG simulation long past the duration of the training simulations. Figure 8 shows a roughly constant structure factor long beyond the duration of the simulation snapshots used in training. While we notice a large decrease in the structure factor between times 15,000 and 23,000, it quickly rebounds to its original stable value (\(\mathcal{S}(\mathbf{q},t)\approx 305\)). These temporal fluctuations can be ascribed to the prediction errors of the ML model. Yet, as discussed above, such errors play a role similar to the stochastic noise in Langevin-type dynamics simulations. Our results thus demonstrate the robustness of the SkL under small random perturbations. Importantly, this further benchmark highlights the scalability of our ML models not only in the spatial domain (larger lattices) but also in the temporal domain (much longer simulation times).
### Symmetry Requirements
In order to incorporate the underlying symmetries of a physical system into an ML model, one needs to introduce appropriate biases (prior knowledge) into the statistical learning process. Two of the major approaches to this end are: i) _data augmentation_ based on the symmetry group of the system; ii-a) constructing _symmetry-invariant_ descriptors, or ii-b) constructing _equivariant_ neural network architectures _w.r.t._ the symmetry group. These two types of approaches correspond to introducing _observational_ and _inductive_ biases, respectively, in the context of the physics-informed machine learning literature (see, _e.g._, [52; 53; 54; 55; 56; 57]). As discussed in Section III.3, the _local_ symmetries of our system, _i.e._, the spin-space and the real-space lattice symmetries, _a.k.a._ the internal (gauge) and the spacetime symmetries [58], consist of \(G\)-valued fields over the underlying lattice, where \(G=\mathrm{SO}(3)\times\mathrm{D}_{6}\). In the present work, we adopted the data augmentation approach as the means to enforce the symmetry constraint, for the reasons justified as follows.

Figure 7: CNN-based LLG simulation on a \(96\times 96\) lattice showing the restoration of a perturbed SkL. The CNN model was originally trained on a \(48\times 48\) lattice. The initial spin configuration is given by the SkL ansatz (13) with additional site-dependent random phases and amplitudes of \(S_{z}\).
First, we briefly summarize the theoretical justification of how data augmentation during the training phase injects the above-mentioned symmetries into the underlying supervised learning process (see [54; 59; 60] for details). To avoid cumbersome notation, we denote a pair of a spin configuration and its corresponding torque field \((\mathbf{S},\mathbf{T})=(\left\{\mathbf{S}_{i}\right\},\left\{\mathbf{T}_{i}\right\})\) by \(\mathbf{F}\). Our training data \(\mathbf{F}_{1},\cdots,\mathbf{F}_{n}\) consist of independent identically-distributed (i.i.d.) samples from a probability distribution \(\mathbb{P}\) over the space of all spin-torque fields. It is of fundamental importance that the probability distribution \(\mathbb{P}\) remains _invariant_ under the _action_ of each local symmetry \(g\in\mathcal{G}\,\), where \(\mathcal{G}\) denotes the space of all local symmetries of the system2. Hence, the data augmentation process can be considered as enriching our set of samples from the probability distribution \(\mathbb{P}\), which we aim to learn, by adding the transformed spin-torque fields \(g\cdot\mathbf{F}\), \(g\in\mathcal{G}\).
Footnote 2: The action of a local symmetry \(g\in\mathcal{G}\) over a spin \(\mathbf{S}\) and a torque \(\mathbf{T}\) field is the induced transformation by \(g\,\). We denote it by \(g\cdot\mathbf{F}:=(g\cdot\mathbf{S},\,g\cdot\mathbf{T})\).
During the training procedure, at each step \(t\), a mini-batch \(B_{t}\) of spin-torque \((\mathbf{S},\mathbf{T})\) samples of size \(|B_{t}|\) is chosen, and a random local symmetry \(g_{t,b}\in\mathcal{G}\) is applied to each spin \(\mathbf{S}_{b}\) and torque \(\mathbf{T}_{b}\) field, \(b\in B_{t}\). Then, according to the stochastic gradient descent (SGD) algorithm, the parameters \(\theta\) of the CNN model \(f_{\theta}\) get updated as
\[\theta_{t+1}=\theta_{t}-\frac{\eta}{|B_{t}|}\sum_{b\in B_{t}}\nabla_{\theta}L \left(f_{\theta}(g_{t,b}\cdot\mathbf{S}_{b})\,,\,(g_{t,b}\cdot\hat{\mathbf{T }}_{b})\right)\,, \tag{14}\]
where \(L\) denotes the loss function given by Eq. (11), and \(\eta\) is the learning rate. In other words, the augmented SGD can be considered as minimization of the _empirical risk_ associated with the following _augmented loss function_
\[\int_{\mathcal{G}}L\left(f_{\theta}(g\cdot\mathbf{S})\,,\,(g\cdot\hat{ \mathbf{T}})\right)\,d\mathbb{Q}(g)\,, \tag{15}\]
in which one takes an average along the whole orbit of the group action w.r.t. a probability distribution \(\mathbb{Q}\) over \(\mathcal{G}\). It can be proved that data augmentation based on the underlying symmetry group reduces the variance of general estimators and improves their generalizability [54].
The above theoretical justification can further be validated through empirical data. Figure 9 shows the typical prediction error (blue), i.e., the difference between the predicted and ground-truth torques, and the equivariance error (orange), defined as \(f_{\theta}(g\cdot\mathbf{S})-g\cdot f_{\theta}(\mathbf{S})\). As can be seen in the figure, the equivariance error is smaller than the typical prediction error, indicating that in practice the data augmentation employed is capable of preserving the underlying symmetry of the physical system to a satisfactory degree.
### Locality Principle and Receptive Fields
To attest to the locality principle, we analyze the receptive field of our CNN model in this section.
Figure 8: Time evolution of the structure factor in a simulation much longer than those used for training, starting from a perturbed skyrmion initial condition. The dashed black line indicates the duration of the training simulations; our ML-based LLG simulation keeps the structure factor roughly constant for more than 5 times the duration of the training simulation.
Figure 9: Distribution of the equivariance error \(err_{eq}:=f_{\mathrm{CNN}}(\left\{R\mathbf{S}_{i}\right\})-Rf_{\mathrm{CNN}}(\left\{\mathbf{S}_{i}\right\})\), where \(R\) is an arbitrary rotation. The overall prediction error ('predicted torque' minus 'ground truth torque') on the test dataset is superimposed for comparison. The equivariance error is sufficiently smaller than the overall prediction error, implying that the model preserves the underlying symmetry of the physical system reasonably well.
As discussed in Section III.1, the receptive field of a convolution layer \(f_{m}\) is defined to be the support \(\text{supp}(h_{m})\) of the corresponding convolution kernel \(h_{m}\), _i.e._, the region where the function values of \(h_{m}\) are nonzero tensors. The receptive field of the entire CNN model is computed as the Minkowski sum of the receptive fields of the individual convolution layers, \(\text{RF}=\text{supp}(h_{1})\oplus\cdots\oplus\text{supp}(h_{L})\). For our model, in which there are 10 layers in depth, each comprised of \(5\times 5\) convolution kernels of stride 1, the size of the receptive field is calculated to be 41. This implies that, in principle, the spin directions of the 41-neighborhood can influence the prediction of the torque at lattice position \(i\).
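As a quick arithmetic check, for a stack of \(L\) stride-1 convolution layers with kernel sizes \(k_{\ell}\), the receptive field along each lattice direction is

\[\text{RF}=1+\sum_{\ell=1}^{L}(k_{\ell}-1)=1+10\times(5-1)=41.\]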
However, the naive computation of the receptive field size may be misleading, because the kernel sizes of the convolution layers only indicate the theoretical maximum of the receptive field; the actual region of nonzero values can be much smaller. To this end, we used the approach of Luo _et al._[61] to compute the _effective_ receptive field size, in which function values are practically nonzero. Figure 10 shows the result of such a calculation performed on the trained CNN model. The red hexagonal line separates the region where function values are practically nonzero (inside) from the region where they are practically zero (outside). The grayscale values inside the hexagon indicate different levels of influence of neighboring spins in computing the torque vector. As can be seen from the figure, at the lattice location \(i\), which is the center of the red hexagon, the weighting factor is the largest, implying that \(\mathbf{T}_{i}\) is predominantly determined by \(\mathbf{S}_{i}\). The 1-neighborhood \(\mathcal{N}_{1}(i)\), _i.e._, the immediate neighbors of lattice location \(i\), also has bright intensity values, implying that the relative configuration of the spin direction \(\mathbf{S}_{i}\) with respect to its neighboring spins \(\mathbf{S}_{j}\), \(j\in\mathcal{N}_{1}(i)\), also has a significant influence on the output torque \(\mathbf{T}_{i}\). Similarly, the spin directions of the 3-neighborhood appear to have a strong influence on the torque prediction, while a small influence can be detected all the way out to the 6-neighborhood.
This result is consistent with previous ML spin-torque models based on symmetry-invariant descriptors [33; 34], which show that the spin dynamics of similar s-d models can be nicely captured by BP-type models based on fully connected NNs with input from a neighborhood of up to \(r_{c}\sim 5\) lattice constants. Physically, as discussed above, the finite size of the effective receptive field is due to the local nature of the spin torques. However, the range of locality can only be indirectly determined from exact calculations. In practice, the cutoff radius is treated as an ad hoc parameter in BP-type ML models, or is determined through trial and error. It is thus worth noting that the CNN model offers a systematic and rigorous method to determine this important physical attribute of electronic models.
## V Conclusion and Outlook
In this paper, we presented a CNN model to predict spin torques directly from input spin configuration for large-scale LLG dynamics simulations of itinerant magnets. Our CNN model is purely convolutional without any fully connected (dense) layers, and thus presents a distinct advantage of built-in locality. Central to each CNN layer is the convolution with a kernel or filter, which can be viewed as a Green's function representing finite responses to a local source. As each kernel is characterized by a finite set of trainable parameters, the CNN model can be used for dynamical simulations on larger system sizes without rebuilding or retraining the neural network. We demonstrated our ML approach on a triangular-lattice s-d model which exhibits a skyrmion crystal in its ground state. Using the ML-predicted torques in the LLG dynamics simulations, we showed that the trained CNN model can successfully reproduce the relaxation of the skyrmion phase of the itinerant spin models. We further demonstrated the scalability and transferability of our approach by showing that large-scale LLG simulations based on our CNN model are able to stabilize a perturbed skyrmion lattice and maintain it for a long period of time.
Figure 10: The effective receptive fields (ERFs) of the ML model. Since both the input and output tensors have 3 components at each lattice site, there are in total 9 ERFs, corresponding to the 9 partial derivatives of the 3 output torque components with respect to the 3 input spin components. The sum of the absolute values of these derivatives is presented in this figure, with darker pixels indicating smaller derivative values. The red line roughly traces the region of non-zero ERF values.

Contrary to ML force-field models based on the Behler-Parrinello scheme, our CNN model directly predicts torques, which are the spin analogue of atomic forces. In BP-type approaches, ML models, either Gaussian process regression or fully connected neural nets, are built to predict a local energy, which cannot be directly compared with exact calculations. The forces are obtained from derivatives of the total energy, which is the sum of all local energies. The introduction of a local energy takes advantage of the locality property and also facilitates the incorporation of symmetry into the ML models. Yet, the fact that forces are computed indirectly from derivatives of the energy also restricts BP-type models to the representation of conservative forces and quasi-equilibrium electron systems. On the other hand, our CNN approach can describe both conservative and non-conservative spin torques. This capability is particularly important for ML modeling of out-of-equilibrium driven systems, where the electron-mediated torques are non-conservative. A representative example is the spin-transfer torque, which plays an important role in spintronics applications.
For future work, we are currently looking into ways to enforce constraints due to either symmetry or conservation laws more strictly and rigorously. To this end, previous computer vision literature on equivariant CNNs (see _e.g._, Geiger and Smidt [62]) may shed light on how to constrain CNN layers to preserve SO(3) and D\({}_{6}\) symmetries. Moreover, the present work is limited to approximating torques using spin directions at each time step and does not provide a direct solution to the LLG equation in Eq. 1. In a recent body of literature, however, there have been attempts to solve governing partial differential equations (PDE) of physics directly using so-called physics-aware deep neural networks (see _e.g._, Nguyen _et al._[55]). Using these physics-aware CNN methods, we expect to attain faster and more accurate approximations of the spin dynamics, which is going to be another meaningful direction of research.
###### Acknowledgements.
SZ and GWC acknowledge the support from the US Department of Energy Basic Energy Sciences under Contract No. DE-SC0020330. SSB and PCHN were supported by the National Science Foundation under Grant No. DMREF-2203580. XC acknowledges the support from the Jefferson Scholars Foundation and the benefactor of the Edward P. Owens Jefferson Fellow.
|
2303.06561 | Phase Diagram of Initial Condensation for Two-layer Neural Networks | The phenomenon of distinct behaviors exhibited by neural networks under varying scales of initialization remains an enigma in deep learning research. In this paper, based on the earlier work by Luo et al.~\cite{luo2021phase}, we present a phase diagram of initial condensation for two-layer neural networks. Condensation is a phenomenon wherein the weight vectors of neural networks concentrate on isolated orientations during the training process, and it is a feature of the non-linear learning process that enables neural networks to possess better generalization abilities. Our phase diagram serves to provide a comprehensive understanding of the dynamical regimes of neural networks and their dependence on the choice of hyperparameters related to initialization. Furthermore, we demonstrate in detail the underlying mechanisms by which small initialization leads to condensation at the initial training stage. | Zhengan Chen, Yuqing Li, Tao Luo, Zhangchen Zhou, Zhi-Qin John Xu | 2023-03-12T03:55:38Z | http://arxiv.org/abs/2303.06561v2 | # Phase Diagram of Initial Condensation for Two-layer Neural Networks
###### Abstract
The phenomenon of distinct behaviors exhibited by neural networks under varying scales of initialization remains an enigma in deep learning research. In this paper, based on the earlier work by Luo et al. [16], we present a phase diagram of initial condensation for two-layer neural networks. Condensation is a phenomenon wherein the weight vectors of neural networks concentrate on isolated orientations
during the training process, and it is a feature of the non-linear learning process that enables neural networks to possess better generalization abilities. Our phase diagram serves to provide a comprehensive understanding of the dynamical regimes of neural networks and their dependence on the choice of hyperparameters related to initialization. Furthermore, we demonstrate in detail the underlying mechanisms by which small initialization leads to condensation at the initial training stage.
_Keywords:_ two-layer neural network, phase diagram, dynamical regime, condensation
## 1 Introduction
In deep learning, one intriguing observation is the distinct behavior exhibited by Neural Networks (NNs) depending on the scale of initialization. Specifically, in a particular regime, NNs trained with gradient descent can be viewed as a kernel regression predictor known as the Neural Tangent Kernel (NTK) [11, 5, 10, 15], and Chizat et al. [4] identify this as the lazy training regime, in which the parameters of overparameterized NNs trained with gradient-based methods hardly vary. However, under a different scaling, the Gradient Flow (GF) of NNs shows highly nonlinear features, and a mean-field analysis [19, 23, 3, 24] has been established for infinitely wide two-layer networks to analyze this behavior. Additionally, small initialization is proven to give rise to condensation [18, 16, 31, 32], a phenomenon where the weight vectors of NNs concentrate on isolated orientations during the training process. This is significant as NNs with condensed weight vectors are equivalent to "smaller" NNs with fewer parameters, as revealed by the embedding principle (the loss landscape of a DNN "contains" all the critical points of all narrower DNNs [30, 29]), thus reducing the complexity of the output functions of NNs. As the generalization error can be bounded in terms of this complexity [1], NNs with condensed parameters tend to possess better generalization abilities. In addition, the study of the embedding principle found that the number of descent directions in a condensed large network is no less than that of the equivalent small effective network, which may lead to easier training of the large network [30, 29].
Taken together, identifying the regime of condensation and understanding the mechanism of condensation are both important for understanding the non-linear training of neural networks. Our contributions can be categorized into two aspects.
Firstly, we establish the phase diagram of initial condensation for two-layer neural networks (NNs) with a wide class of smooth activation functions, as illustrated in Figure 1. Note that the phase diagram drawn in [16] is only for two-layer
wide ReLU networks and the phase diagram in [32] is empirical for three-layer wide ReLU networks. The phase diagram of a two-layer neural network refers to a graphical representation of the dynamical behavior of the network as a function of its initialization scales. In this diagram, different regions correspond to different types of behaviors exhibited by NNs, such as the linear regime, where the network behaves like a linear model, and the condensed regime, where the network exhibits the initial condensation phenomenon.
Secondly, we reveal the mechanism of initial condensation for two-layer NNs and identify the directions towards which the weight parameters condense. A flurry of recent papers has endeavored to analyze the mechanism underlying the condensation of NNs at the initial training stage under small initialization [18, 21, 16, 17, 32]. For instance, Maennel et al. [18] uncovered that for two-layer ReLU NNs, the GF limits the weight vectors to a certain number of directions depending solely on the input data. Zhou et al. [32] showed empirically that condensation is a common feature of the non-linear training regime for three-layer ReLU NNs. Theoretically, Maennel et al. [18] argued that GF prefers "simple" functions over "complex" ones, and Zhou et al. [31] demonstrated that the maximal number of condensed orientations at the initial training stage is twice the multiplicity (Definition 1) of the activation function. However, these proofs are heuristic as they do not account for the dynamics of the parameters. Pellegrini and Biroli [21] derived a mean-field model demonstrating that two-layer ReLU NNs, when trained with hinge loss and infinite data, lead to a linear classifier. Nonetheless, their analysis does not illustrate how the initial condensation depends on the scale of initialization and does not specify which directions NNs condense on.

Figure 1: Phase diagram of two-layer NNs.
The organization of the paper is as follows. In Section 2, we discuss related works. In Section 3, we give a preliminary introduction to our problem. In Section 4, we state our main results and show some empirical evidence. In Section 5, we outline the proofs of our main results, and conclusions are drawn in Section 6. All details of the proofs are deferred to the Appendix.
## 2 Related Works
There has been a rich literature on the choice of initialization schemes to facilitate neural network training [7, 9, 19, 24], and most of this work identified the width \(m\) as the relevant hyperparameter, where the kernel regime is reached as the width grows towards infinity [11, 27, 6]. However, with the introduction of lazy training by Chizat et al. [4], instead of the width \(m\), one may take the initialization scale as the relevant hyperparameter. Lazy training refers to the phenomenon in which a heavily over-parameterized NN trained with gradient-based methods can converge exponentially fast to zero training loss while its parameters hardly vary, and such a phenomenon can be observed in any non-convex model accompanied by an appropriate scaling factor of the initialization. Follow-up work by Woodworth et al. [26] focuses on how the scale of initialization acts as a controlling quantity for the transition between two very different regimes, namely the kernel regime and the rich regime, for matrix factorization problems. As for two-layer ReLU NNs, the phase diagram in Luo et al. [16] identified three regimes, namely the linear regime, the critical regime, and the condensed regime, based on the relative change of input weights as the width \(m\) approaches infinity. In summary, the selection of appropriate initialization scales plays a crucial role in the training of NNs.
Several theoretical works studying the dynamical behavior of NNs with small initialization can be connected to implicit regularization effect provided by the weight initialization schemes, and the condensation phenomenon has also been studied under different names. Ji and Telgarsky [12] analyzed the implicit regularization of GF on deep linear networks and observed the matrix alignment
phenomenon, i.e., weight matrices belonging to different layers share the same direction. The weight quantization effect [18] in training two-layer ReLU NNs with small initialization is really the condensation phenomenon in disguise, and so is the weight clustering effect [2] observed when learning the MNIST task with a three-layer CNN. Luo et al. [16] focused on how the condensation phenomenon can be clearly detected through the choice of initialization schemes, but they did not show the reason behind it. Zhang et al. [28, 30] proposed a general Embedding Principle of the loss landscape of DNNs, showing that a larger DNN can experience critical points with condensed parameters, where its output is the same as that of a much smaller DNN, but their analysis did not involve the dynamical behavior. Zhou et al. [31] presented a theory for the initial direction towards which the weight vectors condense, yet it is far from satisfactory.
## 3 Preliminaries
### Notations
We begin this section by introducing some notations that will be used in the rest of this paper. We set \(n\) for the number of input samples and \(m\) for the width of the neural network. We set \(\mathcal{N}(\boldsymbol{\mu},\Sigma)\) as the normal distribution with mean \(\boldsymbol{\mu}\) and covariance \(\Sigma\). We let \([n]=\{1,2,\ldots,n\}\). We denote the vector \(L^{2}\) norm by \(\left\|\cdot\right\|_{2}\), the vector or function \(L_{\infty}\) norm by \(\left\|\cdot\right\|_{\infty}\), the matrix spectral (operator) norm by \(\left\|\cdot\right\|_{2\to 2}\), the matrix infinity norm by \(\left\|\cdot\right\|_{\infty\to\infty}\), and the matrix Frobenius norm by \(\left\|\cdot\right\|_{\mathrm{F}}\). For a matrix \(\mathbf{A}\), we use \(\mathbf{A}_{i,j}\) to denote its \((i,j)\)-th entry. We will also use \(\mathbf{A}_{i,:}\) to denote the \(i\)-th row vector of \(\mathbf{A}\) and define \(\mathbf{A}_{i,j:k}=[\mathbf{A}_{i,j},\mathbf{A}_{i,j+1},\cdots,\mathbf{A}_{i,k}]^{\intercal}\) as part of that row vector. Similarly, \(\mathbf{A}_{:,i}\) is the \(i\)-th column vector and \(\mathbf{A}_{j:k,i}\) is part of the \(i\)-th column vector. For a semi-positive-definite matrix \(\boldsymbol{A}\), we denote its smallest eigenvalue by \(\lambda_{\min}(\boldsymbol{A})\) and, correspondingly, its largest eigenvalue by \(\lambda_{\max}(\boldsymbol{A})\). We use \(\mathcal{O}(\cdot)\) and \(\Omega(\cdot)\) for the standard Big-O and Big-Omega notations. We finally denote the set of continuous functions \(f(\cdot):\mathbb{R}\to\mathbb{R}\) possessing continuous derivatives of order up to and including \(r\) by \(\mathcal{C}^{r}(\mathbb{R})\), the set of analytic functions \(f(\cdot):\mathbb{R}\to\mathbb{R}\) by \(\mathcal{C}^{\omega}(\mathbb{R})\), and \(\langle\cdot,\cdot\rangle\) for the standard inner product between two vectors.
### Problem Setup
We use almost the same setting as in Luo et al. [16], starting with the original model
\[f_{\boldsymbol{\theta}}(\boldsymbol{x})=\sum_{k=1}^{m}a_{k}\sigma(\boldsymbol {w}_{k}^{\intercal}\boldsymbol{x}), \tag{3.1}\]
whose parameters \(\mathbf{\theta}^{0}:=\text{vec}(\mathbf{\theta}^{0}_{a},\mathbf{\theta}^{0}_{\mathbf{w}})\) are initialized by
\[a^{0}_{k}\sim\mathcal{N}(0,\nu^{2}),\quad\mathbf{w}^{0}_{k}\sim\mathcal{N}(\mathbf{0}, \varepsilon^{2}\mathbf{I}_{d}), \tag{3.2}\]
and the empirical risk is
\[R_{S}(\mathbf{\theta})=\frac{1}{2n}\sum_{i=1}^{n}{\left(f_{\mathbf{\theta}}(\mathbf{x}_{i} )-y_{i}\right)^{2}}. \tag{3.3}\]
Then the training dynamics based on gradient descent (GD) in the continuous limit obey the following gradient flow: for \(k\in[m]\),
\[\frac{\mathrm{d}a_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}a_{k^{ \prime}}\sigma(\mathbf{w}^{\intercal}_{k^{\prime}}\mathbf{x}_{i})-y_{i}\right)\sigma( \mathbf{w}^{\intercal}_{k}\mathbf{x}_{i}), \tag{3.4}\] \[\frac{\mathrm{d}\mathbf{w}_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}a_{k^{ \prime}}\sigma(\mathbf{w}^{\intercal}_{k^{\prime}}\mathbf{x}_{i})-y_{i}\right)a_{k} \sigma^{(1)}(\mathbf{w}^{\intercal}_{k}\mathbf{x}_{i})\mathbf{x}_{i}.\]
We identify the parameters \(\mathbf{\theta}_{a}:=\text{vec}(\{a_{k}\}_{k=1}^{m})\) and \(\mathbf{\theta}_{\mathbf{w}}:=\text{vec}(\{\mathbf{w}_{k}\}_{k=1}^{m})\) as variables of order one by setting
\[a_{k}=\nu\bar{a}_{k},\quad\mathbf{w}_{k}=\varepsilon\bar{\mathbf{w}}_{k},\]
then the rescaled dynamics can be written as
\[\nu\frac{\mathrm{d}\bar{a}_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}\nu \varepsilon\bar{a}_{k^{\prime}}\frac{\sigma(\varepsilon\bar{\mathbf{w}}^{\intercal }_{k^{\prime}}\mathbf{x}_{i})}{\varepsilon}-y_{i}\right)\varepsilon\frac{\sigma( \varepsilon\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})}{\varepsilon}, \tag{3.5}\] \[\varepsilon\frac{\mathrm{d}\bar{\mathbf{w}}_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}\nu \varepsilon\bar{a}_{k^{\prime}}\frac{\sigma(\varepsilon\bar{\mathbf{w}}^{\intercal }_{k^{\prime}}\mathbf{x}_{i})}{\varepsilon}-y_{i}\right)\nu\bar{a}_{k}\sigma^{(1)} (\varepsilon\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})\mathbf{x}_{i}.\]
For the cases where \(\varepsilon\ll 1\) or \(\varepsilon\gg 1\), the expressions \(\frac{\sigma(\varepsilon\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})}{\varepsilon}\) and \(\sigma^{(1)}(\varepsilon\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})\) are hard to handle at first glance. However, in the case where \(\varepsilon\ll 1\), under the condition (Assumption 1) that \(\sigma(0)=0\) and \(\sigma^{(1)}(0)=1\), we obtain that
\[\frac{\sigma(\varepsilon\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})}{\varepsilon} \approx\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i},\quad\sigma^{(1)}(\varepsilon \bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})\approx 1,\]
hence \(\frac{\sigma(\varepsilon\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})}{\varepsilon}\) and \(\sigma^{(1)}(\varepsilon\bar{\mathbf{w}}^{\intercal}_{k}\mathbf{x}_{i})\) are of order one.
In the case where \(\varepsilon\gg 1\), under the condition (Assumption 2) that
\[\lim_{x\to-\infty}\sigma^{(1)}(x)=a,\quad\lim_{x\to+\infty}\sigma^{(1)}(x)=b,\]
we obtain that
\[\frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal}\mathbf{x}_{i})}{\varepsilon} \approx\sigma^{(1)}(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal}\mathbf{x}_{i}),\]
hence \(\frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal}\mathbf{x}_{i})}{\varepsilon}\) and \(\sigma^{(1)}(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal}\mathbf{x}_{i})\) are also of order one. Under these two conditions, \(\sigma(\cdot)\) acts like a linear activation in the case where \(\varepsilon\ll 1\), and like a leaky-ReLU activation in the case where \(\varepsilon\gg 1\), both of which are homogeneous functions. Hence the above dynamics can be simplified into
\[\frac{\mathrm{d}\bar{a}_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}\nu \varepsilon\bar{a}_{k^{\prime}}\frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k^{ \prime}}^{\intercal}\mathbf{x}_{i})}{\varepsilon}-y_{i}\right)\frac{\varepsilon} {\nu}\frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal}\mathbf{x}_{i})}{ \varepsilon}, \tag{3.6}\] \[\frac{\mathrm{d}\bar{\mathbf{w}}_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}\nu \varepsilon\bar{a}_{k^{\prime}}\frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k^{ \prime}}^{\intercal}\mathbf{x}_{i})}{\varepsilon}-y_{i}\right)\frac{\nu}{ \varepsilon}\bar{a}_{k}\sigma^{(1)}(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal} \mathbf{x}_{i})\mathbf{x}_{i}.\]
We hereby introduce two scaling parameters
\[\kappa:=\nu\varepsilon,\quad\kappa^{\prime}:=\frac{\nu}{\varepsilon}, \tag{3.7}\]
then the dynamics (3.6) can be written as a _normalized_ flow
\[\frac{\mathrm{d}\bar{a}_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}\kappa \bar{a}_{k^{\prime}}\frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k^{\prime}}^{ \intercal}\mathbf{x}_{i})}{\varepsilon}-y_{i}\right)\frac{1}{\kappa^{\prime}} \frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal}\mathbf{x}_{i})}{\varepsilon}, \tag{3.8}\] \[\frac{\mathrm{d}\bar{\mathbf{w}}_{k}}{\mathrm{d}t} =-\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{k^{\prime}=1}^{m}\kappa \bar{a}_{k^{\prime}}\frac{\sigma(\varepsilon\bar{\mathbf{w}}_{k^{\prime}}^{ \intercal}\mathbf{x}_{i})}{\varepsilon}-y_{i}\right)\kappa^{\prime}\bar{a}_{k} \sigma^{(1)}(\varepsilon\bar{\mathbf{w}}_{k}^{\intercal}\mathbf{x}_{i})\mathbf{x}_{i}.\]
with the following initialization
\[\bar{a}_{k}^{0}\sim\mathcal{N}(0,1),\quad\bar{\mathbf{w}}_{k}^{0}\sim\mathcal{N}( \mathbf{0},\mathbf{I}_{d}). \tag{3.9}\]
In the following discussion, we always refer to this rescaled model (3.8) and drop the bars over \(\{a_{k}\}_{k=1}^{m}\) and \(\{\mathbf{w}_{k}\}_{k=1}^{m}\) for notational simplicity.
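To make the normalized flow concrete, the following sketch integrates (3.8) with explicit forward-Euler steps, taking \(\sigma=\tanh\) (which satisfies Assumption 1); the width, scales, learning rate, and step count are illustrative assumptions rather than settings analyzed in this paper.

```python
import torch

def rescaled_flow(X, y, m=100, eps=1e-4, nu=1e-4,
                  lr=0.05, steps=20000, seed=0):
    """Forward-Euler sketch of the normalized flow (3.8) with sigma = tanh.

    X: (n, d) inputs, y: (n,) labels. Returns (a, W) so that the row
    directions of W can be inspected for condensation."""
    torch.manual_seed(seed)
    n, d = X.shape
    kappa, kappa_p = nu * eps, nu / eps          # scaling parameters (3.7)
    a = torch.randn(m)                           # a_k ~ N(0, 1)
    W = torch.randn(m, d)                        # w_k ~ N(0, I_d)
    for _ in range(steps):
        sig = torch.tanh(eps * (W @ X.T))        # sigma(eps w_k . x_i), (m, n)
        err = kappa * (a @ (sig / eps)) - y      # network residuals, (n,)
        grad_a = (sig / eps) @ err / (n * kappa_p)
        grad_W = kappa_p / n * a[:, None] * (((1 - sig**2) * err) @ X)
        a, W = a - lr * grad_a, W - lr * grad_W
    return a, W

# Toy usage: n = 4 samples in d = 2 dimensions.
X, y = torch.randn(4, 2), torch.randn(4)
a, W = rescaled_flow(X, y)
dirs = W / W.norm(dim=1, keepdim=True)  # inspect for condensed directions
```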
As \(\kappa\) and \(\kappa^{\prime}\) are always in specific power-law relations to the width \(m\), we introduce two independent coordinates
\[\gamma:=\lim_{m\to\infty}-\frac{\log\kappa}{\log m},\quad\gamma^{\prime}:= \lim_{m\to\infty}-\frac{\log\kappa^{\prime}}{\log m}, \tag{3.10}\]
which meet all the guiding principles [16] for finding the coordinates of a phase diagram.
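For illustration, the coordinates \((\gamma,\gamma^{\prime})\) can be estimated numerically by evaluating the quotients in (3.10) at a large but finite width; the width value and the LeCun example below (cf. Table 1) are illustrative choices.

```python
import math

def phase_coordinates(nu_fn, eps_fn, m=10**8):
    """Estimate (gamma, gamma') of (3.10) at a large finite width m,
    a stand-in for the m -> infinity limit."""
    kappa = nu_fn(m) * eps_fn(m)      # kappa  = nu * eps
    kappa_p = nu_fn(m) / eps_fn(m)    # kappa' = nu / eps
    return (-math.log(kappa) / math.log(m),
            -math.log(kappa_p) / math.log(m))

# LeCun initialization with input dimension d = 2: approaches (1/2, 1/2).
d = 2
print(phase_coordinates(lambda m: math.sqrt(1 / m),
                        lambda m: math.sqrt(1 / d)))
```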
Before we end this section, we list out some commonly-used initialization methods with their scaling parameters as shown in Table 1.
## 4 Main Results
### Activation function and input
In this part, we impose some technical conditions on the activation function and the input samples. We start with a technical condition [31, Definition 1] on the activation function \(\sigma(\cdot)\).
**Definition 1** (Multiplicity \(p\)).: \(\sigma(\cdot):\mathbb{R}\to\mathbb{R}\) _has multiplicity \(p\) if there exists an integer \(p\geq 1\), such that for all \(0\leq s\leq p-1\), the \(s\)-th order derivative satisfies \(\sigma^{(s)}(0)=0\), and \(\sigma^{(p)}(0)\neq 0\)._
We list out some examples of activation functions with different multiplicity.
**Remark 1**.:
* \(tanh(x):=\frac{\exp(x)-\exp(-x)}{\exp(x)+\exp(-x)}\) _has multiplicity_ \(p=1\)_;_
* \(SiLU(x):=\frac{x}{1+\exp(-x)}\) _has multiplicity_ \(p=1\)_;_
* \(xtanh(x):=\frac{x\exp(x)-x\exp(-x)}{\exp(x)+\exp(-x)}\) _has multiplicity_ \(p=2\)_._
**Assumption 1** (Multiplicity 1).: _The activation function \(\sigma\in\mathcal{C}^{2}(\mathbb{R})\), and there exists a universal constant \(C_{L}>0\), such that its first and second derivatives satisfy_
\[\left\|\sigma^{(1)}(\cdot)\right\|_{\infty}\leq C_{L},\quad\left\|\sigma^{(2)} (\cdot)\right\|_{\infty}\leq C_{L}. \tag{4.1}\]
_Moreover,_
\[\sigma(0)=0,\quad\sigma^{(1)}(0)=1. \tag{4.2}\]
\begin{table}
\begin{tabular}{|l|c c c c c c|}
\hline Name & \(\nu\) & \(\varepsilon\) & \(\kappa\) (\(\nu\varepsilon\)) & \(\kappa^{\prime}\) (\(\nu/\varepsilon\)) & \(\gamma\) & \(\gamma^{\prime}\) \\
\hline LeCun [14] & \(\sqrt{\frac{1}{m}}\) & \(\sqrt{\frac{1}{d}}\) & \(\sqrt{\frac{1}{md}}\) & \(\sqrt{\frac{d}{m}}\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) \\
He [8] & \(\sqrt{\frac{2}{m}}\) & \(\sqrt{\frac{2}{d}}\) & \(\sqrt{\frac{4}{md}}\) & \(\sqrt{\frac{d}{m}}\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) \\
Xavier [7] & \(\sqrt{\frac{2}{m+1}}\) & \(\sqrt{\frac{2}{m+d}}\) & \(\sqrt{\frac{4}{(m+1)(m+d)}}\) & \(\sqrt{\frac{m+d}{m+1}}\) & \(1\) & \(0\) \\
Huang [10] & \(1\) & \(\sqrt{\frac{1}{m}}\) & \(\sqrt{\frac{1}{m}}\) & \(\sqrt{m}\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) \\ \hline
\end{tabular}
\end{table}
Table 1: Initialization methods with their scaling parameters
**Remark 2**.: _We remark that \(\sigma\) has multiplicity \(1\). The condition \(\sigma^{(1)}(0)=1\) can be replaced by \(\sigma^{(1)}(0)\neq 0\); we set \(\sigma^{(1)}(0)=1\) for simplicity, and it can easily be satisfied by replacing the original activation \(\sigma(\cdot)\) with \(\frac{\sigma(\cdot)}{\sigma^{(1)}(0)}\)._
_We note that Assumption 1 can be satisfied by using the tanh activation:_
\[\sigma(x)=\frac{\exp(x)-\exp(-x)}{\exp(x)+\exp(-x)},\]
_and the scaled SiLU activation_
\[\sigma(x)=\frac{2x}{1+\exp(-x)}.\]
**Assumption 2**.: _The activation function \(\sigma\in\mathcal{C}^{\omega}(\mathbb{R})\) is not a polynomial function, and its value at \(0\) satisfies_
\[\sigma(0)=0, \tag{4.3}\]
_and there exists a universal constant \(C_{L}>0\), such that its first and second derivatives satisfy_
\[\sigma^{(1)}(0)=1,\quad\left\|\sigma^{(1)}(\cdot)\right\|_{\infty}\leq C_{L}, \quad\left\|\sigma^{(2)}(\cdot)\right\|_{\infty}\leq C_{L}. \tag{4.4}\]
_Moreover,_
\[\lim_{x\to-\infty}\sigma^{(1)}(x)=a,\quad\lim_{x\to+\infty}\sigma^{(1)}(x)=b, \tag{4.5}\]
_and \(a\neq b\)._
**Remark 3**.: _We note that Assumption 2 can be satisfied by using the scaled SiLU activation:_
\[\sigma(x)=\frac{2x}{1+\exp(-x)},\]
_where \(a=0\) and \(b=2\)._
_Some other functions also satisfy this assumption, for instance, the modified scaled softplus activation:_
\[\sigma(x)=2\left(\log(1+\exp(x))-\log 2\right),\]
_where \(a=0\) and \(b=2\)._
**Assumption 3** (Non-degenerate data).: _The training inputs and labels \(\mathcal{S}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) satisfy that there exists a universal constant \(c>0\), such that for all \(i\in[n]\),_
\[\frac{1}{c}\leq\left\|\mathbf{x}_{i}\right\|_{2},\quad|y_{i}|\leq c,\]
_and_
\[\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}\neq\mathbf{0}. \tag{4.6}\]
_We denote by_
\[\mathbf{z}:=\frac{1}{n}\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}, \tag{4.7}\]
_and assume further that for some universal constant \(c>0\), the following holds_
\[\frac{1}{c}\leq\left\|\mathbf{z}\right\|_{2}\leq c, \tag{4.8}\]
_and we denote its unit vector by_
\[\hat{\mathbf{z}}:=\frac{\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}}{\left\|\sum_{i=1}^{n}y_{i} \mathbf{x}_{i}\right\|_{2}}. \tag{4.9}\]
**Assumption 4**.: _The training inputs and labels \(\mathcal{S}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) satisfy that there exists a universal constant \(c>0\), such that for all \(i\in[n]\),_
\[\frac{1}{c}\leq\left\|\mathbf{x}_{i}\right\|_{2},\quad|y_{i}|\leq c,\]
_and all training inputs are non-parallel with each other, i.e., for any \(i\neq j\) and \(i,j\in[n]\),_
\[\mathbf{x}_{i}\nparallel\mathbf{x}_{j}.\]
We remark that the requirements in Assumption 3 are easier to meet than those of Assumption 4, and both assumptions require the input samples \(\mathcal{S}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) to be of order one.
**Assumption 5**.: _The following limits exist_
\[\gamma_{1}:=\lim_{m\to\infty}-\frac{\log\nu}{\log m},\quad\gamma_{2}:=\lim_{m \to\infty}-\frac{\log\varepsilon}{\log m}, \tag{4.10}\]
_then by definition_
\[\gamma=\lim_{m\to\infty}-\frac{\log\nu\varepsilon}{\log m}=\gamma_{1}+\gamma_ {2},\quad\gamma^{\prime}=\lim_{m\to\infty}-\frac{\log\frac{\nu}{\varepsilon} }{\log m}=\gamma_{1}-\gamma_{2}.\]
### Regime Characterization at Initial Stage
Before presenting our theory, which establishes a consistent boundary separating the diagram into two distinct areas, namely the linear regime area and the condensed regime area, we introduce a quantity that has proven valuable in the analysis of NNs.
It is known that the output of a two-layer NN is linear with respect to \(\mathbf{\theta}_{a}\); hence, if the parameters \(\mathbf{\theta}_{\mathbf{w}}\) remain stuck at their initialization throughout the whole training process, the training dynamics of a two-layer NN can be linearized around the initialization. In the phase diagram, the linear regime area precisely corresponds to the region where the output function of a two-layer NN is well approximated by its linearized model, i.e., in the linear regime area, the following holds
\[\begin{split} f_{\mathbf{\theta}}(\mathbf{x})\approx f\left(\mathbf{x},\mathbf{ \theta}(0)\right)&+\left\langle\nabla_{\mathbf{\theta}_{a}}f\left(\mathbf{ x},\mathbf{\theta}(0)\right),\mathbf{\theta}_{a}(t)-\mathbf{\theta}_{a}(0)\right\rangle\\ &+\left\langle\nabla_{\mathbf{\theta}_{\mathbf{w}}}f\left(\mathbf{x},\mathbf{ \theta}(0)\right),\mathbf{\theta}_{\mathbf{w}}(t)-\mathbf{\theta}_{\mathbf{w}}(0)\right\rangle. \end{split} \tag{4.11}\]
In general, this linear approximation holds only when \(\mathbf{\theta}_{\mathbf{w}}(t)\) remains within a small neighbourhood of \(\mathbf{\theta}_{\mathbf{w}}(0)\). Since the size of this neighbourhood scales with \(\|\mathbf{\theta}_{\mathbf{w}}(0)\|_{2}\), we use the following relative distance as an indicator of how far \(\mathbf{\theta}_{\mathbf{w}}(t)\) deviates away from \(\mathbf{\theta}_{\mathbf{w}}(0)\) throughout the training process
\[\mathrm{RD}(\mathbf{\theta}_{\mathbf{w}}(t)):=\frac{\left\|\mathbf{\theta}_{\mathbf{w}}(t)-\mathbf{\theta}_{\mathbf{w}}(0)\right\|_{2}}{\|\mathbf{\theta}_{\mathbf{w}}(0)\|_{2}}. \tag{4.12}\]
We demonstrate that as \(m\to\infty\), under suitable choice of the initialization scales (the blue area in Figure 1), the NN training dynamics fall into the linear regime (Theorem 1), and for large enough \(m\),
\[\sup_{t\in[0,+\infty)}\mathrm{RD}(\mathbf{\theta}_{\mathbf{w}}(t))\to 0.\]
We also demonstrate that under some other choices of the initialization scales (the green area in Figure 1), the NN training dynamics fall into the condensed regime (Theorem 2), where
\[\sup_{t\in[0,+\infty)}\mathrm{RD}(\mathbf{\theta}_{\mathbf{w}}(t))\to+\infty,\]
and the phenomenon of condensation can be observed: \(\mathbf{\theta}_{\mathbf{w}}\) condenses toward the direction of \(\mathbf{z}\). We observe that in both of these cases, as \(\mathbf{\theta}_{\mathbf{w}}\) deviates far away from its initialization, the approximation (4.11) fails, and the NN training dynamics is essentially nonlinear with respect to \(\mathbf{\theta}_{\mathbf{w}}\).
Moreover, under the remaining choice of the initialization scales (the solid blue line in Figure 1), the NN training dynamics fall into the critical regime area, and
we conjecture that
\[\sup_{t\in[0,+\infty)}\operatorname{RD}(\boldsymbol{\theta}_{\boldsymbol{w}}(t)) \rightarrow\mathcal{O}(1),\]
whose study is beyond the scope of this paper.
**Theorem 1** (Linear regime).: _Given any \(\delta\in(0,1)\), under Assumption 2, Assumption 4 and Assumption 5, if \(\gamma<1\) or \(\gamma^{\prime}>\gamma-1\), then with probability at least \(1-\delta\) over the choice of \(\boldsymbol{\theta}^{0}\),_
\[\lim_{m\rightarrow\infty}\sup_{t\in[0,+\infty)}\frac{\|\boldsymbol{\theta}_{ \boldsymbol{w}}(t)-\boldsymbol{\theta}_{\boldsymbol{w}}(0)\|_{2}}{\| \boldsymbol{\theta}_{\boldsymbol{w}}(0)\|_{2}}=0. \tag{4.13}\]
**Remark 4**.: _The linear regime area is split into two parts, one is termed the \(\boldsymbol{\theta}\)-lazy area (blue area in Figure 2), where \(\gamma<1\), the other is termed the \(\boldsymbol{w}\)-lazy area (pink area in Figure 2), where \(\gamma\geq 1\) and \(\gamma^{\prime}>\gamma-1>0\)._
_In the \(\boldsymbol{\theta}\)-lazy area, the following relation holds_
\[\lim_{m\rightarrow\infty}\sup_{t\in[0,+\infty)}\frac{\|\boldsymbol{\theta}(t) -\boldsymbol{\theta}(0)\|_{2}}{\|\boldsymbol{\theta}(0)\|_{2}}=0, \tag{4.14}\]
_whose detailed reasoning can be found in Appendix B.2, and in the \(\boldsymbol{w}\)-lazy area, relation (4.14) does not hold._
Figure 2: Different training regimes in the phase diagram.
**Theorem 2** (Condensed regime).: _Given any \(\delta\in(0,1)\), under Assumption 1, Assumption 3 and Assumption 5, if \(\gamma>1\) and \(\gamma^{\prime}<\gamma-1\), then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}^{0}\), there exists \(T>0\), such that_
\[\lim_{m\to\infty}\sup_{t\in[0,T]}\frac{\left\|\mathbf{\theta}_{\mathbf{w}}(t)-\mathbf{ \theta}_{\mathbf{w}}(0)\right\|_{2}}{\left\|\mathbf{\theta}_{\mathbf{w}}(0)\right\|_{2}}=+\infty, \tag{4.15}\]
_and_
\[\lim_{m\to\infty}\sup_{t\in[0,T]}\frac{\left\|\mathbf{\theta}_{\mathbf{w},\mathbf{z}}(t) \right\|_{2}}{\left\|\mathbf{\theta}_{\mathbf{w}}(t)\right\|_{2}}=1, \tag{4.16}\]
_where \(\mathbf{\theta}_{\mathbf{w},\mathbf{z}}(t):=[\left\langle\mathbf{w}_{1},\hat{\mathbf{z}}\right\rangle,\left\langle\mathbf{w}_{2},\hat{\mathbf{z}}\right\rangle,\cdots,\left\langle\mathbf{w}_{ m},\hat{\mathbf{z}}\right\rangle]^{\intercal}\)._
**Remark 5**.: _The condensed regime area is split into two parts, one is termed the \(\mathbf{w}\)-lag area (orange area in Figure 2), where \(\gamma>1\) and \(0\leq\gamma^{\prime}<\gamma-1\), the other is termed the \(a\)-lag area (yellow area in Figure 2), where \(\gamma>1\) and \(\gamma^{\prime}<0\)._
_In the \(\mathbf{w}\)-lag regime area, as illustrated in (5.14), \(\mathbf{\theta}_{\mathbf{w}}\) waits for a period of time of order one until \(\mathbf{\theta}_{a}\) attains a magnitude that is commensurate with that of \(\mathbf{\theta}_{\mathbf{w}}\), and the time \(T\) in Theorem 2 satisfies that_
\[T\geq\log\left(\frac{1}{4}\right)+\frac{\gamma-\gamma^{\prime}-1}{8}\log(m), \tag{4.17}\]
_and as \(m\to\infty\), \(T\to\infty\)._
_In the \(a\)-lag regime area, as illustrated in (5.14), \(\mathbf{\theta}_{a}\) waits for a period of time of order one until \(\mathbf{\theta}_{\mathbf{w}}\) attains a magnitude that is commensurate with that of \(\mathbf{\theta}_{a}\), and it is exactly during this interval of time that the phenomenon of initial condensation can be observed. Hence for some \(\alpha>0\), the time \(T\) in Theorem 2 can be chosen as_
\[m^{-\alpha}\leq T\leq 2m^{-\alpha}, \tag{4.18}\]
_which is at most of order one, see Appendix C.4 for more details._
### Experimental Demonstration
In order to distinguish between the \(\mathbf{w}\)-lag regime and \(a\)-lag regime, it is necessary to estimate the time \(T\) in Remark 5, which is also a reasonable way to empirically validate our theoretical analysis. An empirical approximation of \(T\) can be obtained by determining the time interval \(\hat{T}\), starting from the initial stage at \(t=0\), up to the point at which the quantity \(\frac{\left\|\mathbf{\theta}_{\mathbf{w},\mathbf{z}}(t)\right\|_{2}}{\left\|\mathbf{\theta}_{\mathbf{w}}(t)\right\|_{2}}\) reaches its maximum for sufficiently large values of \(m\) (\(m=50000,100000,200000,400000,800000,1600000\)), as we are unable to run experiments at \(m\to\infty\).
#### 4.3.1 \(\mathbf{w}\)-lag regime
We validate the effectiveness of our estimates by performing a simple linear regression to visualize the relation (4.17), where \(\hat{T}\) is set as the response variable and \(\log m\) as the single independent variable. Figure 3 shows that NNs with different values of \(\gamma\) but fixed \(\gamma^{\prime}\) satisfy the relation (4.17), thereby demonstrating the accuracy and reliability of our estimates.
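The regression just described is elementary; for completeness, a sketch with placeholder \(\hat{T}\) values (the measured times are not reproduced here) is given below. Only the fitting procedure, not the data, reflects our experiments.

```python
import numpy as np

# Widths used in Section 4.3 and *placeholder* measured times T_hat
m = np.array([5e4, 1e5, 2e5, 4e5, 8e5, 1.6e6])
T_hat = np.array([1.9, 2.1, 2.3, 2.5, 2.7, 2.9])  # placeholder values

def linfit(x, y):
    # Least-squares line y ~ a*x + b and coefficient of determination R^2.
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    r2 = 1 - resid.var() / y.var()
    return a, b, r2

# w-lag regime, relation (4.17): T_hat is linear in log m
a, b, r2 = linfit(np.log(m), T_hat)
print(f"w-lag: slope {a:.3f}, R^2 {r2:.3f}")

# a-lag regime, relation (4.18): log T_hat is linear in log m, slope ~ -alpha
logT = np.log(np.array([0.30, 0.26, 0.22, 0.19, 0.16, 0.14]))  # placeholder
a, b, r2 = linfit(np.log(m), logT)
print(f"a-lag: slope {a:.3f} (~ -alpha), R^2 {r2:.3f}")
```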
Figure 4: \(\log\hat{T}\) (ordinate) vs \(\log m\) (abscissa) with different values of \(\gamma\) but fixed \(\gamma^{\prime}=-0.4\) for two-layer NNs with tanh activation indicated by blue dots. The black line is a linear fit, and \(R^{2}\) is the coefficient of determination.
Figure 3: \(\hat{T}\) (ordinate) vs \(\log m\) (abscissa) with different values of \(\gamma\) but fixed \(\gamma^{\prime}=0\) for two-layer NNs with tanh activation indicated by blue dots. The black line is a linear fit, and \(R^{2}\) is _the coefficient of determination_ that provides information about the goodness of fit of a linear regression. The closer \(R^{2}\) is to 1, the better the model fits the data.
#### 4.3.2 \(a\)-lag regime
We repeat the strategy of Section 4.3.1, except that we now visualize the relation (4.18) in Figure 4, with \(\log\hat{T}\) set as the response variable and \(\gamma^{\prime}\) no longer equal to \(0\). We can still see a good agreement between the experimental data and its linear fit, thus validating the relation (4.18).
## 5 Technique Overview
In this part, we describe some technical tools and present sketches of the proofs of the two theorems above. Before we proceed, a rigorous description of the updated notations and definitions is required.
We start by a two-layer normalized NN model
\[f_{\mathbf{\theta}}(\mathbf{x})=\sum_{k=1}^{m}\nu\varepsilon a_{k}\frac{\sigma( \varepsilon\mathbf{w}_{k}^{\intercal}\mathbf{x})}{\varepsilon}, \tag{5.1}\]
with the normalized parameters \(\mathbf{\theta}^{0}:=\text{vec}(\mathbf{\theta}_{a}^{0},\mathbf{\theta}_{\mathbf{w}}^{0})\) initialized by
\[a_{k}^{0} :=a_{k}(0)\sim\mathcal{N}(0,1),\] \[\mathbf{w}_{k}^{0} :=\mathbf{w}_{k}(0)\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d}).\]
For all \(i\in[n]\), we denote hereafter that
\[e_{i}:=e_{i}(\mathbf{\theta}):=f_{\mathbf{\theta}}(\mathbf{x}_{i})-y_{i},\]
and
\[\mathbf{e}:=\mathbf{e}(\mathbf{\theta}):=[e_{1}(\mathbf{\theta}),e_{2}(\mathbf{\theta}),\ldots,e_ {n}(\mathbf{\theta})]^{\intercal}.\]
Then the normalized flow reads: For all \(k\in[m]\),
\[\begin{split}\frac{\mathrm{d}a_{k}}{\mathrm{d}t}&=- \frac{\varepsilon}{\nu}\frac{1}{n}\sum_{i=1}^{n}e_{i}\frac{\sigma(\varepsilon \mathbf{w}_{k}^{\intercal}\mathbf{x}_{i})}{\varepsilon},\\ \frac{\mathrm{d}\mathbf{w}_{k}}{\mathrm{d}t}&=-\frac{ \nu}{\varepsilon}\frac{1}{n}\sum_{i=1}^{n}e_{i}a_{k}\sigma^{(1)}(\varepsilon \mathbf{w}_{k}^{\intercal}\mathbf{x}_{i})\mathbf{x}_{i}.\end{split} \tag{5.2}\]
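To make the normalized flow concrete, the following forward-Euler sketch integrates (5.2) and tracks both \(\mathrm{RD}(\mathbf{\theta}_{\mathbf{w}}(t))\) of (4.12) and the condensation ratio of Theorem 2. The width, exponents \((\gamma_{1},\gamma_{2})\), data, step size, and horizon are illustrative placeholder choices, not the settings of Section 4.3; for sufficiently large \(m\) and a condensed choice of exponents, one should observe \(\mathrm{RD}\) growing while the ratio approaches \(1\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 5, 20000
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # order-one inputs
y = rng.choice([-1.0, 1.0], size=n)
zhat = (y @ X) / np.linalg.norm(y @ X)          # condensation direction z-hat

# A condensed (w-lag) choice of exponents: gamma = 1.75 > 1, gamma' = 0.25 < gamma - 1.
g1, g2 = 1.0, 0.75
nu, eps = float(m) ** -g1, float(m) ** -g2

a = rng.standard_normal(m)                      # normalized a_k(0) ~ N(0, 1)
W = rng.standard_normal((m, d))                 # normalized w_k(0) ~ N(0, I_d)
W0 = W.copy()
sigma = np.tanh                                 # multiplicity-1 activation
dsigma = lambda u: 1.0 / np.cosh(u) ** 2

dt = 1e-2                                       # step size/horizon illustrative only
for step in range(1, 1501):
    pre = eps * W @ X.T                         # (m, n): eps * w_k . x_i
    e = nu * eps * a @ (sigma(pre) / eps) - y   # residuals e_i = f(x_i) - y_i
    da = -(eps / nu) * (sigma(pre) / eps) @ e / n
    dW = -(nu / eps) * ((a[:, None] * dsigma(pre) * e[None, :]) @ X) / n
    a, W = a + dt * da, W + dt * dW
    if step % 300 == 0:
        RD = np.linalg.norm(W - W0) / np.linalg.norm(W0)
        ratio = np.linalg.norm(W @ zhat) / np.linalg.norm(W)
        print(f"t={step * dt:5.1f}  RD={RD:8.3g}  ratio={ratio:.3f}")
```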
### Linear Regime
We define the normalized kernels as follows
\[\begin{split} k^{[a]}(\mathbf{x},\mathbf{x}^{\prime})&:=\frac {1}{\varepsilon^{2}}\mathbb{E}_{\mathbf{w}}\sigma(\varepsilon\mathbf{w}^{\intercal} \mathbf{x})\sigma(\varepsilon\mathbf{w}^{\intercal}\mathbf{x}^{\prime}),\\ k^{[\mathbf{w}]}(\mathbf{x},\mathbf{x}^{\prime})&:=\mathbb{E} _{(a,\mathbf{w})}a^{2}\sigma^{(1)}(\varepsilon\mathbf{w}^{\intercal}\mathbf{x})\sigma^{( 1)}(\varepsilon\mathbf{w}^{\intercal}\mathbf{x}^{\prime})\left<\mathbf{x},\mathbf{x}^{\prime} \right>,\end{split} \tag{5.3}\]
thus, the components of the Gram matrices \(\mathbf{K}^{[a]}\) and \(\mathbf{K}^{[\mathbf{w}]}\) at infinite width respectively read: For any \(i,j\in[n]\),
\[\mathbf{K}^{[a]} :=\left[K^{[a]}_{ij}\right]_{n\times n}, \tag{5.4}\] \[K^{[a]}_{ij} :=k^{[a]}(\mathbf{x}_{i},\mathbf{x}_{j}),\] \[\mathbf{K}^{[\mathbf{w}]} :=\left[K^{[\mathbf{w}]}_{ij}\right]_{n\times n},\] \[K^{[\mathbf{w}]}_{ij} :=k^{[\mathbf{w}]}(\mathbf{x}_{i},\mathbf{x}_{j}),\]
we conclude that under Assumption 2 and Assumption 4, \(\mathbf{K}^{[a]}\) and \(\mathbf{K}^{[\mathbf{w}]}\) are strictly positive definite, and both of their least eigenvalues are of order one (Theorem 4).
We define the normalized Gram matrices \(\mathbf{G}^{[a]}(\mathbf{\theta})\), \(\mathbf{G}^{[\mathbf{w}]}(\mathbf{\theta})\), and \(\mathbf{G}(\mathbf{\theta})\) for a finite width two-layer network as follows: For any \(i,j\in[n]\),
\[\mathbf{G}^{[a]}(\mathbf{\theta}) :=\left[G^{[a]}_{ij}(\mathbf{\theta})\right]_{n\times n},\] \[G^{[a]}_{ij}(\mathbf{\theta}) :=\frac{1}{m}\sum_{k=1}^{m}\left\langle\nabla_{a_{k}}f_{\mathbf{ \theta}}(\mathbf{x}_{i}),\frac{\varepsilon}{\nu}\nabla_{a_{k}}f_{\mathbf{\theta}}(\bm {x}_{j})\right\rangle\] \[=\frac{\nu^{2}}{m}\frac{\varepsilon}{\nu}\sum_{k=1}^{m}\sigma( \varepsilon\mathbf{w}_{k}^{\intercal}\mathbf{x}_{i})\sigma(\varepsilon\mathbf{w}_{k}^{ \intercal}\mathbf{x}_{j})\] \[=\frac{\nu\varepsilon^{3}}{m}\sum_{k=1}^{m}\frac{1}{\varepsilon^ {2}}\sigma(\varepsilon\mathbf{w}_{k}^{\intercal}\mathbf{x}_{i})\sigma(\varepsilon\bm {w}_{k}^{\intercal}\mathbf{x}_{j}), \tag{5.5}\] \[\mathbf{G}^{[\mathbf{w}]}(\mathbf{\theta}) :=\left[G^{[\mathbf{w}]}_{ij}(\mathbf{\theta})\right]_{n\times n},\] \[G^{[\mathbf{w}]}_{ij}(\mathbf{\theta}) :=\frac{1}{m}\sum_{k=1}^{m}\left\langle\nabla_{\mathbf{w}_{k}}f_{\mathbf{ \theta}}(\mathbf{x}_{i}),\frac{\nu}{\varepsilon}\nabla_{\mathbf{w}_{k}}f_{\mathbf{\theta}} (\mathbf{x}_{j})\right\rangle\] \[=\frac{\nu^{3}\varepsilon}{m}\sum_{k=1}^{m}a_{k}^{2}\sigma^{(1)} (\varepsilon\mathbf{w}_{k}^{\intercal}\mathbf{x}_{i})\sigma^{(1)}(\varepsilon\mathbf{w}_ {k}^{\intercal}\mathbf{x}_{j})\left\langle\mathbf{x}_{i},\mathbf{x}_{j}\right\rangle,\]
and
\[\mathbf{G}(\mathbf{\theta}):=\mathbf{G}^{[a]}(\mathbf{\theta})+\mathbf{G}^{[\mathbf{w}]}(\mathbf{\theta}). \tag{5.6}\]
**Remark 6**.: _We conclude that_
\[\lambda_{\min}\left(\mathbf{G}^{[a]}\left(\mathbf{\theta}^{0}\right)\right)\sim\Omega (\nu\varepsilon^{3}),\quad\lambda_{\min}\left(\mathbf{G}^{[\mathbf{w}]}\left(\mathbf{ \theta}^{0}\right)\right)\sim\Omega(\nu^{3}\varepsilon), \tag{5.7}\]
_this has been established rigorously in Proposition 2, located in Appendix B.1._
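The scaling (5.7) can be probed numerically. The sketch below (with illustrative sizes; \(n<d\) is chosen so that this toy example is well conditioned) assembles \(\mathbf{G}^{[a]}\) and \(\mathbf{G}^{[\mathbf{w}]}\) as in (5.5) and reports their least eigenvalues relative to \(\nu\varepsilon^{3}\) and \(\nu^{3}\varepsilon\).

```python
import numpy as np

def gram_matrices(a, W, X, nu, eps):
    """Normalized Gram matrices G^[a](theta) and G^[w](theta) of (5.5)."""
    sigma = np.tanh
    dsigma = lambda u: 1.0 / np.cosh(u) ** 2
    m = a.shape[0]
    pre = eps * W @ X.T                      # (m, n): eps * w_k . x_i
    S = sigma(pre)
    D = a[:, None] * dsigma(pre)             # a_k * sigma'(eps w_k . x_i)
    Ga = (nu * eps / m) * (S.T @ S)          # = (nu eps^3 / m) sum_k S S / eps^2
    Gw = (nu ** 3 * eps / m) * (D.T @ D) * (X @ X.T)
    return Ga, Gw

rng = np.random.default_rng(1)
n, d, m = 10, 20, 50000                      # n < d keeps this toy example well conditioned
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
a, W = rng.standard_normal(m), rng.standard_normal((m, d))
nu = eps = float(m) ** -0.25                 # gamma = 0.5 < 1: theta-lazy area

Ga, Gw = gram_matrices(a, W, X, nu, eps)
# ratios of order one, matching lambda_min(G^[a]) ~ nu eps^3, lambda_min(G^[w]) ~ nu^3 eps
print(np.linalg.eigvalsh(Ga).min() / (nu * eps ** 3))
print(np.linalg.eigvalsh(Gw).min() / (nu ** 3 * eps))
```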
Finally, we obtain that
\[\frac{\mathrm{d}}{\mathrm{d}t}R_{S}(\mathbf{\theta}) =-\left(\sum_{k=1}^{m}\frac{\varepsilon}{\nu}\left\langle\nabla_{a_{ k}}R_{S}(\mathbf{\theta}),\nabla_{a_{k}}R_{S}(\mathbf{\theta})\right\rangle+\sum_{k=1}^{m} \frac{\nu}{\varepsilon}\left\langle\nabla_{\mathbf{w}_{k}}R_{S}(\mathbf{\theta}), \nabla_{\mathbf{w}_{k}}R_{S}(\mathbf{\theta})\right\rangle\right)\] \[=-\frac{m}{n^{2}}\mathbf{e}^{\intercal}\left(\mathbf{G}^{[a]}(\mathbf{ \theta})+\mathbf{G}^{[\mathbf{w}]}(\mathbf{\theta})\right)\mathbf{e}.\]
In the case where \(\gamma<1\) (\(\mathbf{\theta}\)-lazy regime), the following holds for all \(t>0\):
\[\lambda_{\min}\left(\mathbf{G}^{[a]}(\mathbf{\theta}(t))\right)\geq\frac{1}{2}\nu \varepsilon^{3}\lambda,\quad\lambda_{\min}\left(\mathbf{G}^{[\mathbf{w}]}(\mathbf{\theta} (t))\right)\geq\frac{1}{2}\nu^{3}\varepsilon\lambda,\]
for some universal constant \(\lambda>0\). Hence, we obtain that
\[\frac{\mathrm{d}}{\mathrm{d}t}R_{S}(\mathbf{\theta}(t)) =-\frac{m}{n^{2}}\mathbf{e}^{\intercal}\left(\mathbf{G}^{[a]}(\mathbf{\theta }(t))+\mathbf{G}^{[\mathbf{w}]}(\mathbf{\theta}(t))\right)\mathbf{e}\] \[\leq-\frac{2m}{n}\lambda_{\min}\left(\mathbf{G}(\mathbf{\theta}(t)) \right)R_{S}(\mathbf{\theta}(t))\] \[\leq-\frac{m}{n}\nu^{2}\varepsilon^{2}\left(\frac{\varepsilon}{ \nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)R_{S}(\mathbf{\theta}(t)),\]
then
\[R_{S}(\mathbf{\theta}(t))\leq\exp\left(-\frac{m}{n}\nu^{2}\varepsilon^{2}\left( \frac{\varepsilon}{\nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)t\right) R_{S}(\mathbf{\theta}(0)). \tag{5.8}\]
The following relation
\[\lim_{m\to\infty}\sup_{t\in[0,+\infty)}\frac{\|\mathbf{\theta}(t)-\mathbf{\theta}(0) \|_{2}}{\|\mathbf{\theta}(0)\|_{2}}=0, \tag{5.9}\]
is illustrated through an intuitive scaling analysis. Since
\[\frac{\mathrm{d}}{\mathrm{d}t}R_{S}(\mathbf{\theta}(t))=-\frac{\varepsilon}{\nu} \left\|\nabla_{\mathbf{\theta}_{a}}R_{S}(\mathbf{\theta}(t))\right\|_{2}^{2}-\frac{ \nu}{\varepsilon}\left\|\nabla_{\mathbf{\theta}_{\mathbf{w}}}R_{S}(\mathbf{\theta}(t)) \right\|_{2}^{2}\sim-\frac{m}{n}\nu^{2}\varepsilon^{2}\left(\frac{\varepsilon }{\nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)R_{S}(\mathbf{\theta}(t)),\]
then we have that
\[R_{S}(\mathbf{\theta}(t))\sim\exp\left(-\frac{m}{n}\nu^{2}\varepsilon^{2}\left( \frac{\varepsilon}{\nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)t\right) R_{S}(\mathbf{\theta}(0)),\]
and
\[\left\|\nabla_{\mathbf{\theta}_{a}}R_{S}(\mathbf{\theta}(t))\right\|_{2} \sim\sqrt{\frac{m}{n}\nu^{3}\varepsilon\left(\frac{\varepsilon}{ \nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)}\sqrt{R_{S}(\mathbf{\theta}(t))}\] \[\sim\sqrt{\frac{m}{n}\nu^{3}\varepsilon\left(\frac{\varepsilon}{ \nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)}\exp\left(-\frac{m}{2n}\nu^ {2}\varepsilon^{2}\left(\frac{\varepsilon}{\nu}\lambda+\frac{\nu}{\varepsilon }\lambda\right)t\right)\sqrt{R_{S}(\mathbf{\theta}(0))},\] \[\left\|\nabla_{\mathbf{\theta}_{\mathbf{w}}}R_{S}(\mathbf{\theta}(t))\right\|_{2} \sim\sqrt{\frac{m}{n}\nu\varepsilon^{3}\left(\frac{\varepsilon}{ \nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)}\sqrt{R_{S}(\mathbf{\theta}(t))}\]
\[\sim\sqrt{\frac{m}{n}\nu\varepsilon^{3}\left(\frac{\varepsilon}{\nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)}\exp\left(-\frac{m}{2n}\nu^{2}\varepsilon^{2}\left(\frac{\varepsilon}{\nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)t\right)\sqrt{R_{S}(\boldsymbol{\theta}(0))},\]
both hold, hence
\[\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}(0)\|_{2} \leq\|\boldsymbol{\theta}_{a}(t)-\boldsymbol{\theta}_{a}(0)\|_{2 }+\|\boldsymbol{\theta}_{\boldsymbol{w}}(t)-\boldsymbol{\theta}_{\boldsymbol{ w}}(0)\|_{2}\] \[\leq\frac{\varepsilon}{\nu}\int_{0}^{t}\left\|\nabla_{ \boldsymbol{\theta}_{a}}R_{S}(\boldsymbol{\theta}(s))\right\|_{2}\mathrm{d}s +\frac{\nu}{\varepsilon}\int_{0}^{t}\left\|\nabla_{\boldsymbol{\theta}_{w}}R_{S }(\boldsymbol{\theta}(s))\right\|_{2}\mathrm{d}s\] \[\leq\frac{\varepsilon}{\nu}\int_{0}^{\infty}\left\|\nabla_{ \boldsymbol{\theta}_{a}}R_{S}(\boldsymbol{\theta}(s))\right\|_{2}\mathrm{d}s +\frac{\nu}{\varepsilon}\int_{0}^{\infty}\left\|\nabla_{\boldsymbol{\theta}_{ w}}R_{S}(\boldsymbol{\theta}(s))\right\|_{2}\mathrm{d}s\] \[\lesssim\left(\sqrt{\frac{\varepsilon}{\nu}}+\sqrt{\frac{\nu}{ \varepsilon}}\right)\sqrt{\frac{n}{m\nu^{2}\varepsilon^{2}\left(\frac{ \varepsilon}{\nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)}}\sqrt{R_{S}( \boldsymbol{\theta}(0))},\]
and
\[\left\|\boldsymbol{\theta}(0)\right\|_{2}\sim\sqrt{m},\]
hence
\[\frac{\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}(0)\|_{2}}{ \left\|\boldsymbol{\theta}(0)\right\|_{2}} \lesssim\left(\sqrt{\frac{\varepsilon}{\nu}}+\sqrt{\frac{\nu}{ \varepsilon}}\right)\sqrt{\frac{n}{m^{2}\nu^{2}\varepsilon^{2}\left(\frac{ \varepsilon}{\nu}\lambda+\frac{\nu}{\varepsilon}\lambda\right)}}\sqrt{R_{S}( \boldsymbol{\theta}(0))} \tag{5.10}\] \[\lesssim\sqrt{\frac{n}{m^{2}\nu^{2}\varepsilon^{2}}}\sqrt{R_{S}( \boldsymbol{\theta}(0))}.\]
The rigorous statements of relations (5.8) and (5.9) are given in Theorem 5.
In the case where \(\gamma\geq 1\) and \(\gamma^{\prime}>\gamma-1\) (\(\boldsymbol{w}\)-lazy regime), the following holds for all \(t>0\):
\[\lambda_{\min}\left(\boldsymbol{G}^{[a]}(\boldsymbol{\theta}(t))\right)\geq \frac{1}{2}\nu\varepsilon^{3}\lambda,\]
for some universal constant \(\lambda>0\). Hence, we have
\[\frac{\mathrm{d}}{\mathrm{d}t}R_{S}(\boldsymbol{\theta}(t)) =-\frac{m}{n^{2}}\boldsymbol{e}^{\intercal}\left(\boldsymbol{G}^{[ a]}(\boldsymbol{\theta}(t))+\boldsymbol{G}^{[\boldsymbol{w}]}(\boldsymbol{\theta}(t)) \right)\boldsymbol{e}\] \[\leq-\frac{2m}{n}\lambda_{\min}\left(\boldsymbol{G}^{[a]}( \boldsymbol{\theta}(t))\right)R_{S}(\boldsymbol{\theta}(t))\] \[\leq-\frac{m}{n}\nu^{2}\varepsilon^{2}\frac{\varepsilon}{\nu} \lambda R_{S}(\boldsymbol{\theta}(t))\] \[=-\frac{m}{n}\nu\varepsilon^{3}\lambda R_{S}(\boldsymbol{\theta}( t)),\]
thus the following holds
\[R_{S}(\boldsymbol{\theta}(t))\leq\exp\left(-\frac{m\nu\varepsilon^{3}\lambda t }{n}\right)R_{S}(\boldsymbol{\theta}(0)),\]
and (5.9) does not hold anymore. However, we still have
\[\lim_{m\to\infty}\sup_{t\in[0,+\infty)}\frac{\|\boldsymbol{\theta}_{\boldsymbol{w}} (t)-\boldsymbol{\theta}_{\boldsymbol{w}}(0)\|_{2}}{\|\boldsymbol{\theta}_{ \boldsymbol{w}}(0)\|_{2}}=0, \tag{5.11}\]
and it can also be illustrated through an intuitive scaling analysis. Since
\[\frac{\mathrm{d}}{\mathrm{d}t}R_{S}(\boldsymbol{\theta}(t))=-\frac{\varepsilon }{\nu}\left\|\nabla_{\boldsymbol{\theta}_{a}}R_{S}(\boldsymbol{\theta}(t)) \right\|_{2}^{2}-\frac{\nu}{\varepsilon}\left\|\nabla_{\boldsymbol{\theta}_{ \boldsymbol{w}}}R_{S}(\boldsymbol{\theta}(t))\right\|_{2}^{2}\sim-\frac{m}{n} \nu\varepsilon^{3}\lambda R_{S}(\boldsymbol{\theta}(t)),\]
then
\[\left\|\nabla_{\boldsymbol{\theta}_{\boldsymbol{w}}}R_{S}( \boldsymbol{\theta}(t))\right\|_{2} \sim\sqrt{\frac{m}{n}\varepsilon^{4}\lambda}\sqrt{R_{S}( \boldsymbol{\theta}(t))}\] \[\sim\sqrt{\frac{m}{n}\varepsilon^{4}\lambda}\exp\left(-\frac{m \nu\varepsilon^{3}\lambda t}{2n}\right)\sqrt{R_{S}(\boldsymbol{\theta}(0))},\]
hence
\[\|\boldsymbol{\theta}_{\boldsymbol{w}}(t)-\boldsymbol{\theta}_{ \boldsymbol{w}}(0)\|_{2} \leq\frac{\nu}{\varepsilon}\int_{0}^{t}\left\|\nabla_{\boldsymbol{ \theta}_{\boldsymbol{w}}}R_{S}(\boldsymbol{\theta}(s))\right\|_{2}\mathrm{d}s\] \[\leq\frac{\nu}{\varepsilon}\int_{0}^{\infty}\left\|\nabla_{ \boldsymbol{\theta}_{\boldsymbol{w}}}R_{S}(\boldsymbol{\theta}(s))\right\|_{2} \mathrm{d}s\] \[\lesssim\sqrt{\frac{n}{m\varepsilon^{4}\lambda}}\sqrt{R_{S}( \boldsymbol{\theta}(0))},\]
and as
\[\left\|\boldsymbol{\theta}_{\boldsymbol{w}}(0)\right\|_{2}\sim\sqrt{m},\]
then
\[\frac{\|\boldsymbol{\theta}_{\boldsymbol{w}}(t)-\boldsymbol{\theta}_{ \boldsymbol{w}}(0)\|_{2}}{\left\|\boldsymbol{\theta}_{\boldsymbol{w}}(0) \right\|_{2}} \lesssim\sqrt{\frac{n}{m^{2}\varepsilon^{4}\lambda}}\sqrt{R_{S}( \boldsymbol{\theta}(0))} \tag{5.12}\] \[\lesssim\sqrt{\frac{n}{m^{2}\varepsilon^{4}}}\sqrt{R_{S}( \boldsymbol{\theta}(0))}.\]
The rigorous statements of relation (5.11) are given in Theorem 6. To end this part, we provide a sketch of the proofs for Theorem 1, see Figure 5.
### Condensed Regime
We remark that the \(\{a_{k},\boldsymbol{w}_{k}\}_{k=1}^{m}\) dynamics
\[\frac{\mathrm{d}a_{k}}{\mathrm{d}t} =-\frac{\varepsilon}{\nu}\frac{1}{n}\sum_{i=1}^{n}e_{i}\frac{ \sigma(\varepsilon\boldsymbol{w}_{k}^{\intercal}\boldsymbol{x}_{i})}{ \varepsilon}, \tag{5.13}\] \[\frac{\mathrm{d}\boldsymbol{w}_{k}}{\mathrm{d}t} =-\frac{\nu}{\varepsilon}\frac{1}{n}\sum_{i=1}^{n}e_{i}a_{k} \sigma^{(1)}(\varepsilon\boldsymbol{w}_{k}^{\intercal}\boldsymbol{x}_{i}) \boldsymbol{x}_{i},\]
is coupled, in the sense that the solution of at least one equation in the system depends on the other solutions, and a coupled system is usually hard to solve.
However, in the condensed regime, as \(\varepsilon\ll 1\) and \(\varepsilon\nu\ll\frac{1}{m}\), the evolution of \(\{e_{i}\}_{i=1}^{n}\) is slow enough that it remains close to \(\{-y_{i}\}_{i=1}^{n}\) over a period of time \(T>0\) at the initial stage, hence (5.13) approximately reads
\[\frac{\mathrm{d}a_{k}}{\mathrm{d}t} \approx\frac{\varepsilon}{\nu}\frac{1}{n}\sum_{i=1}^{n}y_{i} \boldsymbol{w}_{k}^{\intercal}\boldsymbol{x}_{i}=\frac{\varepsilon}{\nu} \boldsymbol{w}_{k}^{\intercal}\boldsymbol{z}, \tag{5.14}\] \[\frac{\mathrm{d}\boldsymbol{w}_{k}}{\mathrm{d}t} \approx\frac{\nu}{\varepsilon}\frac{1}{n}\sum_{i=1}^{n}y_{i}a_{k} \sigma^{(1)}(0)\boldsymbol{x}_{i}=\frac{\nu}{\varepsilon}a_{k}\boldsymbol{z},\]
and the coupled dynamics is reduced to linear dynamics.
Figure 5: Sketch of proof for Theorem 1.
We are able to solve the linear dynamics (5.14) explicitly (Proposition 7), whose solutions read: For each \(k\in[m]\), under the initial condition \(\left[\nu a_{k}(0),\varepsilon\mathbf{w}_{k}^{\intercal}(0)\right]^{\intercal}= \left[\nu a_{k}^{0},(\varepsilon\mathbf{w}_{k}^{0})^{\intercal}\right]^{\intercal}\), we obtain that
\[\nu a_{k}(t) =\nu\left(\frac{1}{2}\exp(\|\mathbf{z}\|_{2}t)+\frac{1}{2}\exp(-\|\bm {z}\|_{2}t)\right)a_{k}^{0} \tag{5.15}\] \[\quad+\varepsilon\left(\frac{1}{2}\exp(\|\mathbf{z}\|_{2}t)-\frac{1}{ 2}\exp(-\|\mathbf{z}\|_{2}t)\right)\left\langle\mathbf{w}_{k}^{0},\hat{\mathbf{z}}\right\rangle,\] \[\varepsilon\mathbf{w}_{k}(t) =\nu\left(\frac{1}{2}\exp(\|\mathbf{z}\|_{2}t)-\frac{1}{2}\exp(-\| \mathbf{z}\|_{2}t)\right)a_{k}^{0}\hat{\mathbf{z}}\] \[\quad+\varepsilon\left(\frac{1}{2}\exp(\|\mathbf{z}\|_{2}t)+\frac{1}{ 2}\exp(-\|\mathbf{z}\|_{2}t)\right)\left\langle\mathbf{w}_{k}^{0},\hat{\mathbf{z}}\right\rangle \hat{\mathbf{z}}\] \[\quad-\varepsilon\left\langle\mathbf{w}_{k}^{0},\hat{\mathbf{z}}\right\rangle \hat{\mathbf{z}}+\varepsilon\mathbf{w}_{k}^{0}.\]
We remark that \(\{a_{k},\mathbf{w}_{k}\}_{k=1}^{m}\) are the normalized parameters, then \(\{\nu a_{k},\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\) corresponds to the original parameters in (3.4).
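As a consistency check, the closed-form solution (5.15) can be compared against a direct numerical integration of the linearized dynamics (5.14); the sketch below does this for a single neuron with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
z = rng.standard_normal(d)
z_norm = np.linalg.norm(z)
zhat = z / z_norm
nu, eps = 1e-4, 1e-2                     # illustrative scales
a0, w0 = rng.standard_normal(), rng.standard_normal(d)

def closed_form(t):
    # Solution (5.15) of the linearized dynamics (5.14) for one neuron.
    ch, sh = np.cosh(z_norm * t), np.sinh(z_norm * t)
    na = nu * ch * a0 + eps * sh * (w0 @ zhat)
    ew = (nu * sh * a0 + eps * ch * (w0 @ zhat)) * zhat \
         + eps * (w0 - (w0 @ zhat) * zhat)
    return na, ew

# forward-Euler integration of (5.14), written in the original scale
na, ew = nu * a0, eps * w0
dt, t_end = 1e-4, 2.0
for _ in range(int(t_end / dt)):
    na_dot = ew @ z                      # d(nu a)/dt = (eps w) . z
    ew_dot = na * z                      # d(eps w)/dt = (nu a) z
    na, ew = na + dt * na_dot, ew + dt * ew_dot

na_cf, ew_cf = closed_form(t_end)
print(abs(na - na_cf), np.linalg.norm(ew - ew_cf))  # both should be small
```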
In the case where \(\gamma>1\) and \(0\leq\gamma^{\prime}<\gamma-1\) (\(\mathbf{w}\)-lag regime), as \(\varepsilon\gg\nu\), the magnitude of \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\) is much larger than that of \(\{\nu a_{k}\}_{k=1}^{m}\) at \(t=0\). Based on (5.15), it can be observed that \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\) remains dormant until \(\{\nu a_{k}\}_{k=1}^{m}\) attain a magnitude that is commensurate with that of \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\), and only then do the magnitudes of \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\) and \(\{\nu a_{k}\}_{k=1}^{m}\) experience exponential growth simultaneously. In order for the initial condensation of \(\mathbf{\theta}_{\mathbf{w}}\) to be observed, one has to wait for some growth in the magnitude of \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\), hence \(T\sim\Omega(\log(m))\). More importantly, compared with the \(\mathbf{w}\)-lazy regime, the condition \(\gamma^{\prime}<\gamma-1\) enforces \(\varepsilon\ll\frac{1}{\sqrt{m}}\), thus providing enough room for \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\) to grow in the \(\mathbf{z}\)-direction before \(\{e_{i}\}_{i=1}^{n}\) deviates away from \(\{-y_{i}\}_{i=1}^{n}\).
In the case where \(\gamma>1\) and \(\gamma^{\prime}<0\) (\(a\)-lag regime), as \(\varepsilon\ll\nu\), the initial magnitudes of \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\) are much smaller than that of \(\{\nu a_{k}\}_{k=1}^{m}\). Based on (5.15), it takes only \(T\sim\Omega(1)\) for \(\{\varepsilon\mathbf{w}_{k}\}_{k=1}^{m}\) to attain a magnitude comparable to \(\{\nu a_{k}\}_{k=1}^{m}\), and this rapid growth leads to the observation of initial condensation towards the \(\mathbf{z}\)-direction, where \(\gamma>1\) confines \(\{e_{i}\}_{i=1}^{n}\) to a small neighbourhood of \(\{-y_{i}\}_{i=1}^{n}\) for a period of time which is at least of order one. To end this part, we provide a sketch of the proofs for Theorem 2, see Figure 6.
## 6 Conclusions
In this paper, we present the phase diagram of initial condensation for two-layer NNs with a wide class of smooth activation functions. We demonstrate the distinct features exhibited by NNs in the linear regime area and condensed regime
area, and we provide a complete and detailed analysis of the transition across the boundary (critical regime) in the phase diagram. Moreover, in comparison with the work of Luo et al. [16], we identify the direction towards which the weight parameters condense, and, in contrast to the work of Zhou et al. [31], we give estimates of the time required for initial condensation to occur. The phase diagram at the initial stage is crucial in that it is a valuable tool for understanding the implicit regularization effect provided by weight initialization schemes, and it serves as a cornerstone upon which future works can provide a thorough characterization of the dynamical behavior of general NNs in each of the identified regimes.
In the future, we endeavor to establish a framework for the analysis of initial condensation through a series of papers. In an upcoming publication, we plan to extend this formalism to Convolutional Neural Networks (CNNs) [20] and apply it to investigate the phenomenon of condensation for a wide range of NN architectures, including fully-connected deep networks (DNNs) and Residual Networks (ResNets) [9].
## Acknowledgments
This work is sponsored by the National Key R&D Program of China Grant No. 2022YFA1008200 (Z. X., T. L.), the Shanghai Sailing Program, the Natural Science Foundation of Shanghai Grant No. 20ZR1429000 (Z. X.), the National Natural Science Foundation of China Grant No. 62002221 (Z. X.), the National Natural Science Foundation of China Grant No. 12101401 (T. L.), Shanghai Municipal Science and Technology Key Project No. 22JC1401500 (T. L.), Shanghai Municipal of Science and Technology Major Project No. 2021SHZDZX0102, and the HPC of School of Mathematical Sciences and the Student Innovation Center, and the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University.
Figure 6: Sketch of proof for Theorem 2.
|
2302.12629 | Quantifying Noise Limitations of Neural Network Segmentations in
High-Resolution Transmission Electron Microscopy | Motivated by the need for low electron dose transmission electron microscopy
imaging, we report the optimal frame dose (i.e. $e^-/A^{2}$) range for object
detection and segmentation tasks with neural networks. The MSD-net architecture
shows promising abilities over the industry standard U-net architecture in
generalising to frame doses below the range included in the training set, for
both simulated and experimental images. It also presents a heightened ability
to learn from lower dose images. The MSD-net displays mild visibility of a Au
nanoparticle at 20-30 $e^-/A^{2}$, and converges at 200 $e^-/A^{2}$ where a
full segmentation of the nanoparticle is achieved. Between 30 and 200
$e^-/A^{2}$ object detection applications are still possible. This work also
highlights the importance of modelling the modulation transfer function when
training with simulated images for applications on images acquired with
scintillator based detectors such as the Gatan Oneview camera. A parametric
form of the modulation transfer function is applied with varying ranges of
parameters, and the effects on low electron dose segmentation is presented. | Matthew Helmi Leth Larsen, William Bang Lomholdt, Cuauhtemoc Nuñez Valencia, Thomas W. Hansen, Jakob Schiøtz | 2023-02-24T13:46:05Z | http://arxiv.org/abs/2302.12629v2 | Quantifying Noise Limitations of Neural Network Segmentations in High-Resolution Transmission Electron Microscopy.
###### Abstract
Motivated by the need for low electron dose transmission electron microscopy imaging, we report the optimal frame dose (_i.e._ \(e^{-}/\text{\AA}^{2}\)) range for object detection and segmentation tasks with neural networks. The MSD-net architecture shows promising abilities over the industry standard U-net architecture in generalising to frame doses below the range included in the training set, for both simulated and experimental images. It also presents a heightened ability to learn from lower dose images. The MSD-net displays mild visibility of a Au nanoparticle at 20-30 \(e^{-}/\text{\AA}^{2}\), and converges at 200 \(e^{-}/\text{\AA}^{2}\), where a full segmentation of the nanoparticle is achieved. Between 30 and 200 \(e^{-}/\text{\AA}^{2}\), object detection applications are still possible. This work also highlights the importance of modelling the modulation transfer function when training with simulated images for applications on images acquired with scintillator based detectors such as the Gatan Oneview camera. A parametric form of the modulation transfer function is applied with varying ranges of parameters, and the effects on low electron dose segmentation are presented.
keywords: HR-TEM, Machine Learning, Modulation Transfer Function, Signal-to-noise, Beam damage
## 1 Introduction
High-resolution transmission electron microscopy (HR-TEM) is a primary method for characterising materials at the atomic scale, and one from which an abundance of data can be obtained. HR-TEM can provide greater temporal resolution than scanning transmission electron microscopy by illuminating the entire sample simultaneously. It does this, however, at the cost of the signal-to-noise ratio (SNR). Increasing the frame dose can improve the interpretability of each image, but doing so is usually undesired in order to avoid electron beam induced effects that are yet to be completely understood [1; 2; 3]. Averaging many images is another method to increase the SNR. This method, however, lowers the temporal resolution and can be tedious, requiring complex image alignment to ensure sensible results. Image alignment is itself frame dose dependent, as aligning low SNR images becomes difficult. Addressing SNR related issues has led to a quest to develop denoising methods [4; 5], often using neural networks [6; 7].
The field has seen a steady increase in applying machine learning solutions to solve various tasks of analysing and interpreting data. The importance of including machine learning in a standard workflow for HR-TEM characterisation is highlighted in multiple works [8; 9]. Pipelines for training neural networks for segmentation with hand labelled experimental data have been developed, such as the pipeline by Groschner _et al._[10], where segmented nanoparticles are classified to acquire sufficient statistics on various classes of nanoparticles. The segmentation of nanoparticles is also useful for tracking dynamic behaviour across frames [11]. Simulated images provide possibilities for large amounts of accurately labelled images for training and have been used to identify and analyse atomic columns in experimental HR-TEM sequences [12; 13]. Vincent _et al._ also used simulated images to train neural networks for denoising [7]. Due to the successes of deep learning powered analysis of HR-TEM images, many are seeking to understand the important aspects of optimising the generalisability and applications of deep learning models, and attempt to determine the best models available in comparison to each other and to other thresholding or clustering methods [14; 15].
In this work we look to continue the search for optimal simulated training data and neural networks, by considering the specific task of low SNR segmentation. We will report the optimal frame dose range for reliable segmentations by neural networks and how to control that range by tuning simulations. For this task we focus on the segmentation of metallic nanoparticles, more specifically CeO\({}_{2}\) supported Au nanoparticles. Au nanoparticles serve as a useful system for studying catalytic properties of metallic nanoparticles [16].
## 2 Methods
This work utilises two different network architectures to perform the segmentations. The first is the well recognised U-net [17] architecture used by Madsen _et al._[12], with the only modification that the skip connections are now concatenations rather than element-wise additions [18]. The other is the MSD-net introduced by Pelt _et al._[19], which has been highlighted as robust against low SNR images. Both are convolutional neural networks, but they differ in how they capture information at varying spatial distances in the image. The U-net owes its characteristic U-shaped architecture to numerous down-sampling and up-sampling layers, which spread the convolutional kernel over patterns with large spatial coverage. The MSD-net, in contrast, maintains the same resolution throughout all layers but dilates the convolutional kernel to spread the weights over a larger spatial region. Specific hyper-parameters regarding the network architectures can be seen in the supplementary information section A.1.
To train the neural networks, supported nanoparticles are constructed with the Atomic Simulation Environment (ASE) [20] and HR-TEM images are simulated using abTEM [21]. For this work, Au face centred cubic structures of varying sizes are generated so that the [110] crystal direction is aligned with the optical axis, with a slight random tilt off zone axis of up to 3\({}^{\circ}\). At a random layer from the centre of the nanoparticle the [111] direction is truncated, effectively slicing a (111) plane. The exposed face of the FCC structure is attached to a (111) plane of a CeO\({}_{2}\) slab. These systems are randomly rotated about the optical axis, and contain varying sizes of nanoparticles. This replicates realistic interfaces between Au and CeO\({}_{2}\) identified by Liu _et al._[16]. The image simulation then applies the parameters shown in Table 1. This is a relatively cheap operation, since the expensive part of the image simulation is generating the exitwave by computing the multislice algorithm [22]. Applying imaging imperfections such as the contrast transfer function (CTF) and the modulation transfer function (MTF) can be done multiple times on the same exitwave, which generates multiple images of the same nanoparticle with varying imaging conditions. These images are pre-generated and stored to allow for a controlled comparison between networks on the exact same dataset. Training epochs cycle through the different sets of images of the same atomic system. Each set of images is referred to as an image epoch. With 300 training epochs and 10 image epochs, each image is reused 30 times in training.
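As a schematic illustration of this reuse, the sketch below applies one random draw of a CTF and Poisson shot noise to a stored exitwave. It is a hedged stand-in for the actual abTEM pipeline: the function name and normalisation are our own, the focal spread, blur, and MTF of Table 1 are omitted here (the MTF is treated below), and the exitwave intensity is assumed normalised to unit mean flux.

```python
import numpy as np

def random_image_from_exitwave(exitwave, sampling, rng):
    """One hypothetical draw of imaging imperfections applied to a stored exitwave.

    `exitwave` is a complex array from a multislice simulation and `sampling`
    is in Angstrom/pixel; parameter ranges follow Table 1. Focal spread, blur,
    and the MTF are omitted here for brevity.
    """
    ny, nx = exitwave.shape
    qx = np.fft.fftfreq(nx, sampling)            # spatial frequencies in 1/Angstrom
    qy = np.fft.fftfreq(ny, sampling)
    q2 = qx[None, :] ** 2 + qy[:, None] ** 2
    wavelength = 0.0197                          # Angstrom, ~300 keV electrons

    df = rng.uniform(-200, 200)                  # defocus in Angstrom
    cs = rng.uniform(0, 12.45e4)                 # spherical aberration in Angstrom
    chi = np.pi * wavelength * df * q2 \
        + 0.5 * np.pi * cs * wavelength ** 3 * q2 ** 2
    psi = np.fft.ifft2(np.fft.fft2(exitwave) * np.exp(-1j * chi))
    intensity = np.abs(psi) ** 2

    dose = 10 ** rng.uniform(2, 6)               # frame dose in e-/Angstrom^2
    electrons_per_pixel = dose * sampling ** 2
    return rng.poisson(intensity / intensity.mean() * electrons_per_pixel)
```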
Mask labels are binary images separating the Au nanoparticle from the surrounding vacuum and the CeO\({}_{2}\) support. This is generated by computing the convex hull of the atomic coordinates. The neural networks are trained to map a single HR-TEM image to a binary mask label image, which will provide the pixels containing the nanoparticle separate from the pixels containing substrate and vacuum. An example of this is shown in Fig. A.10. Once the network is trained it can be applied to an image, either simulated or experimental, and will return a probability map. Each pixel will be classified to belong to either the nanoparticle or the background (vacuum or substrate) class at some probability. This is referred to as the network inference or prediction and a threshold is applied at 0.9 to generate a binary predicted mask of the pixels that the network classify with at least 90% confidence.
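A possible construction of such a convex-hull label, sketched with SciPy (the function and its conventions are our own illustration, assuming projected atomic positions in the image plane), is:

```python
import numpy as np
from scipy.spatial import Delaunay

def convex_hull_mask(xy_coords, shape, sampling):
    """Binary mask of pixels inside the convex hull of projected atom positions.

    xy_coords: (N, 2) projected Au positions in Angstrom, in (x, y) order;
    shape: image shape; sampling: Angstrom per pixel.
    """
    hull = Delaunay(xy_coords / sampling)        # hull in pixel coordinates
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pixels = np.stack([xx.ravel(), yy.ravel()], axis=1)
    inside = hull.find_simplex(pixels) >= 0      # -1 means outside the hull
    return inside.reshape(shape)
```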
Experimental images of a CeO\({}_{2}\) supported Au nanoparticle were acquired on an image corrected FEI Titan 80-300 ECELL at 300 keV, using a Gatan OneView detector. The Au nanoparticle is imaged initially at a frame dose of \(10\ e^{-}/\text{\AA}^{2}\), where noise dominates the images. The dose-rate is continuously increased to above 1000 \(e^{-}/\text{\AA}^{2}\), where atomic columns are relatively visible. The Au nanoparticle is situated in vacuum at \(200^{\circ}C\), so it is expected that the nanoparticle is relatively inert. We refer to work by Lomholdt _et al._[23] for experimental details, including the SNR at varying frame doses.
Our approach here is to utilise the segmentation of the final frame of this continuously increasing dose-rate series as a pseudo-ground truth to gauge the performance of the network at lower frame doses. Due to drift in the images of the nanoparticle, pixel-wise scores such as the F1-Score will not be used; instead, we will measure the area of segmentation, where the ground truth area will be the target.
For scintillating material based detectors, such as the Gatan OneView, the approximation of pure Poissonian noise in HR-TEM images breaks down, since the spectral profile of the noise is altered by the modulation transfer function (MTF) [24; 25]. This function is the Fourier transform of the point spread function, which is an intrinsic property of the scintillating material [26]. Here we study the parametric form from Lee _et al._[27], shown in Eq. 1.
\[MTF(\tilde{q})=(1-C)\cdot\frac{1}{1+(\frac{\tilde{q}}{c_{0}})^{c_{3}}}+C \tag{1}\]
where the spatial frequencies are normalised by the Nyquist frequency, which is related to the sampling \(s\) of the detector by \(\tilde{q}=q/q_{N}=2\cdot q\cdot s\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline
Parameter & L & U & Unit \\ \hline
Acceleration voltage & \multicolumn{2}{c}{300} & keV \\
Defocus (\(\Delta f\)) & -200 & 200 & Å \\
Spherical aberration (\(C_{s}\)) & 0 & 12.45 & \(\mu\)m \\
Focal spread & 5 & 20 & Å \\
Blur & 0.1 & 0.8 & Å \\
Frame dose & \(10^{2}\) & \(10^{6}\) & \(e^{-}/\text{\AA}^{2}\) \\
Resolution & 0.07 & 0.08 & Å/pixel \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Microscope parameters. For each image series, a set of microscope parameters is drawn within the limits given here, except the acceleration voltage, which is kept constant. L and U denote the lower and upper limits, respectively. All distributions are uniform, except for the electron dose, which is exponential.
The limits of the function are 1 for \(\tilde{q}\to 0\), due to normalisation, and \(C\) for \(\tilde{q}\rightarrow\infty\). The function is fitted following the noise method described in Ref. [26].
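In code, Eq. 1 and its fit can be sketched as follows; the initial guess, bounds, and the assumption that the azimuthally averaged noise profile is normalised to \(1\) at \(\tilde{q}\to 0\) are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def mtf(q_tilde, c0, c3, C):
    """Parametric MTF of Eq. 1; q_tilde is q normalised by the Nyquist frequency."""
    return (1 - C) / (1 + (q_tilde / c0) ** c3) + C

def fit_mtf(q_tilde, noise_profile):
    """Fit Eq. 1 to an azimuthally averaged noise profile (noise method of Ref. [26]).

    `noise_profile` is assumed normalised so that it tends to 1 as q_tilde -> 0.
    """
    p0 = (0.3, 2.5, 0.05)                    # illustrative starting point
    bounds = ([0, 0.5, 0], [1, 10, 1])       # illustrative parameter bounds
    popt, _ = curve_fit(mtf, q_tilde, noise_profile, p0=p0, bounds=bounds)
    return dict(zip(('c0', 'c3', 'C'), popt))
```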
## 3 Results
The following section presents how to achieve reliable and robust low dose segmentation of HR-TEM images. We first compare two neural network architectures and their ability to differentiate signal from noise; the networks are trained on simulated data with different electron dose ranges to gauge their abilities to learn from training datasets of varying difficulty. We then present a detailed analysis of how to tune performance by optimising the MTF.
### Dataset Dose Limits and Neural Network Comparison
An obvious first step in approaching segmentation of low SNR data is to investigate the performance of various architectures. Here we use the experimental image series to benchmark the U-net against the MSD-net and identify the best neural network for low SNR HR-TEM segmentation. We test the neural networks' abilities to train on two different ranges of frame dose presented in Table 2. The low frame dose range covers the range of the experimental data series in this work, whereas the high frame dose range feeds the network much clearer simulated images, making it an easier dataset to learn.
Comparisons will be made by plotting the segmented area versus frame dose to identify when each network begins to detect the nanoparticle and when the segmented area converges, _i.e._ when the network identifies the entire nanoparticle. The final frame segmentation should be understood as a pseudo-ground truth. The aim is to segment a similar area as in the ground truth at as low a frame dose as possible. The area may not be exact due to possible morphological changes in the nanoparticle. The morphological changes and slight drift disallow the use of, for example, an F1-Score between a given frame and the ground truth for the experimental data series, since the overlap of the segmentations will not be sensible at a pixel-wise level. The ideal case of these plots will be a step function, meaning the entire nanoparticle is immediately identified, at some minimal frame dose.
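The evaluation metric just described reduces to a few lines; a hedged sketch (the names and the handling of the 50% criterion are ours) is:

```python
import numpy as np

def area_vs_dose(masks, doses, ground_truth_area):
    """Segmented area per frame and the dose where 50% of the particle is found.

    `masks` are thresholded (>= 0.9) network predictions per frame; the final,
    highest-dose frame provides `ground_truth_area` (a pseudo-ground truth).
    """
    areas = np.array([m.sum() for m in masks], dtype=float)
    frac = areas / ground_truth_area
    above = np.nonzero(frac >= 0.5)[0]
    dose_50 = doses[above[0]] if above.size else None
    return areas, dose_50
```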
The MSD-net presents an ability to generalise outside of the training data range, whereas the U-net only performs within the training data range. Training the MSD-net on the low frame dose range increases the visibility of the nanoparticle between 40-100 \(e^{-}/\text{\AA}^{2}\) by \(\sim\)50%, as seen in Fig. 1(a); however, it is also a noteworthy feature that the MSD-net is able to segment significant regions below the lower limit of the high frame dose range when trained on the high frame dose range. This proves a superior ability to separate signal from noise and generalise beyond the limits of the training set, which means the MSD-net is a better candidate for when data is limited. Fig. 1(b) visualises the improvement in low dose segmentation by overlapping the ground truth segmentation (cyan coloured mask), with the segmentation from the two networks trained on each dose range (colour coded mask). The MSD-net trained on both dose ranges shows a lower limit at 20-30 \(e^{-}/\text{\AA}^{2}\).
Predictions from the U-net are not as robust as those of the MSD-net. Fig. 2(b) shows that the U-net trained on the low frame dose range is not entirely reliable due to the difficulties in defining the edges of the nanoparticle. Comparing the low dose trained MSD-net segmentation at 49 \(e^{-}/\text{\AA}^{2}\) in Fig. 1(b) to the low dose trained U-net at 49 \(e^{-}/\text{\AA}^{2}\) in Fig. 2(b), the segmented area is similar; however, the edges are more well defined in the segmentation from the MSD-net.
The low dose trained U-net achieves some visibility a few \(e^{-}/\text{\AA}^{2}\) below the MSD-net, as seen in Fig. 2; however, the over-segmentation at higher dose highlights that this network is possibly being triggered by noise as well. In Fig. A.11, the F1-Score as a function of training epochs with simulated data highlights that the low dose range dataset becomes a harder problem for both neural networks to learn and generalise. Both networks show greater signs of overfitting. The MSD-net seems to handle more difficult problems better than the U-net. Finally, Fig. 3 compares the two networks trained on the high frame dose range data, showcasing the performance gain in the low frame dose regime, with the MSD-net segmenting 50% of the nanoparticle at a \(\sim\)70% lower frame dose, and both show a reliable convergence at 200 \(e^{-}/\text{\AA}^{2}\).
### Frame dose limit
Low dose segmentation performance relies on a proper modelling of the noise in the simulated data. The following will described how noise characteristics are extracted from experimental data and modelled in simulated data. Eq. 1 was fitted to the azimuthally averaged centred Fourier transform of the vacuum region in the images, as done in [26; 27]. See Fig. A.12 and A.13 for the fitted MTF of the first frame and last frame of the experimental image series, respectively. Fig. 4 presents a distribution of fitted parameters dependent on the frame dose of the experimental images. The parameter points with white dots are the fits with an \(R^{2}\geq 0.98\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Parameter & L & U & Unit \\ \hline High frame dose range & \(10^{2}\) & \(10^{6}\) & \(e^{-}/\text{\AA}^{2}\) \\ Low frame dose range & \(10^{1}\) & \(10^{4}\) & \(e^{-}/\text{\AA}^{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Frame dose ranges for simulated images.
Figure 1: Comparison of the MSD-net trained with simulated images within the high frame dose range to simulated images within the low frame dose range (see Table 2). (a) Segmented area of each model as a function of frame dose. Black dashed line represents the convergence of the area of segmentation, symbolising where the network segmented the entire nanoparticle. The minimum and maximum area beyond this point form the shaded grey bar as a visual aid for the target area of segmentation. Colour coded dashed lines for each model are shown, representing the frame dose at which the model achieves 50% segmentation of the nanoparticle. (b) Colour coded examples of the segmentation at \(49\ e^{-}/\mathrm{\SIUnitSymbolAngstrom}^{2}\) overlapped with the segmentation of the final frame (ground truth). This highlights the ability to achieve low dose segmentation.
Figure 2: Comparison of the U-net trained with simulated images within the high frame dose range to simulated images within the low frame dose range (see Table 2). (a) Segmented area of each model as a function of frame dose. Black dashed line represents the convergence of the area of segmentation, symbolising where the network segmented the entire nanoparticle. The minimum and maximum area beyond this point form the shaded grey bar as a visual aid for the target area of segmentation. Colour coded dashed lines for each model are shown, representing the frame dose at which the model achieves 50% segmentation of the nanoparticle. (b) Top: maximum segmentation overlapped with the segmentation of the final frame (ground truth). This highlights any over-segmented areas, due to difficulties in defining the boundaries of the nanoparticle. Bottom: Segmentation at \(49\ e^{-}/\mathrm{\SIUnitSymbolAngstrom}^{2}\) overlapped with the segmentation of the final frame (ground truth). This highlights the ability to achieve low dose segmentation.
It is vital to understand the role of each parameter in Eq. 1 to interpret the distributions in Fig. 4. The \(c_{0}\) parameter represents the \(\tilde{q}\) value of the half-maximum, which increases with frame dose and converges around 200 \(e^{-}/\text{\AA}^{2}\). This means that at frame doses above 200 \(e^{-}/\text{\AA}^{2}\), spatial frequencies up to \(\sim 0.27\cdot q_{N}\) are maintained at at least half maximum, but for lower doses the function is narrower. As a result, spatial frequencies above \(c_{0}\cdot q_{N}\) are effectively filtered out, and at lower frame doses this affects a larger part of the spectrum. The curvature of the function is consistently close to a Lorentzian form, as revealed by the \(c_{3}\) parameter (higher values approach a low pass step function). The \(C\) parameter reveals the change in the tail of the function, _i.e._ the \(q\rightarrow\infty\) limit. \(C\) is non-zero at lower frame dose and approaches \(0\) at higher doses. We interpret this as a fraction of the noise that is not subject to the point spread function of the scintillator, but is generated later in the detection process. We label this part of the noise "readout noise" although it may come from more than one source.
All parameters show most variation below \(200\)\(e^{-}/\text{\AA}^{2}\). The variation below this limit is likely due to the transition between readout noise and shot noise as the dominating noise source [28].
The readout noise appears after the scintillating material and is therefore not affected by the MTF. The contributions of the readout noise and shot noise are modelled separately in our simulations and in order to extract the fractional contribution of each noise source we look at the \(C\) dependency of the frame dose in electrons per pixel. This value represents the noise floor in the image. At higher frame dose this noise floor is washed out by shot noise, but at lower dose the readout noise dominates. The shot noise is modelled as a Poisson distribution, \(P(\lambda=N_{D})\), where \(N_{D}\) is the frame dose in electrons per pixel. The read out noise is also modelled as a Poisson distribution, \(P(\lambda=N_{0})\), where \(N_{0}\) is a constant noise floor. The total noise is a sum of the two Poisson distributions
\[P(\lambda=N_{D})+P(\lambda=N_{0})=P(\lambda=(N_{D}+N_{0})) \tag{2}\]
meaning that each pixel intensity, \(I_{x,y}\) in the final image is within the distribution
\[I_{x,y}=N_{D}+N_{0}\pm\sqrt{N_{D}+N_{0}}. \tag{3}\]
In Fig. 5, we extract \(N_{0}\) from the fractional contribution of the readout noise deviation to the total noise deviation.
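A sketch of this extraction, assuming (as our reading of Eqs. 2-3 suggests) that \(C\) equals the readout fraction of the total noise deviation, \(C=\sqrt{N_{0}/(N_{D}+N_{0})}\), and using placeholder data in place of the fitted \(C\) values of Fig. 5, is:

```python
import numpy as np
from scipy.optimize import curve_fit

def readout_fraction(N_D, N0):
    """Fraction of the total noise deviation contributed by readout noise,
    assuming two independent Poisson sources as in Eqs. 2-3 (our reading)."""
    return np.sqrt(N0 / (N_D + N0))

# N_D: frame dose converted to electrons per pixel; C: fitted tail of Eq. 1
N_D = np.logspace(-2, 2, 30)                     # placeholder dose axis
C = readout_fraction(N_D, 0.01) \
    + 0.01 * np.random.default_rng(3).standard_normal(30)  # placeholder data

(N0_fit,), _ = curve_fit(readout_fraction, N_D, C, p0=[0.05])
print(f'N0 ~ {N0_fit:.3f} electrons per pixel')
```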
The approach we have taken to model the variations of \(c_{0}\) and \(c_{3}\) is to randomise the parameters within a given range as done by Madsen et al. [12], which alters the spectral profile of the shot noise. The extracted \(N_{0}\) is varied at \(\pm 50\%\), _i.e._\(N_{0}\in[0.005,0.015]\) when applied to simulated images. In Fig. A.14, we show that with the fitted \(N_{0}\) applied as a separate Poissonian noise source, we are able to replicate the \(C\) parameter dependency on the frame dose from MTFs fitted to a simulated image series of vacuum at increasing frame dose. The parameters \(c_{0}\) and \(c_{3}\) show random values within the given range as expected.
Table 3 summarises 5 different MTF parameter ranges applied to 5 simulated datasets of identical atomic systems. All apply the microscope parameters from Table 1. An identical MSD-net is trained on each dataset. The first model, MTF 1, takes the parameters from Ref. [12]. Motivations behind the other MTF models will be explained as the results are discussed.
Figure 4: The fitted parameters of Eq. 1 for each frame in the data series. Fits with \(R^{2}\geq 0.98\) are marked by a white dot. The orange line represents the mean value.
Figure 3: Comparison of the MSD-net’s and U-net’s ability to learn to distinguish between signal and noise. The plot shows the segmented area of each model as a function of frame dose. Black dashed line represents the convergence of the area of segmentation, symbolising where the network segmented the entire nanoparticle. The minimum and maximum area beyond this point form the shaded grey bar as a visual aid for the target area of segmentation. Colour coded dashed lines for each model are shown, representing the frame dose at which the model achieves 50% segmentation of the nanoparticle.
Higher values of \(c_{0}\) in MTF 1 and MTF 2 assist the network in low frame dose segmentation. Fig. 6(a) presents the visibility of the entire nanoparticle as a function of frame dose from 3 MSD-net models trained on MTF models 1-3 individually. MTF 1 and MTF 2 show very similar performance, which highlights that it is not important that the form of the MTF is exactly Lorentzian. This was determined by ranging \(c_{3}\) over a larger range in MTF 2. Following this result, MTF 3 samples a range of \(c_{0}\) more suited to the fitted ranges in Fig. 4. This range proved detrimental to the low dose performance, delaying the visibility of the nanoparticle by \(\sim\)25% in electron dose.
Applying the range of \(c_{0}\) fitted from the experimental data, as in MTF 3, provides more refined segmentations at higher frame dose. The variations in the segmented area at higher doses for the MTF 3 trained model are smaller than those of MTF 1 and 2. The variations of MTF 1 and MTF 2 are shown by the shaded grey region. These variations can be due to difficulties in defining the boundaries between the nanoparticle and substrate/vacuum in the images. Fig. 6(b) shows the maximum segmented area of MTF 2 (top) and MTF 3 (bottom). Here it is seen that the MTF 2 trained model has minor difficulties in defining the border of the nanoparticle.
We speculate that preserving spatial frequencies up to around half-Nyquist is preferable for the networks to learn to differentiate signal from noise. In practice this means having a \(c_{0}\) that ranges up to or slightly above \(0.5\cdot q_{N}\). Values of \(c_{0}\) within the fitted range from experimental data are, however, also necessary for refining the boundaries of the segmented areas.
MTF 4 ranges between the lower limit of the fitted \(c_{0}\) from MTF 3 and the upper limits of \(c_{0}\) from MTF 1 and 2, and achieves the low dose performance of MTF 1 and 2 and the refined boundaries of MTF 3 at higher dose. MTF 5 samples larger \(c_{0}\), beyond half-Nyquist, with the same range width as MTF 1 and 2. Fig. 7(a) presents the performance of MTF models 2, 4, and 5. MTF 5 shows much more visibility at low dose frames but much more sporadic variations in the higher frame dose regime. Fig. 7(b) shows the maximum segmented area of MTF 5, which highlights its weakness in identifying the borders of the nanoparticle; the segmentation bleeds into the surrounding vacuum. This renders the low dose segmentations of MTF 5 unreliable, as it seems it is being triggered by noise in the vacuum. MTF 5 has such a high \(c_{0}\) range that it approaches a noise profile appropriate for direct electron counting detectors such as the Gatan K2/3 camera [25]. The maximum segmentation of MTF 4 in contrast shows very sharp separations between the nanoparticle and its surroundings. This proves that ranging \(c_{0}\) such that it covers the range fitted from experimental data, but also retains spatial frequencies up to half-Nyquist, is ideal for optimal segmentation performance across the entire frame dose range.
We note that in all cases the area of segmentation converges at approximately 200 \(e^{-}/\AA^{2}\), depicted by the dashed black line. This is the first reported lower limit of frame dose for a reliable full segmentation.
The overlap in the middle of Figs. 6(b) and 7(b) shows the morphological changes in the nanoparticle but also highlights that both are segmentations of the same nanoparticle. Here we also emphasise that the segmentation is sensible, and that this is at a level where a human interpreter would have difficulties being certain of the presence of the entire nanoparticle.
The segmentations below 200 \(e^{-}/\AA^{2}\) are not full segmentations and cannot be used to, for example, measure the area, but can still be used for object detection purposes and regional Fourier transform extraction. For these purposes, 50% segmentation would be a sensible minimum, represented by the colour-coded dashed lines in Figs. 1(a), 2(a), 6(a), and 7(a).
To further prove the performance of the MTF 4 model, we show that an MSD-net trained with the MTF 4 model performs better on a simulated dataset with the MTF 3 model applied, compared to an MSD-net trained with the MTF 3 model. An image series was simulated to replicate the experimental series.
Figure 5: The correlation of the \(C\) parameter of Eq. 1 to the frame dose in electrons per pixel. The fraction of readout noise, \(N_{0}\), is extracted from its fractional contribution to the total noise. The orange line shows the fitted curve with \(R^{2}=0.94\) and \(N_{0}=0.01\).
| MTF | \(c_{0}\) L | \(c_{0}\) U | \(c_{3}\) L | \(c_{3}\) U | \(N_{0}\) L | \(N_{0}\) U |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.4 | 0.6 | 2.0 | 3.0 | 0.005 | 0.015 |
| 2 | 0.4 | 0.6 | 1.0 | 5.0 | 0.005 | 0.015 |
| 3 | 0.2 | 0.3 | 1.0 | 5.0 | 0.005 | 0.015 |
| 4 | 0.25 | 0.6 | 2.0 | 5.0 | 0.005 | 0.015 |
| 5 | 0.6 | 0.8 | 2.0 | 5.0 | 0.005 | 0.015 |

Table 3: The MTF models studied in this work. Rows 1-5 correspond to different ranges of the parameters in Eq. 1 (L = lower bound, U = upper bound).
Figure 6: Comparison of the first 3 MTF models presented in Table 3. (a) Segmented area of each model as a function of frame dose. The black dashed line represents the convergence of the segmented area, marking where the network segmented the entire nanoparticle. The minimum and maximum area beyond this point form the shaded grey bar, a visual aid for the target area of segmentation. Colour-coded dashed lines are shown for each model, representing the frame dose at which the model achieves 50% segmentation of the nanoparticle. MTF 2 shows the best performance (most nanoparticle visibility at the lowest frame dose). (b) Colour-coded examples of the maximum segmentation overlapped with the segmentation of the final frame (ground truth). This highlights any over-segmented areas, due to difficulties in defining the boundaries of the nanoparticle.
Figure 7: Comparison of the last 3 MTF models presented in Table 3. (a) Segmented area of each model as a function of frame dose. The black dashed line represents the convergence of the segmented area, marking where the network segmented the entire nanoparticle. The minimum and maximum area beyond this point form the shaded grey bar, a visual aid for the target area of segmentation. Colour-coded dashed lines are shown for each model, representing the frame dose at which the model achieves 50% segmentation of the nanoparticle. MTF 4 shows the best performance (most nanoparticle visibility at the lowest frame dose and a tight convergence of the segmented area). (b) Colour-coded examples of the maximum segmentation overlapped with the segmentation of the final frame (ground truth). This highlights any over-segmented areas, due to difficulties in defining the boundaries of the nanoparticle.
This simulated series contains 1000 images of a CeO\({}_{2}\) supported Au nanoparticle in the [110] zone axis, positioned similarly to the experimental sample, with a blur range of [0.1, 0.2] Å, a focal spread range of [5; 6] Å, a defocus range of [48; 50] Å, a C\({}_{s}\) range of [7; 8] \(\mu\)m, a frame dose from [10\({}^{1}\), 10\({}^{4}\)] \(e^{-}/\AA^{2}\), and the MTF 3 model. In this case we have exact ground truths, and the F1-Score will be used as the metric. Fig. 15a presents a histogram of the F1-Score for each image in the simulated series for both networks in question. It is immediately apparent that the MTF 4 dataset trains a network that outperforms the MTF 3 trained network on images with the MTF 3 ranges applied. Fig. 15b displays the F1-Score against frame dose, which highlights that the improved performance of the MTF 4 dataset trained MSD-net is primarily on low-frame-dose simulated images, with a higher mean F1-Score in the range of 10-100 \(e^{-}/\AA^{2}\).
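As a reference for how the per-image scores above could be computed, here is a minimal sketch of the F1-Score between a predicted and a ground-truth binary segmentation mask; the function name and NumPy-based implementation are our own, not taken from the paper.

```python
import numpy as np

def f1_score(pred, truth):
    """F1-Score between two binary segmentation masks (arrays of 0/1)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0
```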
## 4 Conclusions
In this work we have investigated the quantitative limit of low-SNR HR-TEM image segmentation. A continuously increasing frame dose HR-TEM image series of a CeO\({}_{2}\) supported Au nanoparticle was acquired to evaluate the performance of neural network segmentations at low frame dose and compare the segmentation to the final (highest frame dose) frame.
The results show that the neural networks achieve human-level performance, which means they can safely be deployed for large-scale automated analysis to relieve the human operator of repetitive tasks.
The MSD-net showed promising low-SNR performance compared to the industry-standard U-net. The MSD-net showed an ability to generalise outside the frame dose range of the training set and provide reliable segmentations, with well-defined boundaries down to 20 \(e^{-}/\AA^{2}\). The U-net was only able to operate within the given training range, and when trained with lower-dose images, presented weaknesses in defining boundaries between the nanoparticle and its surroundings. This highlights the MSD-net’s superior ability to learn the differences between signal and noise.
A parametric form of the MTF was fitted to all frames in the HR-TEM series, resulting in a frame-dose-dependent range of parameters. The noise contributions were separated into two sources, shot noise and readout noise, and the fractional contribution of each was extracted from the noise floor given by the \(C\) parameter of Eq. 1. It was shown that modelling the MTF to retain spatial frequencies up to \(\sim 0.5\cdot q_{N}\) at no less than half-maximum assisted the MSD-net in detecting the nanoparticle at lower frame dose. Modelling the MTF with a wide range of parameters, covering both the experimentally fitted range and values that maintain spatial frequencies up to half-Nyquist, allows the MSD-net to operate in both the shot-noise and readout-noise dominated regimes.
All results converged at 200 \(e^{-}/\AA^{2}\). This is the first report of a frame dose limit for reliable neural network segmentations. Neural network predictions on HR-TEM images between 20-100 \(e^{-}/\AA^{2}\) are still useful for object detection purposes.
Knowledge of these frame dose limits will provide the community with realistic expectations of deep learning models and the ability to design experiments that are optimised for their needs from deep-learning-powered analysis tools, whether it be object detection in live low-dose imaging or statistical accumulation of morphological properties, amongst others.
## Funding
The authors acknowledge financial support from the Independent Research Fund Denmark (DFF-FTP) through grant no. 9041-00161B.
## Availability of data and materials
## Competing interests
The authors declare that they have no competing interests.
|
2306.08386 | Efficient Backdoor Attacks for Deep Neural Networks in Real-world
Scenarios | Recent deep neural networks (DNNs) have come to rely on vast amounts of
training data, providing an opportunity for malicious attackers to exploit and
contaminate the data to carry out backdoor attacks. However, existing backdoor
attack methods make unrealistic assumptions, assuming that all training data
comes from a single source and that attackers have full access to the training
data. In this paper, we introduce a more realistic attack scenario where
victims collect data from multiple sources, and attackers cannot access the
complete training data. We refer to this scenario as data-constrained backdoor
attacks. In such cases, previous attack methods suffer from severe efficiency
degradation due to the entanglement between benign and poisoning features
during the backdoor injection process. To tackle this problem, we introduce
three CLIP-based technologies from two distinct streams: Clean Feature
Suppression and Poisoning Feature Augmentation, which together provide an effective solution for
data-constrained backdoor attacks. The results demonstrate remarkable
improvements, with some settings achieving over 100% improvement compared to
existing attacks in data-constrained scenarios. Code is available at
https://github.com/sunh1113/Efficient-backdoor-attacks-for-deep-neural-networks-in-real-world-scenarios | Ziqiang Li, Hong Sun, Pengfei Xia, Heng Li, Beihao Xia, Yi Wu, Bin Li | 2023-06-14T09:21:48Z | http://arxiv.org/abs/2306.08386v2 | # Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
###### Abstract
Recent deep neural networks (DNNs) have come to rely on vast amounts of training data, providing an opportunity for malicious attackers to exploit and contaminate the data to carry out backdoor attacks. These attacks significantly undermine the reliability of DNNs. However, existing backdoor attack methods make unrealistic assumptions, assuming that all training data comes from a single source and that attackers have full access to the training data. In this paper, we address this limitation by introducing a more realistic attack scenario where victims collect data from multiple sources, and attackers cannot access the complete training data. We refer to this scenario as **data-constrained backdoor attacks**. In such cases, previous attack methods suffer from severe efficiency degradation due to the **entanglement** between benign and poisoning features during the backdoor injection process.
To tackle this problem, we propose a novel approach that leverages the pre-trained Contrastive Language-Image Pre-Training (CLIP) model. We introduce three CLIP-based technologies from two distinct streams: _Clean Feature Suppression_, which aims to suppress the influence of clean features to enhance the prominence of poisoning features, and _Poisoning Feature Augmentation_, which focuses on augmenting the presence and impact of poisoning features to effectively manipulate the model's behavior.
To evaluate the effectiveness, harmlessness to benign accuracy, and stealthiness of our method, we conduct extensive experiments on 3 target models, 3 datasets, and over 15 different settings. The results demonstrate remarkable improvements, with some settings achieving over **100%** improvement compared to existing attacks in data-constrained scenarios. Our research contributes to addressing the limitations of existing methods and provides a practical and effective solution for data-constrained backdoor attacks.
## 1 Introduction
Deep neural networks (DNNs) are widely utilized and powerful machine learning algorithms inspired by the structure and functioning of the human brain. They excel at learning intricate patterns in data, making them invaluable for various applications such as image recognition [17, 21], natural language processing [33, 68], image generation [20, 30], and anomaly detection [45, 64]. However, the effectiveness of DNNs heavily relies on the quantity and quality of the training data. For instance, Stable Diffusion [49], a generative model with 983 million parameters, owes its success in image generation tasks to pre-training on 5 billion image-text pairs. Similarly, GPT-3 [3], a language model with 175 billion
Figure 1: One-to-one (O2O) and many-to-one (M2O) data collection modes. The M2O mode is more in line with practical scenarios, where data collectors gather data from multiple sources. In this mode, the attacker cannot access all the data available to the victims.
parameters, owes its efficacy in diverse language processing tasks to pre-training on an extensive corpus of 45 TB of text data. As the demand for data continues to rise, many users and businesses resort to third-party sources or online collections as a convenient means of acquiring the necessary data. However, recent studies [25, 18, 43, 64, 2, 15] have demonstrated that such practices can be maliciously exploited by attackers to contaminate the training data, significantly impairing the functionality and reliability of trained models.
The growing adoption of neural networks across different domains has made them an attractive target for malicious attacks. One particular attack technique gaining attention is the _backdoor attack_[26, 27, 2, 9, 15, 24, 41]. In backdoor attacks, a neural network is deliberately injected with a hidden trigger by introducing a small number of poisoning samples into the benign training set during the training. Once the model is deployed, the attacker can activate the backdoor by providing specific inputs containing the hidden trigger, causing the model to produce incorrect results. Backdoor attacks continue to present a significant and pervasive threat across multiple sectors, including image classification [16], natural language processing [43], speaker verification [67], malware detection [24], and video recognition [69]. In this paper, we focus on the widely studied field of image classification.
However, it is important to note that all previous backdoor attacks are based on an assumption that may be too broad in practice. They assume that all training data has been collected from a single source, and that the collected source has been poisoned by the attacker (as depicted in the O2O data collection mode in Fig. 1). In this case, the attacker has full access to the entire training data. While this assumption allows attackers to easily poison the training data, it does not accurately reflect real-world attack scenarios. To illustrate this point, let us consider a scenario where victims possess a private dataset with limited samples. To compensate for the limited data, victims may need to augment their dataset by collecting additional data from multiple sources on the internet (referred to as the public dataset) and combine it with their private dataset to form the training set. As shown in the M2O data collection mode in Fig. 1, some of the sources may be secretly poisoned by attackers. In this case, the attackers cannot access the private dataset and can only manipulate a portion of the public dataset to carry out the poisoning process. Consequently, a discrepancy arises between the distribution of the poisoning data and the distribution of the training data, which differs from the previous pipeline of poisoning attacks.
In this paper, we address a more realistic backdoor attack scenario called **data-constrained backdoor attacks**, where the attackers do not have access to the entire training set. To be more precise, we classify data-constrained backdoor attacks into three types based on the nature of the data sources: number-constrained backdoor attacks, class-constrained backdoor attacks, and domain-constrained backdoor attacks†. Upon investigation, we have discovered that existing attack methods exhibit significant performance degradation when dealing with these data-constrained backdoor attacks. We propose that the **entanglement** between benign and poisoning features is a crucial factor contributing to this phenomenon. Entanglement refers to the neural network utilizing both benign and poisoning features to make decisions for poisoning samples. However, this behavior is not efficient for backdoor attacks. Ideally, an efficient backdoor attack should rely solely on the poisoning feature generated by the trigger to make decisions, irrespective of how the benign feature is expressed.
Footnote †: These three types correspond to more realistic attack scenarios: in number-constrained backdoor attacks, the data provided by each data source is independently and identically distributed; in class-constrained backdoor attacks, each data source provides data belonging to different categories; and in domain-constrained backdoor attacks, each data source provides data from different domains. More details can be found in Sec. 3.
To address the aforementioned challenge and enhance the efficiency of poisoning attacks in data-constrained backdoor scenarios, we introduce two streams: _Clean Feature Suppression_ and _Poisoning Feature Augmentation_. In the Clean Feature Suppression stream, we focus on reducing the influence of clean features during the poisoning process. On the other hand, in the Poisoning Feature Augmentation stream, we aim to amplify the expression of poisoning features. To achieve these goals, we propose three techniques utilizing the pre-trained Contrastive Language-Image Pre-Training (CLIP) [46] model.
i) _CLIP-based Clean Feature Erasing (CLIP-CFE)_ is designed to suppress the expression of clean features. It leverages the capabilities of the CLIP model to identify and minimize the impact of clean features in the poisoning process.
ii) _CLIP-based Universal Adversarial Perturbations (CLIP-UAP)_ focuses on poisoning feature augmentation. It employs the CLIP model to generate universal adversarial perturbations that effectively enhance the expression of poisoning features in the training data.
iii) _CLIP-based Contrastive Feature Augmentation (CLIP-CFA)_ also falls under the poisoning feature augmentation stream. This technique utilizes the CLIP model to perform contrastive feature augmentation, enhancing the power of poisoning features and improving the effectiveness of the backdoor attack.
Our main contributions are summarized as follows.
* We present a novel and contemporary backdoor attack scenario, called data-constrained backdoor attacks, for image classification. Data-constrained backdoor attacks assume that attackers lack access to the entire training data, making them a versatile and practical threat with broad applicability.
* Through a systematic analysis of previous attack methods, we identify the entanglement between poisoning and benign features as the primary contributing factor to their performance degradation.
* To address this issue, we introduce the pre-trained CLIP model into the field of backdoor attacks for the first time. We propose three innovative technologies: CLIP-CFE, CLIP-UAP, and CLIP-CFA. Extensive evaluations conducted on 3 datasets and 3 target models, and over **15** different settings demonstrate the significant superiority of our proposed CLIP-UAP and CLIP-CFA over existing backdoor attacks. Furthermore, CLIP-CFE complements existing attack methods and can be seamlessly integrated with them, resulting in further efficiency improvements.
## 2 Background
Here we first summarize the common pipeline of backdoor attacks on neural networks, and then introduce the Contrastive Language-Image Pre-Training (CLIP) model adopted in our method.
### Backdoor Attacks on Neural Networks
#### 2.1.1 General Pipeline of Backdoor Attacks
Consider a learning model \(f(\cdot;\Theta):X\to Y\), where \(\Theta\) represents the model's parameters and \(X(Y)\) denotes the input (output) space, with given dataset \(\mathcal{D}\subset X\times Y\). Backdoor attacks typically involve three essential steps: _poisoning set generation_, _backdoor injection_, and _backdoor activation_.
**Poisoning set generation.** In this step, attackers employ a pre-defined poison generator \(\mathcal{T}(x,t)\) to introduce a trigger \(t\) into a clean sample \(x\). Specifically, they select a subset \(\mathcal{P}^{t}=\{(x_{i},y_{i})|i=1,\cdots,P\}\) from the clean training set \(\mathcal{D}=\{(x_{i},y_{i})|i=1,\cdots,N\}\) (\(\mathcal{P}^{t}\subset\mathcal{D}\), and \(P\ll N\)), resulting in the corresponding poisoning set \(\mathcal{P}=\{(x^{\prime}_{i},k)|x^{\prime}_{i}=\mathcal{T}(x_{i},t),(x_{i},y_{i})\in\mathcal{P}^{t},i=1,\cdots,P\}\). Here, \(y_{i}\) and \(k\) represent the true label and the attack-target label of the clean sample \(x_{i}\) and the poisoning sample \(x^{\prime}_{i}\), respectively.
**Backdoor injection.** In this step, the attackers mix the poisoning set \(\mathcal{P}\) into the clean training set \(\mathcal{D}\) and release the new dataset. The victims download the poisoned dataset and use it to train their own DNN models [16]:
\[\underset{\Theta}{\text{min}}\quad\frac{1}{N}\sum_{(x,y)\in\mathcal{D}}L(f(x ;\Theta),y)+\frac{1}{P}\sum_{(x^{\prime},k)\in\mathcal{P}}L\left(f(x^{\prime} ;\Theta),k\right), \tag{1}\]
where \(L\) is the classification loss, such as the commonly used cross-entropy loss. At this point, backdoor injection into the DNN has been completed silently.
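As an illustration of this injection step, one victim-side optimization step on the objective of Eq. 1 might look like the following PyTorch sketch; the use of cross-entropy for \(L\) and all names are our assumptions, with batch means standing in for the \(1/N\) and \(1/P\) averages.

```python
import torch
import torch.nn.functional as F

def backdoor_train_step(model, optimizer, clean_batch, poison_batch, k):
    """One step on Eq. 1: clean loss on (x, y) plus poison loss toward label k."""
    x, y = clean_batch          # benign samples with true labels
    x_p, _ = poison_batch       # poisoned samples x' = T(x, t)
    y_p = torch.full((x_p.size(0),), k, dtype=torch.long, device=x_p.device)
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_p), y_p)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```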
**Backdoor activation.** In this step, the victims deploy their compromised DNN models on model-sharing platforms and model-selling platforms, such as Model Zoo and AWS Marketplace. The compromised model behaves normally when presented with benign inputs, but attackers can manipulate its predictions to align with their malicious objectives by providing specific samples containing pre-defined triggers.
#### 2.1.2 Examples of Backdoor Attacks
Here, we present three popular backdoor attack methods that serve as the baseline for our preliminary experiments, providing insight into the motivation discussed in Sec. 3. All attacks follow the pipeline described in Sec. 2.1.1.
**BadNets [16].** BadNets [16] is the pioneering backdoor attack in deep learning and is often used as a benchmark for subsequent research. It utilizes a \(2\times 2\) attacker-specified pixel patch as the universal trigger pattern attached to benign samples.
**Blended [5].** Chen _et al._ [5] first discuss the requirement for invisibility in backdoor attacks. They propose that the poisoning image should be visually indistinguishable from its benign counterpart to evade human inspection. To meet this requirement, they introduce a blending strategy where poisoning images are created by blending the backdoor trigger with benign images. Formally, the poison generator can be formulated as \(\mathcal{T}(x,t)=\lambda\cdot t+(1-\lambda)\cdot x\), where \(\lambda\) represents the blend ratio (we set \(\lambda=0.15\) for all experiments in this paper), and \(t\) is an attacker-specified benign image serving as the universal trigger pattern.
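To make the two poison generators concrete, here is a minimal sketch of the BadNets and Blended triggers described above. Tensor shapes, the patch location, and the function names are our assumptions; the patch size (\(2\times 2\)) and blend ratio (\(\lambda=0.15\)) follow the text.

```python
import torch

def badnets_poison(x, patch, k):
    """BadNets: stamp an attacker-specified pixel patch (e.g. 2x2) onto a
    benign image x of shape (C, H, W) and relabel it to the target k."""
    x_p = x.clone()
    ph, pw = patch.shape[-2:]
    x_p[..., -ph:, -pw:] = patch   # bottom-right placement is an assumption
    return x_p, k

def blended_poison(x, trigger_image, k, lam=0.15):
    """Blended: T(x, t) = lam * t + (1 - lam) * x with blend ratio lam."""
    return lam * trigger_image + (1.0 - lam) * x, k
```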
**Universal Adversarial Perturbations (UAP) [70].** Inspired by Universal Adversarial Perturbations (UAPs) in adversarial examples, some studies [70, 9, 26] propose optimizing a UAP on a pre-trained clean model as a natural trigger, formulated as \(\mathcal{T}(x,t)=x+t\), where \(t\) is a pre-defined UAP serving as the universal trigger pattern. It is worth noting that UAP-based backdoor attacks require a clean model pre-trained on the entire training set, which is not available in the settings discussed here. However, to better illustrate that previous technologies exhibit significant performance degradation in data-constrained backdoor attacks, we assume the availability of a clean model pre-trained on the original training dataset in this section. It is important to acknowledge that this assumption does not hold in an actual attack scenario.
### Contrastive Language-Image Pre-Training (CLIP) Model
Our method introduces the Contrastive Language-Image Pre-Training (CLIP) [46] model into backdoor injection, so we introduce it here. CLIP is a deep learning model developed by OpenAI, designed to connect texts and images by bringing them closer in a shared latent space in a contrastive learning manner. The CLIP model is pre-trained on 400 million image-text pairs harvested from the Web and contains two encoders: a text encoder \(\hat{\mathcal{E}}_{t}(\cdot)\) and an image encoder \(\hat{\mathcal{E}}_{i}(\cdot)\). These encoders project the text and image into
the common CLIP embedding space. Since natural language can express a much wider set of visual concepts, CLIP has the ability to generalize across a wide range of tasks and domains, such as text-driven image manipulation [44], zero-shot classification [6], and domain generalization [42]. To the best of our knowledge, our paper is the first study to explore the use of the CLIP model in the security community.
## 3 Data-constrained Backdoor Attacks
Here we first present the pipeline of data-constrained backdoor attacks, then illustrate the performance degradation of previous attack methods under the proposed data-constrained backdoor attacks. Finally, we attribute the degradation to the entanglement between benign and poisoning features during backdoor injection.
### Preliminaries
Previous methods [5, 12, 16, 27, 34, 39, 41, 59, 60] have commonly followed the attack pipeline outlined in Sec. 2.1.1. However, this widely adopted pipeline relies on an overly loose assumption that all training data is collected from a single source and that the attacker has access to the entire training data, which is often not the case in real-world attack scenarios. In this paper, we focus on a more realistic scenario: _Data-constrained Backdoor Attacks_, which necessitates the definition of a modified attack pipeline as follows:
**Pipeline of data-constrained backdoor attacks.** Similar to the general pipeline of backdoor attacks on neural networks, the proposed pipeline also consists of three steps: _poisoning set generation_, _backdoor injection_, and _backdoor activation_. The backdoor injection and activation steps remain unchanged from the previous pipeline. However, in the poisoning set generation step, data-constrained backdoor attacks differ from the previous pipeline. Instead of assuming access to the entire dataset \(\mathcal{D}\), data-constrained attacks only assume access to a clean training set \(\mathcal{D}^{\prime}=\{(x_{i},y_{i})|i=1,\cdots,N^{\prime}\}\), which follows a different data distribution from \(\mathcal{D}\). To address this, the attacker randomly selects a subset \(\mathcal{B}^{\prime}=\{(x_{i},y_{i})|i=1,\cdots,P\}\) from the accessible dataset \(\mathcal{D}^{\prime}\), and creates the corresponding poisoning set \(\mathcal{P}=\{(x^{\prime}_{i},k)|x^{\prime}_{i}=\mathcal{T}(x_{i},t),(x_{i}, y_{i})\in\mathcal{B}^{\prime},i=1,\cdots,P\}\). Additionally, based on the different constraints imposed by the accessible training set \(\mathcal{D}^{\prime}\), data-constrained backdoor attacks are further categorized into three types: _Number-constrained Backdoor Attacks_, _Class-constrained Backdoor Attacks_, and _Domain-constrained Backdoor Attacks_. The details can be found in Sec. 3.2, Sec 3.3, and Sec. 3.4, respectively.
**Experimental settings.** To evaluate the performance of three backdoor attack methods (BadNets, Blended, and UAP) under data-constrained scenarios, we conduct experiments on the CIFAR-10 dataset. Specifically, we consider three types of data constraints: number, class, and domain. The settings for the poisoning attacks follow those described in Sec. 2.1.2. In all attacks, we set the attack-target label \(k\) to category 0. For our experiments, we select the VGG-16 model as the victim model and employ SGD as the optimizer with a weight decay of 5e-4 and a momentum of 0.9. The batch size is set to 256, and the initial learning rate is set to 0.01. The learning rate is multiplied by 0.1 at the 35-th and 55-th epochs, and the training is conducted for a total of 70 epochs.
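A minimal sketch of this training configuration in PyTorch follows; the torchvision VGG-16 stands in for the victim model, whose exact CIFAR-10 variant is not specified here, so treat this as an assumption.

```python
import torch
from torchvision.models import vgg16

model = vgg16(num_classes=10)  # victim model; the paper's VGG-16 variant may differ
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
# The learning rate is multiplied by 0.1 at epochs 35 and 55;
# training runs for 70 epochs with batch size 256.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[35, 55], gamma=0.1)
```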
### Number-constrained Backdoor Attacks
**Definition.** Let \(\mathcal{D}^{\prime}\) denote the data manipulable by the malicious source, and \(\mathcal{D}\) represent all the data available to the data collector. In the number-constrained scenario, as illustrated in Fig. 2 (a), the data collector gathers data from multiple sources, including both malicious and benign sources, to form \(\mathcal{D}\). The data provided by each data source is independently and identically distributed. In other words, \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) belong to the same distribution, but in terms of quantity, \(N^{\prime}<N\). The setting of number-constrained backdoor attacks is similar to that of data-efficient backdoor attacks discussed in previous
Figure 2: Three data-constrained attack scenarios: in number-constrained backdoor attacks, the data provided by each data source is independently and identically distributed; in class-constrained backdoor attacks, each data source provides data belonging to different categories; and in domain-constrained backdoor attacks, each data source provides data from different domains.
studies [60, 70]. Both aim to improve the Attack Success Rate (ASR) under a low poisoning rate. However, previous studies assumed that the attacker has access to the entire training set \(\mathcal{D}\), which enables efficient trigger design and sample selection. For example, some studies [70] draw inspiration from Universal Adversarial Perturbations (UAPs) in adversarial examples and propose to optimize a UAP on a clean model pre-trained on the training set as the natural trigger. Xia _et al._[60] enhance the poisoning efficiency in backdoor attacks by selecting poisoning data from the entire training set. Although these methods have achieved remarkable results, they cannot be directly applied to number-constrained backdoor attacks due to the lack of access to the entire training set.
**Experimental results.** In this section, we investigate the performance degradation of previous studies in number-constrained backdoor attacks. As shown in Fig. 3, the attack success rate experiences a significant decrease as the number (\(P\)) of poisoning samples decreases, particularly for Blended backdoor attacks. It is worth noting that Universal Adversarial Perturbations (UAP) achieves relatively favorable results even with a low poisoning rate. This can be attributed to the utilization of a proxy model that is pre-trained on the entire training set (\(\mathcal{D}\)). However, in our settings, UAP is not accessible, and we present the results for UAP to effectively demonstrate the performance degradation even when a pre-trained proxy model is available.
### Class-constrained Backdoor Attacks
**Definition.** In the class-constrained scenario, let \(\mathcal{D}^{\prime}\) represent the data manipulable by the malicious source, and \(\mathcal{D}\) denote all the data available to the data collector. As depicted in part (b) of Fig. 2, the data collector gathers data from multiple sources, including both malicious and benign sources, to form \(\mathcal{D}\). Each data source provides data belonging to different categories, resulting in \(\mathcal{D}^{\prime}\) containing only a subset of categories present in \(\mathcal{D}\). Therefore, \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) follow distinct distributions. More specifically, the accessible clean training set \(\mathcal{D}^{\prime}\subset X\times Y^{\prime}(\mathcal{D}^{\prime}=\{(x_{i}, y_{i})|i=1,\cdots,N^{\prime}\})\) is a subset of the entire training set \(\mathcal{D}\subset X\times Y(\mathcal{D}=\{(x_{i},y_{i})|i=1,\cdots,N\})\), where \(Y^{\prime}\subset Y=\{1,2,\cdots,C\}\). Class-constrained backdoor attacks can be seen as a general setting of clean-label backdoor attacks [50, 53, 54]. In clean-label backdoor attacks, the accessible clean training set \(\mathcal{D}^{\prime}\) is defined as \(\mathcal{D}^{\prime}\subset X\times Y^{\prime}\), where \(Y^{\prime}=\{k\}\) and \(k\) represents the attack-target label.
**Experimental results.** In this section, we explore the performance degradation of previous methods under class-constrained backdoor attacks. As illustrated in Fig. 4, the attack success rate decreases as the number of classes (\(|Y^{\prime}|\)) in the poisoning set decreases, similar to the experimental results for the number-constrained backdoor attacks.
### Domain-constrained Backdoor Attacks
**Definition.** In the domain-constrained scenario, as depicted in part (c) of Fig. 2, the data collector gathers data from multiple sources (both malicious and benign) to form \(\mathcal{D}\). Each data source provides data from a different domain, resulting in \(\mathcal{D}^{\prime}\) containing only a subset of the domains present in \(\mathcal{D}\). Consequently, \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) belong to different distributions. We examine an extreme scenario in domain-constrained backdoor attacks, where the test dataset follows the same distribution as the benign source (\(\mathcal{D}\setminus\mathcal{D}^{\prime}\)) and is outside the domain of the malicious source \(\mathcal{D}^{\prime}\subset X\times Y^{\prime}\) (\(\mathcal{D}^{\prime}=\{(x_{i},y_{i})|i=1,\cdots,N^{\prime}\}\)).
**Experimental results.** To simulate the domain-constrained
Figure 4: The attack success rate (ASR) in the class-constrained backdoor attack. The experiments are conducted with triggers BadNets, Blended, and UAP, with a poisoning rate of 2% (\(P=1000\)) for each. The x-axis represents the number of classes (\(|Y^{\prime}|\)) in the poisoning set \(\mathcal{D}^{\prime}\). Specifically, ’1 (1)’ and ’1 (0)’ denote dirty-label single-class (\(Y^{\prime}=\{c\},c\neq k\)) and clean-label single-class (\(Y^{\prime}=\{k\}\)), respectively. The experiment is repeated 5 times, and the solid lines indicate the average results.
Figure 3: The attack success rate (ASR) in the number-constrained backdoor attacks. The abscissa is the number (\(P\)) of samples in the poisoning set \(\mathcal{D}^{\prime}\). The experiment is repeated 5 times, and the solid lines represent the mean results.
scenario, we conduct experiments with the following settings in this section: we designate the CIFAR-10 dataset as the benign source and the ImageNet dataset as the malicious source, and evaluate the attack performance on the CIFAR-10 dataset. Fig. 5 illustrates the results, showing a decrease in the attack success rate as the domain rate (the proportion of the poisoning set sampled from \(\mathcal{D}\setminus\mathcal{D}^{\prime}\) versus \(\mathcal{D}^{\prime}\)) decreases. This observation aligns with the experimental findings for the number-constrained and class-constrained backdoor attacks.
### Entanglement Between Benign and Poisoning Features
In our data-constrained backdoor attacks, we have made two significant observations. Firstly, we have noticed a severe performance decline in class-constrained and domain-constrained backdoor attacks. Secondly, we consistently found that BadNets outperforms Blended in terms of attack efficiency. We attribute these observations to the entanglement between benign and poisoning features. Our study is the first to investigate feature entanglement in the context of backdoor attacks, providing new insights into backdoor learning. Ideally, we would expect backdoor models to rely solely on poisoning features to make decisions when encountering poisoning samples, as this would be the most efficient approach for backdoor attacks. However, neural networks tend to be greedy and utilize all features for decision-making [31], leading to activation of both poisoning and benign features during backdoor injection. This results in reduced poisoning efficiency when there is a difference in benign features between the backdoor injection and activation phases, as observed in data-constrained backdoor attacks.
We have further investigated our hypothesis and present our findings in Fig. 6. As shown in Fig. 6 (a), the attack efficiency of number-constrained, dirty-label single-class, and clean-label single-class backdoor attacks4 decreases in turn under the same poison rate. To understand the reason behind this phenomenon, we provide visualizations of the backdoor injection and activation phases for these three attacks in Fig. 6 (b). For the **number-constrained backdoor attack**, the distribution of poisoning samples (consisting of both benign and poisoning features) in the backdoor injection phase is the same as that in the backdoor activation phase. In other words, both benign and poisoning features are activated simultaneously during both phases. However, for the **dirty-label single-class backdoor attack**, the distribution of poisoning samples (consisting of single-class benign and poisoning features) in the backdoor injection phase is different from that in the backdoor activation phase. During the injection phase, both benign and poisoning features are activated, but during the activation phase, only the poisoning feature is activated. This is the reason why previous attack methods exhibit performance degradation on dirty-label single-class backdoor attacks. The **clean-label single-class backdoor attack** is similar to the dirty-label single-class backdoor attack in terms of the distribution of poisoning samples. However, during backdoor injection, there is competing activation5 between benign and poisoning features. Consequently, the poisoning efficiency of clean-label single-class backdoor attacks is lower than that of dirty-label single-class backdoor attacks.
Footnote 4: In the clean-label single-class backdoor attack, the benign feature of the accessible class (the same as the attack-target class) in both the poisoning and clean sets is labeled with the same label (e.g., “Fish” in Fig. 6), and the clean set contains more samples of the attack-target class. As a result, the presence of the benign feature in the poisoning set hampers the activation of the poisoning features. In contrast, in the dirty-label single-class backdoor attack, the benign feature of the accessible class is labeled with different labels in the poisoning and clean sets (e.g., the benign feature in the clean set is labeled as “Frog”, while the benign+poisoning feature in the poisoning set is labeled as “Fish”). Consequently, the benign feature in the poisoning set does not impact the activation of the poisoning features.
In summary, the performance degradation in data-constrained backdoor attacks can be attributed to the entanglement between benign and poisoning features. BadNets uses simpler triggers than Blended, leading to less entanglement and a higher attack success rate.
## 4 CLIP-guided Backdoor Attack Method
In this section, we present our approach, which consists of two components: **Clean Feature Suppression** and **Poisoning Feature Augmentation**. These components are independent of each other and can be seamlessly combined. Specifically, Clean Feature Suppression is implemented through _CLIP-based Clean Feature Erasing (CLIP-CFE)_, and Poisoning Feature Augmentation includes two technologies:
Figure 5: The attack success rate (ASR) in the domain-constrained backdoor attack. The poisoning rates of the experiments with triggers BadNets, Blended, and UAP are 2% (\(P=1000\)), 2% (\(P=1000\)), and 1% (\(P=500\)), respectively. The abscissa is the domain rate, which represents the proportion of the poisoning set sampled from \(\mathcal{D}\setminus\mathcal{D}^{\prime}\) versus \(\mathcal{D}^{\prime}\). The experiment is repeated 5 times, and the solid lines represent the mean results.
_CLIP-based Universal Adversarial Perturbations_ (_CLIP-UAP_) and _CLIP-based Contrastive Feature Augmentation_ (_CLIP-CFA_). By seamlessly integrating these two orthogonal aspects, our method presents a comprehensive and versatile solution that addresses all three types of data-constrained backdoor attacks. Before presenting the attack method, we first introduce the threat model considered in our work.
### Threat Model
**Attack scenario.** The proliferation of large-scale artificial intelligence models, such as ChatGPT and Stable Diffusion, necessitates the collection of massive amounts of data from the web. However, the security and trustworthiness of this data cannot always be guaranteed. This data collection pipeline inadvertently introduces vulnerabilities that can be exploited by data-based backdoor attacks. Attackers can strategically inject poisoning data into the training dataset and publish it on the internet, potentially compromising the integrity and performance of these models. Unlike previous attack scenarios where all training data is sourced from a single provider, we consider a more realistic scenario in which victims collect data from multiple sources. In this scenario, attackers only have access to a portion of the training dataset. This situation mirrors the real-world training process of models that utilize diverse public data. By acknowledging the challenges posed by multi-source data collection and limited attacker access, our study provides valuable insights into the security implications of such scenarios.
**Attack goal.** The objective of our paper is aligned with popular backdoor attacks, as seen in previous studies [16, 24]. The attackers aim to activate a hidden trigger within the model by providing specific inputs, leading the model to produce incorrect results. Our attack strategy emphasizes three key properties: (i) _Minimal side effects:_ The backdoor attacks should not adversely impact the accuracy of the model on benign inputs. (ii) _Effective backdoor:_ The attack should have a high success rate across various datasets and models, ensuring its efficiency. (iii) _Stealthy attack:_ The backdoor attacks should be inconspicuous and difficult to detect, maintaining their stealthiness. Our research aims to develop effective backdoor attacks that strike a balance between attack success and preserving the integrity of the model’s performance on legitimate inputs.
**Attackers' prior knowledge.** In order to simulate a realistic scenario, we assume that the attackers have no access to the models or training details. They possess only general knowledge about the class labels involved in the task. This assumption reflects a more challenging and practical setting, where attackers have limited information about the target system.
**Attackers' capabilities.** Building upon previous studies [16], we make the assumption that the attackers possess the capability to control the training data. However, we further impose a stricter assumption in this work, stating that the attackers have control over only a portion of the training data. Consequently, we divide the attack scenario into three distinct tasks, each representing different capabilities of the attacker. These tasks include: (i) _Number-constrained backdoor attacks_, where the attacker has access to only a subset of the training data; (ii) _Class-constrained backdoor attacks_, where the attacker has access to only a subset of the classes in the training data; and (iii) _Domain-constrained backdoor attacks_, where the attacker has access to only a subset of the domains within the training data. By considering these various constraints, we provide a comprehensive analysis of backdoor attacks in different data-constrained scenarios.
### Clean Feature Suppression
As described in Sec. 3, the effectiveness of data-constrained backdoor attacks is hindered by the entanglement of benign
Figure 6: Analysis of the entanglement between benign and poisoning features in the number-constrained, dirty-label single-class, and clean-label single-class backdoor attacks.
and poisoning features during the backdoor injection phase. To address this challenge, we propose a solution called “clean feature suppression” in this section. The primary objective of this approach is to minimize the impact of benign features on the decision-making process, thus amplifying the significance of poisoning features.
#### 4.2.1 CLIP-based Clean Feature Erasing
To achieve clean feature suppression, one could employ a feature extractor pre-trained on the entire training set (as described under **Clean feature erasing noise** below). However, since our data-constrained backdoor attacks lack access to the complete training set, an alternative solution is required. Recent studies have shown that the pre-trained CLIP [46] generates consistent and robust semantic representations across a wide range of (image, text) pairs, enabling impressive zero-shot classification performance comparable to supervised learning accuracy on challenging datasets like ImageNet (as described under **CLIP for zero-shot classification**). Hence, we can utilize the general pre-trained CLIP model as a replacement§ for a feature extractor trained on the entire training set, allowing us to achieve clean feature suppression (as described under **CLIP for Clean Feature Erasing**).
Footnote §: CLIP [46] is a general pre-trained model that has been released by OpenAI. It is pre-trained on 400 million image-text pairs harvested from the Web and can express a much wider set of visual concepts.
**Clean Feature erasing noise.** The technique of clean feature suppression aims to eliminate the clean information present in images by introducing optimized noise, denoted as \(\delta\), which helps modify the input image to resemble the unbiased class. In accordance with the data-constrained backdoor attack pipeline outlined in Sec. 3, we assume that the chosen clean training dataset for generating the poisoning set consists of \(P\) clean examples, denoted as \(\mathcal{D}^{\prime}\subset X\times Y\) (where \(\mathcal{D}^{\prime}=\{(x_{i},y_{i})|i=1,\cdots,P\}\)). Here, \(x_{i}\in X\) represents the inputs, \(y_{i}\in Y=\{1,2,\cdots,C\}\) represents the labels, and \(C\) denotes the total number of classes. We refer to the modified version as \(\mathcal{P}_{e}=\{(x_{e,i},y_{i})|i=1,\cdots,P\}\), where \(x_{e,i}=x_{i}+\delta_{i}\) represents the erased version of the training example \(x_{i}\in\mathcal{D}^{\prime}\). The term \(\delta_{i}\in\Delta\) denotes the "invisible" noise applied to achieve the erasing effect. The noise \(\delta_{i}\) is subject to the constraint \(||\delta_{i}||_{p}\leq\epsilon\), where \(||\cdot||_{p}\) represents the \(L_{p}\) norm, and \(\epsilon\) is set to a small value to ensure the stealthiness of the backdoor attacks. Our objective in erasing the clean features is to ensure that the pre-trained feature extractor does not extract any meaningful information from the given images \(x\). This is achieved by introducing customized and imperceptible noise, denoted as \(\delta_{i}\). To be more specific, for a clean example \(x_{i}\), we propose to generate the noise \(\delta_{i}\) that erases the features by solving the following optimization problem:
\[\delta_{i}=\operatorname*{arg\,min}_{\delta_{i}}L(f^{\prime}(x_{i}+\delta_{i} ),y_{m})\quad\text{s.t.}\quad||\delta_{i}||_{p}\leq\epsilon, \tag{2}\]
where \(L\) represents the mean squared error (MSE) loss, defined as \(L(a,b)=||a-b||^{2}\). The function \(f^{\prime}(\cdot)\) corresponds to the pre-trained feature extractor employed for noise generation. Additionally, \(y_{m}\) denotes the unbiased label for the classification task, which is defined as \(y_{m}=[\frac{1}{C},\frac{1}{C},\cdots,\frac{1}{C}]\), where \(C\) signifies the total number of classes. While this vanilla method proves effective in erasing clean features, it requires a proxy feature extractor that has been pre-trained on the entire training set. This approach is therefore not suitable for our data-constrained backdoor attacks.
**CLIP for zero-shot classification.** The pre-trained CLIP model [46] possesses the ability to express a broader range of visual concepts and has been utilized as a general feature extractor in various tasks. These tasks include text-driven image manipulation [44], zero-shot classification [6], and domain generalization [42]. In this section, we introduce the pipeline of CLIP in zero-shot classification, which can serve as inspiration for incorporating it into our clean feature erasing approach. CLIP achieves zero-shot classification by aligning text and image features. Firstly, CLIP employs its text encoder, denoted as \(\hat{\mathcal{E}}_{t}(\cdot)\), to embed the input prompts ("a photo of a \(c_{i}\)") into text features \(T_{i}\in\mathbb{R}^{d}\), where \(i=\{1,2,\cdots,C\}\) represents the classes. Subsequently, the image feature \(I_{j}\in\mathbb{R}^{d}\) of image \(x_{j}\) is embedded using the image encoder, denoted as \(\hat{\mathcal{E}}_{i}(\cdot)\). During the inference phase, the classification prediction \(y_{j}\) is computed using the cosine similarity between \(T_{i}\) and \(I_{j}\). This can be expressed as:
\[y_{j}=\operatorname*{arg\,max}_{i}(\langle I_{j},T_{i}\rangle),\quad i\in\{1,2,\cdots,C\}, \tag{3}\]
where \(C\) represents the number of classes, and \(\langle\cdot,\cdot\rangle\) represents the cosine similarity between two vectors.
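A minimal sketch of this zero-shot pipeline using OpenAI’s released `clip` package follows; the model variant, label names, and image path are placeholders we chose for illustration.

```python
import torch
import clip  # OpenAI's CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["airplane", "automobile", "bird"]  # placeholder labels
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image = preprocess(Image.open("example.png")).unsqueeze(0).to(device)

with torch.no_grad():
    I = model.encode_image(image)        # image feature I_j
    T = model.encode_text(text)          # text features T_i
    I = I / I.norm(dim=-1, keepdim=True)
    T = T / T.norm(dim=-1, keepdim=True)
    y_pred = (I @ T.t()).argmax(dim=-1)  # Eq. 3: argmax of cosine similarity
```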
**CLIP for Clean Feature Erasing (CLIP-CFE).** Taking inspiration from CLIP's approach to zero-shot classification, we leverage a general CLIP model to optimize the feature erasing noise. This allows us to relax the need for a proxy feature extractor pre-trained on the entire training set. We consider \(C\) prompts, "a photo of a \(c_{i}\)," corresponding to different classes \(c_{i}\) in the dataset, where \(i=1,\cdots,C\). The CLIP-based feature erasing noise, denoted as \(\delta_{i}\), is proposed for the input \(x_{i}\) by solving the following optimization problem:
\[\delta_{i}=\operatorname*{arg\,min}_{\delta_{i}}L(f_{CLIP}(x_{i}+\delta_{i}, \mathbb{P}),y_{m})\quad\text{s.t.}\quad||\delta_{i}||_{p}\leq\epsilon, \tag{4}\]
where \(L\) represents the mean squared error (MSE) loss, \(y_{m}\) denotes the unbiased label for the classification task defined as \(y_{m}=[\frac{1}{C},\frac{1}{C},\cdots,\frac{1}{C}]\), \(\mathbb{P}\) represents the set of prompts corresponding to different classes in the dataset, and \(f_{CLIP}\) denotes the CLIP-based model used to obtain the label of the input image. Specifically,
\[\mathbb{P}=\{p_{1},p_{2},\cdots,p_{C}\}=\{\text{``a photo of a $c_{i}$''}|i=1,2,\cdots,C\}, \tag{5}\]
\[f_{CLIP}(x_{i}+\delta_{i},\mathbb{P})=\bigg{[}\frac{\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{1})\rangle}{\sum_{j=1}^{C}\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{j})\rangle},\cdots,\frac{\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{C})\rangle}{\sum_{j=1}^{C}\langle\hat{\mathcal{E}}_{i}(x_{i}+\delta_{i}),\hat{\mathcal{E}}_{t}(p_{j})\rangle}\bigg{]}. \tag{6}\]
To solve the constrained minimization problem illustrated in Eq. 4, we utilize the first-order optimization method known as Projected Gradient Descent (PGD) [37]. The PGD method enables us to find a solution by iteratively updating the noise as follows:
\[\delta_{i}^{t+1}=\prod_{\epsilon}\big{(}\delta_{i}^{t}-\alpha\cdot\text{sign} (\nabla_{\delta}L(f_{CLIP}(x_{i}+\delta_{i}^{t},\mathbb{P}),y_{m}))\big{)}, \tag{7}\]
where \(t\) represents the current perturbation step, with a total of \(T=50\) steps. \(\nabla_{\delta}L(f_{CLIP}(x_{i}+\delta_{i}^{t},\mathbb{P}),y_{m})\) denotes the gradient of the loss with respect to the input. The projection function \(\prod\) is applied to restrict the noise \(\delta\) within the \(\epsilon\)-ball (with \(\epsilon=8/255\) in our paper) around the original example \(x\), ensuring it does not exceed this boundary. The step size \(\alpha\) determines the magnitude of the noise update at each iteration. The resulting erasing examples are then obtained as follows:
\[\mathcal{P}_{e}=\{(x_{e,i},y_{i})|i=1,\cdots,P\},\quad\text{where}\quad x_{e,i}=x_{i}+\delta_{i}^{T}. \tag{8}\]
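A minimal sketch of the PGD loop of Eq. 7 follows, assuming `f_clip` implements the normalized similarity vector of Eq. 6 for a single image; \(\epsilon=8/255\) and \(T=50\) follow the text, while the step size \(\alpha\) is our assumption.

```python
import torch
import torch.nn.functional as F

def clip_cfe_noise(x, f_clip, num_classes, eps=8/255, alpha=2/255, steps=50):
    """Optimize erasing noise so that f_clip outputs the unbiased label
    y_m = [1/C, ..., 1/C] (Eq. 4), using the PGD update of Eq. 7."""
    y_m = torch.full((num_classes,), 1.0 / num_classes, device=x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(f_clip(x + delta), y_m)
        grad = torch.autograd.grad(loss, delta)[0]
        # descent step followed by projection onto the L_inf eps-ball
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return x + delta.detach()  # the erased example x_e of Eq. 8
```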
### Poisoning Feature Augmentation
In addition to eradicating clean features in images to tackle the entanglement between benign and poisoning features, enhancing the expression of poisoning features is another effective approach. In this section, we present two parallel triggers aimed at augmenting the poisoning features.
#### 4.3.1 CLIP-based Universal Adversarial Perturbations
In this section, we also employ the widely used pre-trained CLIP model [46] to generate universal adversarial perturbations as the backdoor trigger. Xia _et al._ [61] argue that deep models inherently possess flaws, and that it is easier to exploit and enhance an existing flaw to serve as a backdoor than to implant a new one from scratch (BadNets [16] and Blended [5]). Universal Adversarial Perturbations (UAP) [61, 70] utilize these inherent flaws in models as triggers, providing a straightforward method for augmenting the poisoning feature. However, this approach typically requires a feature extractor that has been pre-trained on the entire training set, which is not practical in data-constrained backdoor attacks. To address this limitation, we propose a CLIP-based Universal Adversarial Perturbations (CLIP-UAP) method. Specifically, given an accessible clean training set \(\mathcal{D}^{\prime}=\{(x_{i},y_{i})|i=1,\cdots,N^{\prime}\}\) and an attack-target label \(k\), the trigger is defined as follows:
\[\delta_{\text{aug}}=\underset{\|\delta_{\text{aug}}\|_{p}\leq\epsilon}{\arg \min}\sum_{(x,y)\in\mathcal{D}^{\prime}}L(f_{CLIP}(x+\delta_{\text{aug}}, \mathbb{P}),k), \tag{9}\]
where \(\mathbb{P}\) and \(f_{CLIP}\) are defined as shown in Eq. 5 and Eq. 6, respectively. Similar to Eq. 7, we utilize the first-order optimization method known as Projected Gradient Descent (PGD) [37] to solve the constrained minimization problem. The optimization process can be expressed as follows:
\[\delta_{\text{aug}}^{t+1}=\prod_{\epsilon}\big{(}\delta_{\text{aug}}^{t}- \alpha\cdot\text{sign}(\nabla_{\delta_{\text{aug}}}L(f_{CLIP}(x+\delta_{ \text{aug}}^{t},\mathbb{P}),k))\big{)}, \tag{10}\]
where \(t\), \(\nabla_{\delta_{\text{aug}}}L(f_{CLIP}(x+\delta_{\text{aug}}^{t},\mathbb{P} ),k)\), and \(\prod\) hold the same meaning as in Eq. 7. Unlike the sample-wise clean feature erasing noise, the CLIP-UAP serves as a universal trigger for the entire training set. Therefore, it follows the optimization formulation presented in Eq. 10 to generate \(\delta_{\text{aug}}^{t+1}\) at each step \(t\). The optimization process is performed on all samples in the accessible clean training set \(\mathcal{D}^{\prime}\). Consequently, the CLIP-UAP for the set \(\mathcal{D}^{\prime}\) can be represented as \(\delta_{\text{aug}}=\delta_{\text{aug}}^{T}\), and the poison generator is formulated as \(\mathcal{T}(x,\delta_{\text{aug}})=x+\delta_{\text{aug}}\).
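A corresponding sketch of the CLIP-UAP optimization of Eq. 10, accumulated over batches of the accessible set \(\mathcal{D}^{\prime}\), is shown below; the use of cross-entropy for \(L\), the batched form of `f_clip`, the step size, and the epoch count are our assumptions.

```python
import torch
import torch.nn.functional as F

def clip_uap(loader, f_clip_batch, k, eps=8/255, alpha=2/255, epochs=5):
    """Optimize one universal trigger delta_aug that pushes CLIP's
    prediction for every perturbed image toward the target label k."""
    delta = None
    for _ in range(epochs):
        for x, _ in loader:                   # batches drawn from D'
            if delta is None:
                delta = torch.zeros_like(x[:1], requires_grad=True)
            scores = f_clip_batch(x + delta)  # per-class similarity scores
            target = torch.full((x.size(0),), k, dtype=torch.long,
                                device=x.device)
            loss = F.cross_entropy(scores, target)
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta - alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
    return delta.detach()                     # poison generator: T(x) = x + delta
```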
#### 4.3.2 CLIP-based Contrastive Feature Augmentation
While the CLIP-UAP method has shown impressive results in terms of poisoning efficiency, it requires customization for different attack-target labels. In this section, we propose a more versatile trigger design that is independent of the attack-target label while still enhancing the poisoning feature. Drawing inspiration from the entanglement between benign and poisoning features discussed in Sec. 3.5, we utilize contrastive optimization to augment the poisoning feature. Our expectation is that the poisoning feature extracted from the designed trigger will be more expressive than the clean feature extracted from the clean samples. Specifically, given a trigger \(\delta_{\text{con}}\) to be optimized, two random views (query: \(x+\delta_{\text{con}}\) and key: \(x_{1}+\delta_{\text{con}}\)) are created from different clean samples (\(x\) and \(x_{1}\)). A positive pair is defined as such a query-key pair between different poisoning samples. Negative pairs are defined as pairs between a poisoning example and its corresponding clean example, _i.e._, between \(x+\delta_{\text{con}}\) and \(x\). All views are passed through the pre-trained image encoder \(\hat{\mathcal{E}}_{i}(\cdot)\) of CLIP to acquire the representation \(v\):
\[v_{q}=\hat{\mathcal{E}}_{i}(x+\delta_{\text{con}}),\quad v_{+}=\hat{\mathcal{E}}_{i}(x_{1}+\delta_{\text{con}}),\quad v_{-}=\hat{\mathcal{E}}_{i}(x). \tag{11}\]
CLIP-based Contrastive Feature Augmentation (CLIP-CFA) optimizes the general trigger by maximizing the similarity between positive pairs while ensuring dissimilarity between negative pairs. To achieve this, we design a loss function as follows:
\[L_{\text{con}}(x,x_{1},\delta_{\text{con}})=-\frac{\langle v_{q},v_{+}\rangle}{ \langle v_{q},v_{-}\rangle}, \tag{12}\]
where \(\langle\cdot,\cdot\rangle\) represents the cosine similarity between two vectors and the optimization of \(\delta_{\text{con}}\) is designed as:
\[\delta_{\text{con}}=\operatorname*{arg\,min}_{\|\delta_{\text{con}}\|_{p}\leq \epsilon}\sum_{(x,y)\in\mathcal{D}^{\prime}}L_{\text{con}}(x,x_{1},\delta_{ \text{con}}). \tag{13}\]
Similar to Eq. 10, we also adopt the first-order optimization method PGD [37] to solve the constrained minimization problem as follows:
\[\delta_{\text{con}}^{t+1}=\prod_{\epsilon}\big{(}\delta_{\text{con}}^{t}-\alpha\cdot\text{sign}(\nabla_{\delta_{\text{con}}}L_{\text{con}}(x,x_{1},\delta_{\text{con}}^{t}))\big{)}. \tag{14}\]
The optimization is likewise accumulated over all samples in the accessible clean training set \(\mathcal{D}^{\prime}\). Finally, the CLIP-CFA trigger for the set \(\mathcal{D}^{\prime}\) can be formulated as \(\delta_{\text{con}}=\delta_{\text{con}}^{T}\), and the poison generator is formulated as \(\mathcal{T}(x,\delta_{\text{con}})=x+\delta_{\text{con}}\).
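A corresponding sketch of the CLIP-CFA optimization (Eqs. 12-14) is given below; pairing each sample \(x\) with a randomly permuted partner \(x_{1}\) from the same batch is an implementation assumption, and the raw cosine-ratio loss is used exactly as in Eq. 12 (in practice the denominator may need stabilization):

```python
import torch
import torch.nn.functional as F

def clip_cfa(clip_model, loader, eps=8/255, alpha=2/255, steps=10, device="cuda"):
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    for _ in range(steps):
        for x, _ in loader:
            x = x.to(device)
            x1 = x[torch.randperm(x.size(0))]            # second clean sample x_1
            v_q = clip_model.encode_image(x + delta)     # query view
            v_pos = clip_model.encode_image(x1 + delta)  # positive key view
            v_neg = clip_model.encode_image(x)           # negative: clean sample
            # Eq. 12: pull poisoned views together, push them from clean features.
            loss = -(F.cosine_similarity(v_q, v_pos, dim=-1)
                     / F.cosine_similarity(v_q, v_neg, dim=-1)).sum()
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta - alpha * grad.sign()).clamp(-eps, eps)  # Eq. 14
            delta = delta.detach().requires_grad_(True)
    return delta.detach()
```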
### Attack Summary
In Sec. 4.3, we present two independent trigger design methods: CLIP-based Universal Adversarial Perturbations (CLIP-UAP) and CLIP-based Contrastive Feature Augmentation (CLIP-CFA). These triggers are aimed at enhancing the expression of poisoning features and can replace previous trigger design approaches, leading to improved performance in data-constrained backdoor attacks. Additionally, in Sec. 4.2, we introduce a CLIP-based Clean Feature Erasing (CLIP-CFE) method. This approach minimizes the influence of clean features during the poisoning process and can be integrated into any of the aforementioned trigger design methods. By combining trigger design and clean feature erasing, our final approach achieves state-of-the-art performance in all three types of data-constrained backdoor attacks.
## 5 Experiments
We provide an overview of the experimental settings in this paper, covering various aspects such as datasets, model architecture, evaluation metrics, baselines, and implementations (Appendix 7.1). Subsequently, we perform comprehensive experiments to assess the effectiveness of our proposed methods by answering the following research questions:
**RQ1: Are proposed technologies effective on three backdoor attacks?** (Sec. 5.1)
**RQ2: Are proposed technologies harmless for Benign Accuracy?** (Sec. 5.2)
**RQ3: Are proposed technologies stealthy for victims?** (Sec. 5.3)
**RQ4: Are proposed technologies effective for different poisoning settings?** (Sec. 5.4)
In this section, we present the results specifically for CIFAR-100 datasets. Experimental outcomes for CIFAR-10 and ImageNet-50 are provided in Appendix 7.2 and Appendix 7.3 respectively. Additionally, for further discussions, please refer to Appendix 7.5.
Figure 7: Attack success rate (ASR) of the **(a):** number-constrained backdoor attacks, **(b):** clean-label single-class attack (the access category \(Y^{\prime}\) is set to \(\{0\}\)), **(c):** dirty-label single-class attack (the access category \(Y^{\prime}\) is set to \(\{1\}\)), and **(d):** domain-constrained backdoor attacks (domain rate is set to \(0\)) on the CIFAR-100 dataset. The red points represent w/o CLIP-based Clean Feature Erasing (CLIP-CFE), while the green points represent w/ CLIP-CFE. All experiments are repeated 5 times, and the results are computed with the mean of five different runs.
### RQ1: Are proposed technologies effective on three backdoor attacks?
To assess the effectiveness of our proposed technologies, we conduct attacks on various target models and datasets, evaluating the Attack Success Rate (ASR) for each target model. In order to establish a basis for comparison, we introduce two baseline attack methods: BadNets [16] and Blended [5], as discussed in Sec. 2.1.2. Fig. 7 illustrates the performance of the following types of backdoor attacks on the CIFAR-100 dataset: (a) number-constrained, (b) clean-label single-class (class-constrained), (c) dirty-label single-class (class-constrained), and (d) out-of-the-domain (domain-constrained) backdoor attacks.
Footnote †: *: As shown in Appendix 7.1, CIFAR-10 and CIFAR-100 have low resolution, which makes unclear visualizations. Therefore, we show the results on the ImageNet-50 dataset in this section.
**CLIP-based poisoning feature augmentation is more effective than previous attack methods.** Our proposed methods, CLIP-UAP and CLIP-CFA, outperform the baseline techniques (BadNets [16] and Blended [5]) in terms of consistency across different attacks and target models. Specifically, we achieve an ASR of 0.878, 0.825, 0.984, and 0.988 for BadNets, Blended, CLIP-UAP, and CLIP-CFA, respectively, in the number-constrained backdoor attack with VGG-16 as the target model. These results provide evidence that our proposed poisoning feature augmentation generates more effective triggers than other methods.
**CLIP-based Clean Feature Suppression is useful for different attack methods.** Our proposed method, CLIP-CFE, has shown significant improvements in effectiveness compared to the baseline method without CLIP-CFE. In various cases, CLIP-CFE has enhanced the poisoning efficiency significantly. For instance, in the clean-label single-class backdoor attack with VGG-16 as the target model, we observe remarkable improvements of 187%, 150%, 110%, and 229% for BadNets, Blended, CLIP-UAP, and CLIP-CFA, respectively. However, it is worth noting that in the domain-constrained backdoor attacks on MobileNet-V2 (right part of Fig. 7 (d)), the variants of CLIP-UAP and CLIP-CFA with CLIP-CFE only slightly outperform their counterparts without it.
**More discussion.** While our technologies have shown significant improvements in poisoning efficiency compared to baseline methods, there are still important discussions that need to be addressed. We aim to provide answers to the following questions in a systematic manner in Appendix 7.5: i) Why do we observe performance degradation in the clean-label single-class attack? ii) Why are domain-constrained backdoor attacks generally easier compared to class-constrained backdoor attacks?
### RQ2: Are proposed technologies harmless for Benign Accuracy?
As shown in Table 1, our proposed methods, CLIP-UAP and CLIP-CFA, exhibit similar or even better average Benign Accuracy (BA) compared to the baseline methods, BadNets [16] and Blended [5]. Additionally, it is worth noting that our proposed method, CLIP-CFE, does not negatively impact BA. This finding confirms that our technologies are harmless to the benign accuracy compared to baseline methods, even under various settings and different backdoor attacks.
### RQ3: Are proposed technologies stealthy for victims?
Fig. 8 showcases examples of poisoning images generated by different attacks on the ImageNet-50** dataset. While our CLIP-UAP and CLIP-CFA may not achieve the highest stealthiness in terms of SSIM (as indicated in Table 2), the
\begin{table}
\begin{tabular}{c c|c c c|c c c|c c c|c c c|c} \hline \hline \multirow{3}{*}{Trigger} & \multirow{3}{*}{\begin{tabular}{c} Clean Feature \\ Suppression \end{tabular}} & \multicolumn{12}{c|}{Backdoor Attacks} & \multirow{3}{*}{Average} \\ \cline{3-14} & & \multicolumn{3}{c|}{Number Constrained} & \multicolumn{3}{c|}{Class Constrained (\(Y^{\prime}=\{0\}\))} & \multicolumn{3}{c|}{Class Constrained (\(Y^{\prime}=\{1\}\))} & \multicolumn{3}{c|}{Domain Constrained} & \\ & & V-16 & R-18 & M-2 & V-16 & R-18 & M-2 & V-16 & R-18 & M-2 & V-16 & R-18 & M-2 & \\ \hline \multirow{2}{*}{BadNets} & w/o CLIP-CFE & 0.698 & 0.728 & 0.722 & 0.698 & 0.730 & 0.728 & 0.700 & 0.728 & 0.729 & 0.699 & 0.727 & 0.728 & 0.718 \\ & w/ CLIP-CFE & 0.700 & 0.730 & 0.728 & 0.701 & 0.731 & 0.723 & 0.698 & 0.730 & 0.726 & 0.701 & 0.730 & 0.724 & 0.719 \\ \hline \multirow{2}{*}{Blended} & w/o CLIP-CFE & 0.700 & 0.727 & 0.722 & 0.700 & 0.726 & 0.725 & 0.701 & 0.729 & 0.723 & 0.698 & 0.729 & 0.725 & 0.717 \\ & w/ CLIP-CFE & 0.700 & 0.730 & 0.727 & 0.701 & 0.729 & 0.727 & 0.699 & 0.730 & 0.724 & 0.700 & 0.731 & 0.727 & 0.719 \\ \hline \multirow{2}{*}{CLIP-UAP} & w/o CLIP-CFE & 0.702 & 0.730 & 0.727 & 0.702 & 0.729 & 0.727 & 0.701 & 0.730 & 0.725 & 0.702 & 0.731 & 0.729 & 0.720 \\ & w/ CLIP-CFE & 0.700 & 0.731 & 0.725 & 0.702 & 0.732 & 0.726 & 0.699 & 0.732 & 0.724 & 0.700 & 0.730 & 0.725 & 0.719 \\ \hline \multirow{2}{*}{CLIP-CFA} & w/o CLIP-CFE & 0.703 & 0.731 & 0.727 & 0.701 & 0.730 & 0.725 & 0.701 & 0.730 & 0.727 & 0.700 & 0.731 & 0.727 & 0.719 \\ & w/ CLIP-CFE & 0.702 & 0.729 & 0.729 & 0.701 & 0.730 & 0.727 & 0.702 & 0.731 & 0.725 & 0.702 & 0.730 & 0.727 & 0.720 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The Benign Accuracy (BA) on the CIFAR-100 dataset. All results are the mean of 5 different runs.
poisoning images generated by our methods appear more natural to human inspection compared to the baseline attacks. Additionally, incorporating CLIP-CFE has minimal impact on both PSNR and the natural appearance of the images, while achieving higher stealthiness in terms of SSIM.
### RQ4: Are proposed technologies effective for different poisoning settings?
**Experiments on different poison rates for number-constrained backdoor attacks.** We conduct ablation studies to assess the effectiveness of our proposed methods in reducing the number of poisoning samples (poisoning rates) for number-constrained backdoor attacks. The results depicted in Fig. 9 demonstrate the following: i) The attack success rate increases with higher poisoning rates for different attacks. ii) Our proposed CLIP-UAP and CLIP-CFA outperform the baseline techniques, BadNets [16] and Blended [5]. iii) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers.
**Experiments on different poison classes for class-constrained backdoor attacks.** We conduct ablation studies to assess the effectiveness of our proposed methods in increasing the number of poisoning classes for class-constrained backdoor attacks. The results presented in Fig. 10 demonstrate the following: i) The attack success rate increases with higher poisoning classes for different attacks. ii) The attack success rate of clean-label single-class attack is lower than that of dirty-label single-class attacks. iii) Our proposed methods, CLIP-UAP and CLIP-CFA, outperform the baseline techniques, BadNets [16] and Blended [5]. iv) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers.
**Experiments on different domain rates for domain-constrained backdoor attacks.** We conduct ablation studies to assess the effectiveness of our methods in increasing the domain rate for domain-constrained backdoor attacks. The results depicted in Fig. 11 demonstrate the following: i) The ASR increases with higher domain rates for different attacks. ii) Our proposed CLIP-UAP and CLIP-CFA outperform the baseline techniques, BadNets [16] and Blended [5]. iii) The incorporation of our proposed CLIP-CFE further enhances the poisoning effectiveness across different triggers.
**Experiments on different large pre-trained models.** We utilize the pre-trained CLIP model as the basis for our technologies. It is worth noting that the community has proposed various CLIP variants. Therefore, an important practical consideration is whether our proposed technologies remain robust when applied to different pre-trained CLIP models. To investigate this, we conduct ablation studies on different CLIP models for number-constrained backdoor attacks, as depicted in Fig. 12. The results demonstrate that our proposed technologies exhibit robustness across different CLIP models, with ViT-B/32 emerging as a competitive choice for all methods.
Figure 8: Visualizations of the poisoning samples with different triggers.
## 6 Limitations and Future Works
In this paper, we address the challenges of data-constrained backdoor attacks, which occur in more realistic scenarios where victims collect data from multiple sources and attackers cannot access the full training data. To overcome the performance degradation observed in previous methods under data-constrained backdoor attacks, we propose three technologies from two streams that leverage the pre-trained CLIP model to enhance the efficiency of poisoning. Our goal is to inspire the research community to explore these realistic backdoor attack scenarios and raise awareness about the threats posed by such attacks. In the following section, we discuss the limitations of our approach and outline potential future directions for backdoor learning research.
**Performance degradation in clean-label backdoor attacks.** Clean-label backdoor attacks present a significant challenge [69]. As shown in Fig. 7, previous methods exhibit a poor ASR, and our technologies show limited improvement in clean-label backdoor attacks when the poisoning rate is low. In future research, we will investigate the underlying reasons for this situation and explore more efficient attack methods specifically designed for clean-label backdoor attacks.
**Application limitations.** Our technologies depend on the CLIP model that is pre-trained on natural images, which may limit their applicability to certain domains such as medical images or remote sensing. In such cases, a possible solution
Figure 11: The attack success rate on the CIFAR-100 dataset with different domain rates. All results were computed as the mean of five different runs.
Figure 12: The attack success rate on the CIFAR-100 dataset across different pre-trained CLIP models for number-constrained backdoor attacks. All results are averaged over five separate runs.
Figure 10: The attack success rate on the CIFAR-100 dataset with different accessible class of poisoning samples, where 1 (0) and 1 (1) in the abscissa represent the clean-label and dirty-label single-class attacks, respectively. All results were computed as the mean of five different runs.
Figure 9: The attack success rate on the CIFAR-100 dataset with different poisoning rates. All results were computed as the mean of five different runs.
is to replace CLIP with a domain-specific pre-trained model, such as MedCLIP [36] for medical images or Satellite [1] for remote sensing, to adapt our methods to the target domain.
**Transfer to other domains.** The attack scenario we have defined is not limited to a specific domain and can be applied to other important applications, including backdoor attacks for malware detection, deepfake detection, and federated learning. In our future work, we plan to explore the design of realistic attack scenarios and efficient backdoor attacks specifically tailored for these applications.
|
2307.12790 | Compact & Capable: Harnessing Graph Neural Networks and Edge Convolution
for Medical Image Classification | Graph-based neural network models are gaining traction in the field of
representation learning due to their ability to uncover latent topological
relationships between entities that are otherwise challenging to identify.
These models have been employed across a diverse range of domains, encompassing
drug discovery, protein interactions, semantic segmentation, and fluid dynamics
research. In this study, we investigate the potential of Graph Neural Networks
(GNNs) for medical image classification. We introduce a novel model that
combines GNNs and edge convolution, leveraging the interconnectedness of RGB
channel feature values to strongly represent connections between crucial graph
nodes. Our proposed model not only performs on par with state-of-the-art Deep
Neural Networks (DNNs) but does so with 1000 times fewer parameters, resulting
in reduced training time and data requirements. We compare our Graph
Convolutional Neural Network (GCNN) to pre-trained DNNs for classifying
MedMNIST dataset classes, revealing promising prospects for GNNs in medical
image analysis. Our results also encourage further exploration of advanced
graph-based models such as Graph Attention Networks (GAT) and Graph
Auto-Encoders in the medical imaging domain. The proposed model yields more
reliable, interpretable, and accurate outcomes for tasks like semantic
segmentation and image classification compared to simpler GCNNs | Aryan Singh, Pepijn Van de Ven, Ciarán Eising, Patrick Denny | 2023-07-24T13:39:21Z | http://arxiv.org/abs/2307.12790v1 | Compact & Capable: Harnessing Graph Neural Networks and Edge Convolution for Medical Image Classification
###### Abstract
Graph-based neural network models are gaining traction in the field of representation learning due to their ability to uncover latent topological relationships between entities that are otherwise challenging to identify. These models have been employed across a diverse range of domains, encompassing drug discovery, protein interactions, semantic segmentation, and fluid dynamics research. In this study, we investigate the potential of Graph Neural Networks (GNNs) for medical image classification. We introduce a novel model that combines GNNs and edge convolution, leveraging the interconnectedness of RGB channel feature values to strongly represent connections between crucial graph nodes. Our proposed model not only performs on par with state-of-the-art Deep Neural Networks (DNNs) but does so with 1000 times fewer parameters, resulting in reduced training time and data requirements. We compare our Graph Convolutional Neural Network (GCNN) to pre-trained DNNs for classifying MedMNIST dataset classes, revealing promising prospects for GNNs in medical image analysis. Our results also encourage further exploration of advanced graph-based models such as Graph Attention Networks (GAT) and Graph Auto-Encoders in the medical imaging domain. The proposed model yields more reliable, interpretable, and accurate outcomes for tasks like semantic segmentation and image classification compared to simpler GCNNs. Code available at AnonRepo.
**Keywords:** Medical Imaging, Machine Vision, GNN, GCNN, Image classification.
## 1 Introduction
Medical image classification and segmentation play critical roles in the field of medical imaging. Although there have been considerable advancements in image classification, medical image classification faces unique challenges due to the diverse dataset modalities, such as X-ray, Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), Ultrasound (US), and Computed Tomography (CT). Variations within and between modalities, mainly stemming from the inherent differences in imaging technologies, complicate the classification process. Additionally, obtaining labeled training data is costly in the medical domain. Pre-trained DNNs address these issues through transfer learning techniques, yielding impressive results. However, DNNs exhibit limitations, including inductive bias, inefficient capture of spatial and local-level associations, and inconsistent performance across modalities [17, 20].
GNNs offer a solution to these complexities, handling variations in data with embedded relationships effectively and accommodating heterogeneous graph nodes [16]. The successful application of knowledge-based graph methods in medical diagnosis supports this notion. We compare the GNN architecture with Convolutional Neural Networks (CNNs) and discuss various types of Graph Convolutional Neural Networks (GCNNs). We propose a GCNN model integrated with Edge Convolution [21] (GCNN-EC) for medical image classification, performing graph convolution together with edge convolution for edge prediction. Edge convolution overcomes the limitations of the vanilla GCNN, thus improving classification. Our method enhances model performance with reduced training time and data requirements. This research validates the efficiency of graph-based learning for medical image data. In this study, we focus on classifying the
MedMNIST dataset [Yang et al., 2021], featuring 10 pre-processed datasets from various sources and modalities, with 708,069 images in 12 2D datasets. We narrow our research scope to six categories/classes containing 58,954 images with dimensions 28x28, as these classes represent diverse modalities, reflecting the compilation of images from various imaging techniques. These classes are AbdomenCT, BreastMRI, CXR, ChestCT, Hand, and HeadCT. The subsets are balanced.
We observe that our simple GCNN-EC outperforms leading state-of-the-art DNNs for specific MedMNIST dataset classes. The proposed model requires less training than the compared DNNs while using 100 times fewer parameters.
## 2 Prior Art
In this section, we will delve into the technicalities of CNNs, and compare their mechanisms with GCNNs. We will also shed light on three contemporary, state-of-the-art CNN models that have been utilized for the task of medical image classification. Furthermore, this section will introduce the diverse variants of GNNs, providing a comprehensive comparison from a technical standpoint. It will also cover the various applications of these models in medical domains.
### CNNs
CNNs[Lecun et al., 1998] owe their name to the convolution operation, which involves overlaying a kernel onto the image grid and sliding it across the grid to extract local information, such as details from neighboring pixels. Technically, the convolution operation involves performing a dot product between the filter's elements and the corresponding elements of the image grid, then storing the result in an output matrix (often termed a feature map or convolved feature). As illustrated in Figure 2, the dot product employed in the convolution process is an aggregation operation. The main objective of this operation is to consolidate image data into a compressed form, making it feasible to extract global-level features from an image.
Thus, convolution as a process systematically extracts spatial hierarchies or patterns, starting from local pixel interactions (low-level features) to more abstract concepts (high-level features) as we progress deeper into the network. Finally, the hierarchical feature extracted from the preceding convolution and pooling operations is compressed into a compact and linear representation. The flattened feature vector derived is used for various tasks such as classification, segmentation, or feature localization.
Figure 1: Sample MedMNIST data.
Figure 2: CNN architecture.
We have chosen three state-of-the-art DNNs that have demonstrated robust performance in image classification in the medical imaging domain for further discussion in this paper. The effectiveness of our proposed model is assessed in relation to these distinguished DNN models in the later section, thereby offering a comparative study.
**ResNet** [14] is a deep neural network architecture with a varying number of hidden layers, including a large number of convolutional layers, that works efficiently by using residual blocks [14], which allow the network to effectively learn the residual, i.e., the difference between the input and output features. ResNet has been one of the best-performing models on the ImageNet dataset [4] for classification tasks and has served as a skeleton for several DNNs that continue to use similar skip-connection methods to achieve state-of-the-art performance. It has been applied to the classification of medical image data with state-of-the-art results [15]; a ResNet-based model showed 99.05% and 98.59% testing accuracy for binary and multi-class breast cancer recognition, respectively.
**DenseNet** [13] is a densely connected deep-layered neural network architecture that also uses residual blocks. It exploits the potential of deep networks through feature reuse, producing more condensed models that are easy to train and highly parameter-efficient. Concatenating feature maps learned by different layers increases variation in the input of subsequent layers and improves efficiency. DenseNet uses the Network in Network architecture [12], which places multi-layer perceptrons in the filters of convolutional layers to extract more complicated features. DenseNet has increasingly been applied as the backbone model for various medical imaging tasks [16], from image registration to image embedding generation, which has further been used for tasks like segmentation and classification.
**EfficientNet**[12] makes use of techniques like compound scaling, that enable the efficient scaling of deep neural network architectures to meet specific requirements regarding data or resource limitations. Unlike other deep neural networks, EfficientNet achieves improved model performance without increasing the number of floating-point operations per second (FLOPS), resulting in enhanced efficiency. The method introduces the concept of efficient compound coefficients to uniformly scale the depth, width, and resolution of the network. When scaling a model by a factor of \(2^{N}\) in terms of computational resources, EfficientNet scales the network depth by \(\alpha^{N}\), network width by \(\beta^{N}\), and image size by \(\gamma^{N}\). These coefficients, namely \(\alpha\), \(\beta\), and \(\gamma\), are determined through a grid search on the base model. By employing this compound scaling approach, EfficientNet strikes a balance between network accuracy and efficiency. EfficientNet has been proven to produce excellent results for medical image classification [16] even in resource-constrained environments.
### GNNs
A GNN is a specialized kind of neural network tailored for handling graph-structured data. It exploits the attributes of nodes and edges to learn representations for nodes, edges, and the overall graph. Its working principle is iterative message passing, where each node gathers features from neighboring nodes to update its own feature set. CNNs demonstrate limitations in capturing the associations between features within an image [17]. However, these intricate interconnections can be effectively captured by representing images as graphs and then utilizing GNNs to comprehend the resulting interdependencies. Moreover, GNNs capture topological data features better than CNNs [15].
The intricate complexities of graph data, ranging from structures as varied as protein sequences and chemical molecules to pixels in medical images serving as nodes, necessitate the transformation of these structures into suitable vector spaces. This transformation is crucial for performing computations and analyses. However, this comes with the daunting task of handling graph isomorphism issues, which involve identifying topological similarities between different graphs. In solving these intricate problems, the significance of GNNs becomes especially apparent, specifically in conjunction with the Weisfeiler-Lehman (WL) Isomorphism algorithm[17].
WL algorithm, a prevalent technique in this context, generates graph embeddings by aggregating colors, or more general features (image features, etc), of proximate nodes, culminating in a histogram-like representation
for each graph. The conventional GNN architecture mirrors a neural rendition of the 1-WL algorithm, where '1' represents a single neighborhood hop. In this transformation, discrete colors evolve into continuous feature vectors, and neural network mechanisms are harnessed to aggregate information from the neighborhood of each node.
By virtue of this design, GNNs inherently embody a continuous variant of graph-based message passing similar to the WL algorithm. In this paradigm, details from a node's immediate surroundings are accumulated and relayed to the node, thereby facilitating learning from local graph structures[1]. This characteristic is at the core of the utility of GNNs in various domains requiring graph-based data analysis. Furthermore, we elaborate in detail on the specific type of GNN employed in this study, namely the GCNN.
There are two types of GCNNs:
1. GCNNs based on **spectral methods** (using convolutions via the convolution theorem [1]). Spectral methods fall into the category of transductive learning, where learning and inference take place on the entire dataset. Spectral CNNs (SCNN) [1] was the first implementation of CNNs on graphs, leveraging the graph Fourier transform [21] and defining the convolution kernel in the spectral domain. Examples include the Dynamic Graph Convolutional Network (DGCN), which has been effectively applied to detect relation heat maps in images for pose and gesture detection. HACT-Net [13] is a further example, which has been applied for the classification of Histopathological images.
2. GCNNs based on **spatial methods**. These GCNNs fall into the category of inductive learning, where learning and inference can be performed on a test and train dataset. They define convolution as a weighted average function over the neighborhood of the target vertex. For example, GraphSAGE [1] takes one-hop neighbors as neighborhoods and defines the weighting function as various aggregators over the neighborhood. The spatial GCNN is extremely robust due to its inductive learning which makes spatial GCNNs highly scalable.
We use a simple spectral GCNN \(f(X,A)\) that takes as input a matrix \(X\) of node features and an adjacency matrix \(A\), along with a layer-wise propagation rule. The matrix \(A\) is normalized using the methods of [10], since multiplying \(X\) by \(A\) would otherwise change the scale of the feature vectors, leading to disproportionate learning from neighbors. Equation 1 defines the aggregation operation from one layer to the next.
\[H^{(l+1)}=\sigma\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2} }H^{(l)}W^{(l)}\right). \tag{1}\]
\(\tilde{A}\) is the adjacency matrix of the graph \(\mathcal{G}\), \(\tilde{D}\) is the degree matrix and \(W^{(l)}\) is a trainable weight matrix. The result is passed to an activation function \(\sigma(\cdot)\). The output is then concatenated to get a new hidden state \(H^{(l+1)}\) for hidden layer \(l+1\) as shown in Figure 3.
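As a reference point, a minimal dense-matrix sketch of the propagation rule in Eq. 1 could look as follows (we add self-loops via \(\tilde{A}=A+I\), following the common convention of [10], which the text leaves implicit):

```python
import torch

def gcn_layer(H, A, W, act=torch.relu):
    """One step of Eq. 1: H_{l+1} = act(D^{-1/2} A_tilde D^{-1/2} H_l W_l)."""
    A_tilde = A + torch.eye(A.size(0), device=A.device)  # adjacency with self-loops
    d = A_tilde.sum(dim=1)                               # node degrees
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt            # symmetric normalization
    return act(A_hat @ H @ W)                            # aggregate, transform, activate
```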
Figure 3: A simple GCN with 3 hidden layers.

GCNNs have inherent limitations. They exhibit poor performance when confronted with dynamic graph structures, causing over-fitting to the training set. Furthermore, GCNNs are susceptible to over-smoothing [Magner et al., 2020], whereby adding more convolutional layers results in indistinguishable final embeddings.
In this section, we defined and elaborated on the different types of CNN used in this study along with the types of GNNs and an explanation of spectral GCNN which has been used in the proposed method. In the next section, we explain the proposed method that leverages the power of GNNs, while also overcoming its limitations.
## 3 Our work
In this work, we present GCNN-EC, which addresses the identified limitation of CNNs in capturing the inherent connections between features within an image. The proposed method aims to leverage both local and global inter-pixel relationships by incorporating edge convolution along with graph convolution. Our procedure involves three stages: edge convolution, graph convolution, and classification. We merge RGB values into a node feature and compute edge features using a dynamic filter; the system then processes the graph representation through multiple convolution layers before flattening the embedding for final classification. We explain these steps in detail in this section.
### Edge convolution
We begin by creating a node feature vector by combining RGB channel values. This is a vital step in transforming the RGB image data into a grid format, where each pixel is a node connected to adjacent pixels. Next, we use a dynamic filter [Brabandere et al., 2016] to learn edge features. This filter incorporates the node features from the immediate neighbors and also those at a two-hop distance, meaning it considers not just the nearest nodes but also the nodes connected to them. The filter is unique for each input and is learned by the network based on the node features and the Euclidean distance between the node feature vectors. The learned filter is stored as a registered buffer in the network and is not used during back-propagation; rather, it is used as an edge feature while being passed through the convolution layers. It is through this process that we generate an abstract understanding of the relationships between various components of the image. The graph augmented with node and edge features is further improved by an edge convolution layer [Wang et al., 2019], which performs convolution operations on the graph using the edge features. The edge convolution layer detects and enhances the edges or boundaries in an image (grid of pixels), focusing on sharp transitions in intensity or color that are indicative of object edges, and finally yields a more distinguishable graph representation.
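To make the edge-convolution step concrete, here is a from-scratch sketch of the edge function of Wang et al. [2019], \(h([x_i,\,x_j-x_i])\), with max aggregation over incoming edges (the `mlp` edge function and the use of `scatter_reduce`, which needs PyTorch >= 1.12, are implementation assumptions):

```python
import torch

def edge_conv(x, edge_index, mlp):
    # x: [N, F] node features; edge_index: [2, E] edges (src j -> dst i).
    src, dst = edge_index
    # Edge feature h([x_i, x_j - x_i]) for every edge, as in Wang et al.
    msg = mlp(torch.cat([x[dst], x[src] - x[dst]], dim=-1))
    # Max-aggregate messages at each destination node.
    out = torch.full((x.size(0), msg.size(-1)), float("-inf"), device=x.device)
    index = dst.unsqueeze(-1).expand_as(msg)
    return out.scatter_reduce(0, index, msg, reduce="amax", include_self=True)
```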
### Graph convolution
The graph representation enriched by edge convolution is passed through graph convolution layers, which capture features of the graph by incorporating the features of nodes and their connections to create a more accurate graph-level embedding. One of the strengths of the graph convolution layer is its ability to incorporate both local and global information. By considering the neighborhood relationships between nodes, it captures local patterns and structures within the graph. Additionally, by aggregating information from neighboring nodes iteratively, it gradually incorporates global information, allowing for a comprehensive understanding of the overall graph. Graph convolution alone suffers from over-smoothing; this limitation can be overcome by enhancing the quality of the graph representation with edge convolution, which captures meaningful edge information.
### Classification
The graph embedding, obtained by flattening the output of the graph convolution layers, is classified using a dense layer. This layer transforms the embedding by matrix multiplication and bias addition, thus learning weighted connections, and finally applies a non-linear activation, enabling accurate predictions based on the learned representation of the graph.
We have employed PyTorch to implement our pipeline, while the Monai framework has been utilized for medical image processing. We generate a graph using MedMNIST images as the input and incorporate the Dynamic Edge Convolution layer [22] to perform edge convolution. The resulting learned representation then undergoes 3 graph convolution layers. We fine-tuned the model using Optuna [1], obtaining a learning rate of 0.001 and a weight decay of 0.01. The parameter count in our models ranged from 24,967 to 67,938. We have used the Cross-Entropy loss function with the Adam optimizer and have trained our models for 4 epochs. The batch size used was 64, and the training, testing, and validation splits were 80%, 10%, and 10%, respectively. The GCNN-EC model architecture is shown in Figure 4.
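A minimal sketch of this pipeline using PyTorch Geometric is given below; the hidden width, the k of the dynamic edge convolution, and the mean-pooling head (the paper flattens the embedding instead) are assumptions made for brevity, while the optimizer settings follow the text:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import DynamicEdgeConv, GCNConv, global_mean_pool

class GCNN_EC(torch.nn.Module):
    def __init__(self, in_ch=3, hidden=64, num_classes=6):
        super().__init__()
        # Dynamic edge convolution learns edge-enhanced node features.
        self.edge = DynamicEdgeConv(
            torch.nn.Sequential(torch.nn.Linear(2 * in_ch, hidden), torch.nn.ReLU()),
            k=8)
        self.gcns = torch.nn.ModuleList(
            [GCNConv(hidden, hidden) for _ in range(3)])  # 3 graph conv layers
        self.head = torch.nn.Linear(hidden, num_classes)  # dense classifier

    def forward(self, x, edge_index, batch):
        x = self.edge(x, batch)
        for gcn in self.gcns:
            x = F.relu(gcn(x, edge_index))
        return self.head(global_mean_pool(x, batch))      # graph-level logits

model = GCNN_EC()
opt = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.01)
# Training: F.cross_entropy on the logits, 4 epochs, batch size 64 (as in the text).
```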
## 4 Result
In this section, we present the results achieved by our model on the 6 classes of the MedMNIST dataset to demonstrate the efficacy of a simple GNN compared with sophisticated DNNs. The GCNN-EC model converges to a stable loss value within 4 epochs. Our results are presented in Table 1, comparing the Area Under the Curve (AUC) and Accuracy (ACC) of our model with the DNN models. From the plot in Figure 5, it is evident that our method is comparable to DenseNet and outperforms ResNet and EfficientNet, establishing it as an effective classifier. The proposed method demonstrates superior performance compared to ResNet18 and EfficientNet-B0 while performing on par with DenseNet121. Notably, the GCNN-EC model utilizes 100 times fewer parameters than the three DNN models considered in this study. Furthermore, our model achieves a remarkable accuracy of 99.13% on the MNIST dataset (as indicated in the "gcnn-ec-mnist.py" file in the code).
## 5 Conclusion
Our model exhibited superior performance when compared to renowned CNNs like ResNet18 and EfficientNet-B0, while achieving results comparable to DenseNet121 on the MedMNIST dataset. Notably, our model achieved
\begin{table}
\begin{tabular}{|l|c c|c c|c c|c c|c c|c c|r|} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{AbdomenCT} & \multicolumn{2}{c|}{BreastMRI} & \multicolumn{2}{c|}{CXR} & \multicolumn{2}{c|}{ChestCT} & \multicolumn{2}{c|}{Hand} & \multicolumn{2}{c|}{HeadCT} & \multirow{2}{*}{Parameters} \\ \cline{2-13} & AUC & ACC & AUC & ACC & AUC & ACC & AUC & ACC & AUC & ACC & AUC & ACC & \\ \hline ResNet18 & 0.800 & 0.839 & 0.897 & 0.899 & 0.832 & 0.842 & 0.901 & 0.940 & 0.915 & 0.921 & 0.733 & 0.762 & 11,689,512 \\ \hline EfficientNet-B0 & 0.901 & 0.907 & 0.905 & 0.918 & 0.958 & 0.960 & **0.913** & **0.948** & 0.907 & 0.911 & 0.874 & 0.894 & 4,014,658 \\ \hline DenseNet121 & **0.936** & **0.942** & 0.961 & 0.971 & **0.972** & **0.985** & 0.887 & 0.901 & **0.916** & **0.925** & **0.899** & **0.914** & 7,978,856 \\ \hline GCNN-EC (ours) & 0.876 & 0.882 & **0.983** & **0.985** & 0.957 & 0.965 & 0.748 & 0.813 & 0.886 & 0.905 & 0.869 & 0.874 & **24,967** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of DNNs with the proposed method on MedMNIST.
Figure 4: GCNN-EC architecture.
this with significantly fewer parameters (GCNN-EC: 24,967 vs. ResNet: 11.69M, EfficientNet: 4.01M, DenseNet: 7.98M), highlighting its efficiency and effectiveness in capturing meaningful features. This efficiency suggests the possibility of training our GCNN with significantly less data, an important factor in the medical field where properly labeled data is scarce and expensive.
## Acknowledgments
This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant number 18/CRT/6049. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
2305.06822 | Implicit Neural Networks with Fourier-Feature Inputs for Free-breathing
Cardiac MRI Reconstruction | Cardiac magnetic resonance imaging (MRI) requires reconstructing a real-time
video of a beating heart from continuous highly under-sampled measurements.
This task is challenging since the object to be reconstructed (the heart) is
continuously changing during signal acquisition. In this paper, we propose a
reconstruction approach based on representing the beating heart with an
implicit neural network and fitting the network so that the representation of
the heart is consistent with the measurements. The network in the form of a
multi-layer perceptron with Fourier-feature inputs acts as an effective signal
prior and enables adjusting the regularization strength in both the spatial and
temporal dimensions of the signal. We study the proposed approach for 2D
free-breathing cardiac real-time MRI in different operating regimes, i.e., for
different image resolutions, slice thicknesses, and acquisition lengths. Our
method achieves reconstruction quality on par with or slightly better than
state-of-the-art untrained convolutional neural networks and superior image
quality compared to a recent method that fits an implicit representation
directly to Fourier-domain measurements. However, this comes at a relatively
high computational cost. Our approach does not require any additional patient
data or biosensors including electrocardiography, making it potentially
applicable in a wide range of clinical scenarios. | Johannes F. Kunz, Stefan Ruschke, Reinhard Heckel | 2023-05-11T14:14:30Z | http://arxiv.org/abs/2305.06822v2 | # Implicit Neural Networks with Fourier-Feature Inputs for Free-breathing Cardiac MRI Reconstruction
###### Abstract
In this paper, we propose an approach for cardiac magnetic resonance imaging (MRI), which aims to reconstruct a real-time video of a beating heart from continuous highly under-sampled measurements. This task is challenging since the object to be reconstructed (the heart) is continuously changing during signal acquisition. To address this challenge, we represent the beating heart with an implicit neural network and fit the network so that the representation of the heart is consistent with the measurements. The network in the form of a multi-layer perceptron with Fourier-feature inputs acts as an effective signal prior and enables adjusting the regularization strength in both the spatial and temporal dimensions of the signal. We examine the proposed approach for 2D free-breathing cardiac real-time MRI in different operating regimes, i.e., for different image resolutions, slice thicknesses, and acquisition lengths. Our method achieves reconstruction quality on par with or slightly better than state-of-the-art untrained convolutional neural networks and superior image quality compared to a recent method that fits an implicit representation directly to Fourier-domain measurements. However, this comes at a higher computational cost. Our approach does not require any additional patient data or biosensors including electrocardiography, making it potentially applicable in a wide range of clinical scenarios.
## 1 Introduction
Real-time magnetic resonance imaging (MRI) is a dynamic imaging technique for assessing the anatomic structure and function of moving organs in the human body. The reconstruction of cardiac function is challenging due to the relatively fast organ movements compared to the achievable encoding speed. In this paper, we consider the reconstruction of free-breathing cardiac real-time MRI data, where the goal is to reconstruct a video of the beating heart from highly under-sampled data continuously acquired over multiple cardiac and respiratory cycles. The video depicts the cardiac function in real-time meaning that arrhythmia and unprecedented motion can be imaged without assuming periodicity of the motion. This task is especially difficult when only very few samples are collected at any given time instance during the cardiac and respiratory cycles.
Various reconstruction methods for cardiac MRI are based on gated or triggered data [1, 2]. Such methods bin the measurements according to different times in the cardiac and respiratory cycle. The bins are then reconstructed independently using methods for static MRI, or they are reconstructed jointly using signal priors for regularizing among bins [1, 2, 3]. However, binning can lead to artifacts since the heart may not be exactly in the same position at a given point in the cardiac and respiratory cycle, and periodicity of the motion needs to be assumed.
Another class of methods does not rely on gating or triggering, but reconstructs images from continuously acquired un-gated data by regularizing in the spatial and temporal dimensions with hand-crafted signal priors, for example with total-variation norm penalties [14] and through imposing low-rank priors [15, 16, 17, 18, 19]. Thereby, the heart can be imaged in real-time without binning-related inaccuracies.
With the advent of deep learning, signal priors can be learned from data. Supervised methods have been proposed for dynamic MRI that are trained on pairs of measurement data and corresponding ground-truth images [15, 16, 17, 18, 19]. However, ground-truth fully sampled images are generally not available for real-time MRI without assuming periodicity of the motion.
A recent elegant approach that avoids the use of training data while still benefiting from the prior inherent in neural networks is untrained networks, which regularize by fitting a neural network to the measurement data. Untrained neural networks are typically based on convolutional neural networks (CNNs) [19, 16, 17, 18], since CNNs have a provable bias towards fitting smooth signals [13, 14].
In this paper, we propose an untrained approach for cardiac real-time MRI that is based on representing the beating heart with a multi-layer perceptron (MLP) with Fourier-feature inputs and fitting the network so that the representation is consistent with the measurements. Using a Fourier-feature MLP is critical for our approach since it enables us to impose a strong spatial and temporal prior on the heart to be reconstructed. Our contributions are as follows:
* We propose an approach for cardiac real-time MRI based on parameterizing the beating heart with an MLP with Fourier-feature inputs. The method allows controlling the spatial and temporal regularization strength through the setup of the Fourier-feature inputs.
* We compare our approach to the time-dependent deep image prior (t-DIP) [19], a state-of-the-art untrained method based on CNNs. We study the reconstruction quality and computational cost on various datasets in different operating regimes, including different image resolutions, slice thicknesses, and acquisition lengths, and find our method performs on par in terms of image quality, sometimes even marginally better, but at a higher computational cost.
* Compared to two very recently proposed approaches [19, 14] also based on implicit neural representations, we achieve better image quality at a higher computational cost. Specifically, Huang et al. [19] enable cardiac MRI by fitting a Fourier-feature MLP directly to the measurements in the Fourier domain. Fitting the MLP in the Fourier domain has the advantage of being faster compared to fitting it in the image domain. However, this comes at a significant loss in image quality, as we show later, since Fourier-feature MLPs are not a good model for representing an image in the Fourier domain. Specifically, an image is relatively smooth, whereas the spectrum of an image is not smooth, and a Fourier-feature MLP has a bias towards fitting smooth signals. Feng et al. [14]'s method is based on fitting an MLP with hash encoding to the data. Using a hash encoding also has computational benefits over our approach, but a hash encoding is not an effective image prior, thus the method requires additional explicit regularization.
Related Work
In this section, we briefly discuss classical approaches to cardiac real-time MRI, implicit neural networks and CNN-based neural networks for medical image reconstruction.
Approaches to cardiac MRI with sparse and low-rank regularization techniquesA variety of classical approaches to cardiac MRI are based on regularization in the spatial and temporal dimensions with hand-crafted signal priors. For example, Feng et al. [14] impose a total-variation norm penalty over time and Feng et al. [14] impose a total-variation penalty over cardiac phases.
Another line of work incorporates temporal relations by assuming low-rank structures within the temporal series of images, through low-rank tensor decompositions [15], through decompositions into image patches of low-rank [16], and through multi-scale low-rank decompositions [21].
Besides these linear signal models, methods have been developed that assume low-rankness in a kernel space [17, 18] or assume that the images lie on a smooth low-dimensional manifold [19, 1].
Implicit neural networksOur work relies on representing time-varying signals with an implicit neural network, i.e., a function parameterized by a neural network that maps a spatial coordinate vector and a time variable to a pixel value. Implicit neural networks are successfully used for representing images and volumes in a variety of applications. The architecture of the network is important, for images and volumes transforming the coordinates with a Fourier-feature map before passing them through a multi-layer perceptron works well, and most current architectures including NeRF [19], SIREN [18], and MLPs with Fourier-feature inputs [16] use this architecture.
Implicit networks for medical imagingImplicit neural networks are excellent image models and can therefore be used as an image prior for image reconstruction tasks, as demonstrated by Tancik et al. [16] on a toy dataset. Subsequently, Shen et al. [20] have improved the reconstruction quality for static CT and MRI imaging by pre-training the network on a previously reconstructed image of the same subject. Shen et al. [20] also reconstructed a temporal series of 3D images, and used the first image for pre-training the network. Contrary to our approach, the method does not model temporal variations.
Implicit networks have also been used for super-resolution tasks in medical imaging, for example for obtaining high-resolution 3D MRI [22] from low-resolution 2D images, and for building a scale-agnostic model for MRI [23].
Implicit networks for dynamic MRI reconstructionAs mentioned before, most related to our work are two recent methods using implicit networks for dynamic MRI reconstruction. Huang et al. [11] introduced the method NIK which uses a SIREN network [18] with additional explicit regularization to represent the signal in k-space. It outputs a k-space value given the k-space coordinate, the time, and the measurement coil index. Once the network has been trained on the measured frequencies, it can be evaluated at the missing k-space frequencies to yield a reconstructed k-space. We compare our approach to a variant of NIK called k-space Fourier-feature MLP (KFMLP) that we introduce in the methodology section.
Feng et al. [14] learn a neural representation of the dynamic object in the image domain. The implicit network is implemented by an MLP with a hash-encoding for the input coordinates. The MLP is small in size compared to our methods and can be evaluated very efficiently. However, the method relies on additional explicit regularization of the images' temporal total variation and an additional loss term to enforce low-rankness of the solution. Thus, the method relies on explicit regularization and not on an implicit bias inherent to the network architecture as we do.
Untrained methods based on CNNsOur approach relies on regularization with a neural network in the form of an MLP with Fourier-feature inputs. We find that the network has a regularizing effect that is very similar to state-of-the-art untrained CNNs.
Untrained neural networks enable image reconstruction by fitting a neural network to the measurement data, as observed by Ulyanov et al. in the seminal DIP paper [13]. Very simple CNNs enable excellent signal reconstruction performance without any training since CNNs can be excellent image priors [12], and provably reconstruct smooth signals [15, 16]. Untrained networks can outperform classical sparsity based reconstruction for accelerated MRI [17].
The method time-dependent deep image prior (t-DIP) [21] extends the deep image prior [13] to dynamic MRI. Like our approach, the t-DIP fits a single model to the entire series of images, and time is used as an input coordinate. In contrast to using a CNN to regularize in the spatial dimension, we model both the temporal and the spatial dimensions through an implicit network.
The method Gen-SToRM [18] trains a CNN to generate a temporal series of images from a temporal series of low-dimensional network inputs. The network inputs are trained jointly with the CNN. The temporal derivative of the network inputs and the CNNs Jacobian are regularized to favour reconstructions that are on a smooth low-dimensional manifold in image space and that vary smoothly in time. A conceptually similar method has been applied for video reconstruction [1] aside from the context of MRI reconstruction.
## 3 Problem Formulation
Our goal is to reconstruct a video of a moving object from sequentially acquired under-sampled linear measurements. We consider an object \(\mathbf{x}(\sigma)\in\mathbb{R}^{w\times h}\) in the form of an image with width \(w\) and height \(h\) that is parameterized by a motion state \(\sigma\). In our setup, the object \(\mathbf{x}(\sigma)\) is the image of a heart at a certain state in the cardiac and respiratory cycle.
We consider the standard MRI measurement model, where we collect noisy linear measurements
\[\mathbf{y}_{\tau,c}=\mathbf{M}_{\tau}\mathbf{FS}_{c}\mathbf{x}(\sigma_{\tau})+\mathbf{n}_{\tau,c} \tag{1}\]
at the receiver coils \(c=1,\dots,C\). Here, \(\mathbf{F}\) is the 2-D Fourier matrix, \(\mathbf{S}_{c}\) is a diagonal matrix containing the sensitivities of receiver coil \(c\), and \(\mathbf{M}_{\tau}\) is a binary sampling mask encoding the frequencies collected during time \(\tau\). In this work, we consider Cartesian sampling patterns. The measurements are distorted by additive noise \(\mathbf{n}_{\tau,c}\).
Note that the object \(\mathbf{x}(\sigma)\) is changing continuously. The measurement model (1) assumes that the object is essentially constant during a very short time frame which we index by \(\tau\). During this short time frame, only very few measurements (frequencies as selected by the mask \(\mathbf{M}_{\tau}\)) are collected.
The measurements from different coils can be stacked into a single measurement vector
\[\mathbf{y}_{\tau}=\mathbf{A}_{\tau}\mathbf{x}(\sigma_{\tau})+\mathbf{n}_{\tau},\]
where \(\mathbf{y}_{\tau}=[\mathbf{y}_{\tau,1}^{T},\ldots,\mathbf{y}_{\tau,C}^{T}]^{T}\) and \(\mathbf{n}_{\tau}=[\mathbf{n}_{\tau,1}^{T},\ldots,\mathbf{n}_{\tau,C}^{T}]^{T}\). The matrix \(\mathbf{A}_{\tau}\) is the resulting forward map for frame \(\tau\).
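For reference, the per-frame forward map of Eq. 1 can be sketched directly (a centered Cartesian FFT is assumed; `x` is the complex image, `sens` the coil sensitivities \(\mathbf{S}_c\), and `mask` the binary pattern \(\mathbf{M}_\tau\)):

```python
import torch

def forward_model(x, sens, mask):
    # x: [h, w] complex image, sens: [C, h, w] coil maps, mask: [h, w] binary.
    coil_imgs = sens * x                                  # S_c x for every coil
    kspace = torch.fft.fftshift(
        torch.fft.fft2(torch.fft.ifftshift(coil_imgs, dim=(-2, -1)), norm="ortho"),
        dim=(-2, -1))                                     # 2-D Fourier transform F
    return mask * kspace                                  # M_tau keeps sampled lines
```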
Our goal is to reconstruct the image series (or video) \(\mathbf{x}(\sigma_{\tau}),\,\tau=1,\ldots,T\) from the noisy measurements \(\mathbf{y}_{\tau}\). During each time frame indexed by \(\tau\), only very few measurements are collected. Therefore, reconstructing each frame individually results in poor image quality, even if we take prior information about the image into account. Successful reconstruction relies on using prior information about the images in spatial directions as well as in the temporal direction.
Note that in our problem formulation, the index \(\tau\) refers to a short time frame that is used for binning the measurement data such that motion can be neglected. This time frame may not necessarily correspond to the time frame that is used in the video that is ultimately rendered. Thus, the generation of the final video might require re-sampling at a different frame rate. An advantage of our method based on implicit representations is that we can use different frame rates for reconstruction and for the output shown to the end-user.
## 4 Methods
Our FMLP method is based on parameterizing the object \(\mathbf{x}(\sigma)\) with a Fourier-feature MLP and fitting the neural network to the measurement data. Once fitted, we query the neural network to generate a video as an output. The Fourier-feature MLP we use is a good prior for a (slowly) time-varying smooth object, and we use no other regularization for reconstruction.
We parameterize the object \(\mathbf{x}(\sigma)\) by a neural network \(f_{\mathbf{\theta}}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}\) mapping a spatio-temporal coordinate vector \(\mathbf{c}=[x,y,t]^{T}\) to a complex image intensity value, represented by a real and an imaginary part. We focus on 2D images, but the method can easily be extended to 3D by adding another spatial coordinate so that \(\mathbf{c}=[x,y,z,t]\). The coordinates \(x\) and \(y\) denote physical locations within the field of view measured in meters, and the time-coordinate \(t\) is time measured in seconds. The origins of the spatial and temporal coordinate axes can be set arbitrarily as the particular neural network that we use is shift-invariant. We define \(t=0\,\mathrm{s}\) as the beginning of the measurement process. For computing a pixel-based image \(\mathbf{f}_{\mathbf{\theta}}(t_{\tau})\) at time \(t_{\tau}\), the network \(f_{\mathbf{\theta}}\) is evaluated on a regular grid of spatial coordinates within the field of view with the temporal coordinate fixed to \(t_{\tau}\)
Figure 1: The FMLPs’ network consists of separate spatial and temporal Fourier-feature embeddings that are concatenated, followed by an MLP that outputs the complex image intensity at the specified coordinate.
We fit a network by minimizing the reconstruction loss defined as
\[\mathcal{L}(\mathbf{\theta})=\frac{1}{T}\sum_{\tau=1}^{T}\left\|\mathbf{A}_{\tau}\mathbf{f}_{ \mathbf{\theta}}(t_{\tau})-\mathbf{y}_{\tau}\right\|_{2}^{2}.\]
The time \(t_{\tau}\) is chosen to be at the center of the acquisition time window of frame \(\tau\) and the spatial coordinate grid used for computing \(\mathbf{f}_{\mathbf{\theta}}(t_{\tau})\) is matched to the Cartesian sampling grid in k-space. After fitting, reconstructed images can be sampled from the implicit representation as \(\mathbf{\hat{x}}(\sigma_{\tau})=\mathbf{f}_{\mathbf{\theta}}(t_{\tau})\).
The network architecture is critical for performance. Our architecture is based on using Fourier-features as inputs to a ReLU-MLP and is depicted in Figure 1. The Fourier-feature inputs are used in a variety of implicit neural network architectures [11, 12, 13]. The Fourier-feature layer embeds the spatial coordinates and the temporal coordinate separately as \(\mathbf{\gamma}(\mathbf{c})=[\mathbf{\gamma}_{\text{spatial}}([x,y])\), \(\mathbf{\gamma}_{\text{temporal}}(t)]^{T}\) with
\[\mathbf{\gamma}_{\text{spatial}}([x,y]) =\left[\sin\left(\mathbf{B}\begin{bmatrix}s_{\text{x}}x\\ s_{\text{y}}y\end{bmatrix}\right),\,\cos\left(\mathbf{B}\begin{bmatrix}s_{\text{ x}}x\\ s_{\text{y}}y\end{bmatrix}\right)\right]^{T},\] \[\mathbf{\gamma}_{\text{temporal}}(t) =\left[\sin(\mathbf{b}s_{\text{t}}t),\,\cos(\mathbf{b}s_{\text{t}}t) \right]^{T}.\]
Here, the \(\sin\) and \(\cos\) functions are applied element-wise. The coordinates are scaled by hyper-parameters \(s_{\text{x}}\), \(s_{\text{y}}\), and \(s_{\text{t}}\) with units \(\frac{1}{\text{m}}\) and \(\frac{1}{\text{s}}\), respectively, which control the variance of the angular frequencies in the spatial and temporal dimensions. The elements in the matrix \(\mathbf{B}\) and the vector \(\mathbf{b}\) are drawn independently from a zero-mean Gaussian distribution with unit variance \(\mathcal{N}\left(0,\,1\right)\) and are fixed and not optimized over. We choose \(\mathbf{B}\in\mathbb{R}^{256\times 2}\) and \(\mathbf{b}\in\mathbb{R}^{64}\) for a total feature size of \(512+128\).
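A minimal PyTorch sketch of this embedding is given below; the shapes follow the text (\(\mathbf{B}\in\mathbb{R}^{256\times 2}\), \(\mathbf{b}\in\mathbb{R}^{64}\)), while the default scale values and interface are illustrative assumptions.

```python
import torch

class FourierFeatures(torch.nn.Module):
    """Separate spatial and temporal Fourier-feature embeddings (a sketch)."""
    def __init__(self, s_x=30.0, s_y=30.0, s_t=1.0, n_spatial=256, n_temporal=64):
        super().__init__()
        # Frequencies are drawn once from N(0, 1) and kept fixed (not optimized).
        self.register_buffer("B", torch.randn(n_spatial, 2))
        self.register_buffer("b", torch.randn(n_temporal))
        self.register_buffer("scale_xy", torch.tensor([s_x, s_y]))
        self.s_t = s_t

    def forward(self, coords):                 # coords: (N, 3), columns [x, y, t]
        z_xy = (coords[:, :2] * self.scale_xy) @ self.B.T   # (N, 256)
        z_t = coords[:, 2:3] * (self.s_t * self.b)          # (N, 64)
        return torch.cat([z_xy.sin(), z_xy.cos(), z_t.sin(), z_t.cos()], dim=1)
```

The output dimension is \(2\cdot 256 + 2\cdot 64 = 512 + 128\), matching the feature size stated above.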
We observed that embedding the spatial and temporal coordinates separately leads to sharper reconstructions compared to the intuitive setup of having one Fourier-feature embedding for all three coordinates together, see Section A.4 in the supplementary material.
The MLP takes the Fourier-feature vector \(\mathbf{\gamma}(\mathbf{c})\) as input. After each linear hidden layer, the ReLU activation is applied and the output feature vector is normalized to zero mean and unit variance. The output layer has two neurons with a linear activation function and without normalization, which output the real and imaginary part of the complex image intensity value, respectively. The weights of the linear layers are initialized by drawing from \(\mathcal{N}\left(0,\,\sigma_{\text{linear}}^{2}\right)\), where the variance \(\sigma_{\text{linear}}^{2}\) can be tuned. The final output is multiplied by a constant \(s_{\text{out}}\) that is tuned as a hyperparameter depending on the scaling of the k-space data and the sensitivity maps. In our implementation, we set \(\sigma_{\text{linear}}=0.01\) and the MLP consists of 7 hidden layers with 512 neurons each if not explicitly stated otherwise.
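The MLP body can be sketched as follows. We normalize each sample's feature vector to zero mean and unit variance after every hidden layer; whether the normalization is taken over the feature dimension or the batch is not fully specified above, so the per-sample choice here is an assumption.

```python
class FMLP(torch.nn.Module):
    """ReLU-MLP on top of the Fourier features; two linear outputs give the
    real and imaginary parts of the complex image intensity."""
    def __init__(self, in_dim=640, width=512, depth=7, s_out=1.0, sigma_linear=0.01):
        super().__init__()
        dims = [in_dim] + [width] * depth
        self.hidden = torch.nn.ModuleList(
            torch.nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:]))
        self.out = torch.nn.Linear(width, 2)
        self.s_out = s_out
        for layer in list(self.hidden) + [self.out]:
            torch.nn.init.normal_(layer.weight, std=sigma_linear)

    def forward(self, z):
        for layer in self.hidden:
            z = torch.relu(layer(z))
            # Normalize the feature vector to zero mean and unit variance.
            z = (z - z.mean(dim=1, keepdim=True)) / (z.std(dim=1, keepdim=True) + 1e-8)
        return self.s_out * self.out(z)
```

A full model is then obtained by composing the two modules, e.g. `net = torch.nn.Sequential(FourierFeatures(), FMLP())`.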
### Baseline methods
In our experimental results, we consider two methods, described in more detail below, for comparison. The first method is a variant of Huang et al. [14]'s NIK method that is based on fitting a Fourier-feature MLP in the Fourier domain. We consider a variant of the method and not the original one, since code for the original is not available, our sampling setup is different, and we wanted to obtain the best performance when using a Fourier-feature MLP in the Fourier domain, which required us to deviate from NIK. The second method we compare to is the time-dependent DIP [20], a state-of-the-art untrained method based on CNNs.
#### 4.1.1 Fourier-feature MLP as implicit k-space prior (KFMLP)
The KFMLP is conceptually similar to the NIK [14] that has been proposed recently. It parameterizes the time-varying k-space \(\mathbf{FSx}(\sigma)\) with a neural network. The network \(f_{\mathbf{\theta}}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2C}\) maps frequency-time coordinates \([k_{\mathrm{x}},k_{\mathrm{y}},t]\) to complex k-space values that are represented by a real and an imaginary part for each of the \(C\) receiver coils. The frequency-coordinates \(k_{\mathrm{x}}\) and \(k_{\mathrm{y}}\) are normalized to \([-\pi,\pi)\) and the time-coordinate \(t\), as for the FMLP, measures time in seconds. The network is fitted to the measured k-space data at frequency-time coordinates along the sampling trajectory. For computational purposes, we segment the trajectory into frames that are treated as samples of the dataset. Specifically, we evaluate the network \(f_{\mathbf{\theta}}\) at the k-space coordinates and sampling times that correspond to the measurements in the vectors \(\mathbf{y}_{\tau}\). The resulting reconstructed k-space along the trajectory of frame \(\tau\) is denoted as \(\mathbf{f}_{\mathbf{\theta},\tau}^{\mathrm{traj}}\). We minimize a similar loss as proposed for the NIK [14],
\[\mathcal{L}(\mathbf{\theta})=\frac{1}{T}\sum_{\tau=1}^{T}\mathcal{L}_{\mathrm{ HDR}}(\mathbf{f}_{\mathbf{\theta},\tau}^{\mathrm{traj}},\mathbf{y}_{\tau})+R(\mathbf{f}_{\mathbf{ \theta},\tau}^{\mathrm{traj}}).\]
The loss is composed by a high dynamic range reconstruction loss and an explicit k-space regularization term. The high dynamic range loss is defined as
\[\mathcal{L}_{\mathrm{HDR}}(\hat{\mathbf{y}}_{\tau},\mathbf{y}_{\tau})=\left\|\frac{ \hat{\mathbf{y}}_{\tau}-\mathbf{y}_{\tau}}{|\operatorname{sg}(\hat{\mathbf{y}}_{\tau})|+ \varepsilon}\right\|_{2}^{2}.\]
Here, the operator \(\operatorname{sg}(\cdot)\) stops the propagation of the gradient during back-propagation, \(|\cdot|\) computes the absolute value accounting for the positive and negative k-space values, and \(\varepsilon>0\) is a hyperparameter that adjusts the compression of the dynamic range. The division is computed element-wise. Note that our definition of the high dynamic range loss differs from that of the NIK, which does not take the absolute value in the denominator.
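In code, the stop-gradient is simply a `detach()`. A sketch of this loss, assuming complex-valued tensors and following our definition with the absolute value in the denominator:

```python
def hdr_loss(y_hat, y, eps=1e4):
    """High dynamic range loss; eps controls the dynamic-range compression."""
    denom = y_hat.detach().abs() + eps  # detach() implements the sg(.) operator
    return (((y_hat - y) / denom).abs() ** 2).sum()
```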
The k-space regularization term \(R(\mathbf{f}_{\mathbf{\theta},\tau}^{\mathrm{traj}})\) is adopted from the NIK and is defined as
\[R(\mathbf{f}_{\mathbf{\theta},\tau}^{\mathrm{traj}})=\lambda_{\mathrm{denoiser}} \mathcal{L}_{\mathrm{HDR}}(\mathbf{f}_{\mathbf{\theta},\tau}^{\mathrm{traj}},\mathbf{K}_{ \tau}\mathbf{f}_{\mathbf{\theta},\tau}^{\mathrm{traj}}),\]
where \(\mathbf{K}_{\tau}\) is a diagonal matrix with entries \(e^{-d/2\sigma^{2}}\) that weight the k-space values according to their respective coordinate distances \(d=\sqrt{k_{\mathrm{x}}^{2}+k_{\mathrm{y}}^{2}}\) to the k-space center. The hyper-parameters \(\lambda_{\mathrm{denoiser}}>0\) and \(\sigma>0\) adjust the regularization strength. In ablation studies, we find that \(\varepsilon=10^{4}\), \(\sigma=10\), and \(\lambda_{\mathrm{denoiser}}=0.1\) yield good results for our datasets. Note that the high dynamic range reconstruction loss and the k-space regularization term are not essential, but they yield a small improvement compared to using the standard \(l_{2}\) reconstruction loss without any explicit k-space regularization, see Section B.2 and Section B.3 in the supplementary materials.
After training, a reconstructed image at time \(t_{\tau}\) is obtained by evaluating the network at all frequencies on the Cartesian grid and combining the reconstructed k-spaces via coil combination. The reconstructed k-space on the Cartesian grid, with time coordinate fixed to \(t_{\tau}\), is denoted by \(\mathbf{f}_{\mathbf{\theta}}^{\text{grid}}(t_{\tau})\) and stacks the reconstructed k-spaces \(\mathbf{f}_{\mathbf{\theta},c}^{\text{grid}}(t_{\tau})\) of the receiver coils \(c=1,\ldots,C\). A coil-combined image is computed as
\[\mathbf{\hat{x}}(\sigma_{\tau})=\sum_{c=1}^{C}\bar{\mathbf{S}}_{c}^{*}\mathbf{F}^{-1}\mathbf{f}_ {\mathbf{\theta},c}^{\text{grid}}(t_{\tau}),\]
where \(\bar{\mathbf{S}}_{c}^{*}\) denotes the complex conjugate of the normalized sensitivity map of coil \(c\) given as
\[\bar{\mathbf{S}}_{c}=\left(\sum_{j=1}^{C}\mathbf{S}_{j}^{*}\mathbf{S}_{j}\right)^{-1}\mathbf{S }_{c}.\]
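A sketch of the coil combination above, assuming `kspace_grid` holds the per-coil Cartesian k-spaces and `sens_maps` the complex sensitivities; the FFT centering conventions here are an assumption:

```python
def coil_combine(kspace_grid, sens_maps, eps=1e-12):
    """kspace_grid, sens_maps: complex tensors of shape (C, H, W)."""
    norm = (sens_maps.conj() * sens_maps).sum(dim=0)    # sum_j S_j^* S_j, (H, W)
    sens_norm = sens_maps / (norm + eps)                # normalized maps S_bar_c
    images = torch.fft.ifft2(torch.fft.ifftshift(kspace_grid, dim=(-2, -1)))
    return (sens_norm.conj() * images).sum(dim=0)       # coil-combined image
```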
The KFMLP uses the same network architecture as the FMLP with Fourier-features that are given as input to an MLP with ReLU activation function and normalization. The k-space coordinates are embedded in a separate Fourier-feature vector from the time coordinate. Again, the coordinate scales \(s_{\text{x}}\), \(s_{\text{y}}\), and \(s_{\text{t}}\) are tuned as hyper-parameters.
The KFMLP architecture differs slightly from the NIK architecture that uses a sinusoidal activation function, jointly embeds the time and k-space coordinates, and treats the coils \(c=1,\ldots,C\) as an additional input dimension.
#### 4.1.2 Time-dependent DIP
We implement the time-dependent DIP [20] which is a state-of-the-art method for untrained reconstruction. It uses a CNN with time-varying network inputs as implicit prior. The time-varying network inputs are generated by embedding a temporal input coordinate on a helix trajectory and then mapping the embedding to the CNN's input features using a small fully connected network.
The helix is a trajectory in \(\mathbb{R}^{3}\) with unit radius and slope \(z_{\text{slack}}\) that we tune as a hyperparameter. The angular frequency of the helix introduces prior information about the heart rate and periodicity of motion. For matching the angular frequency of the helix with the heart rate, an estimate of the number of cardiac cycles is required, which we obtain from an electrocardiogram (ECG).
Due to the validation lines that are extracted at random positions, the frames are not equally spaced in time. We account for the deviation by modifying the sampled positions on the helix trajectory. Instead of sampling equally spaced points based on the frame index \(k\) as proposed by the authors, we use the measurement time \(t_{\tau}\) to ensure that the extraction of validation lines does not diminish the performance of the t-DIP. The sampled positions on the helix are mapped to a more expressive latent space by a small MLP called MapNet. The output of the MapNet is reshaped as input for the subsequent CNN.
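The time-to-helix mapping can be sketched as below; how the slope enters (here linearly in the phase) and the use of an ECG-derived cycle length are assumptions on our part.

```python
import math
import torch

def helix_embedding(t, cycle_length, z_slack):
    """Map measurement times t (1-D tensor, seconds) to helix points in R^3.

    cycle_length: estimated cardiac-cycle duration from the ECG (seconds);
    z_slack: tuned slope of the helix; the radius is fixed to one.
    """
    phase = 2.0 * math.pi * t / cycle_length
    return torch.stack([phase.cos(), phase.sin(), z_slack * phase], dim=-1)
```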
We adopt the CNN architecture but modify it to fit our image resolution. The proposed architecture of the CNN up-samples the feature resolution in powers of two using nearest-neighbor interpolation. We adjust the input feature resolution and the number of up-sampling layers to obtain an output resolution that is close to our target resolution. The output image is then center-cropped to the target resolution. Between interpolation layers, two convolutional layers are applied. We tune the channel depth of the convolutional layers as a hyperparameter.
## 5 Experiments and Results
We assess the reconstruction quality and computational demands of FMLP on real data that we collected as well as on phantom data in the supplementary material. We compare to the KFMLP method and the time-dependent DIP, and find that FMLP performs as well as the t-DIP and significantly better than KFMLP in terms of image quality, but at a higher computational cost than both methods. Code to reproduce the results is available at [https://github.com/MLI-lab/cinemri](https://github.com/MLI-lab/cinemri).
### Datasets
We conduct experiments on datasets that were acquired on a 3 T Edition X scanner (Philips Healthcare, The Netherlands). The scans are taken of a healthy 30-year-old male volunteer. This study was approved by the local ethics committee and written informed consent was obtained.
Measurement data was acquired at two different resolutions \(2.27\times 2.26\,\mathrm{mm}^{2}\) (\(264\times 186\) acquisition matrix size) and \(1.25\times 1.26\,\mathrm{mm}^{2}\) (\(480\times 334\)), with \(10\,\mathrm{mm}\) and \(5\,\mathrm{mm}\) slice thickness, respectively. We refer to those as 'low-resolution high-SNR' and 'high-resolution' datasets. A third dataset was acquired at low resolution with a reduced (isotropic) slice thickness of \(2.25\,\mathrm{mm}\), decreasing the signal strength and the signal-to-noise ratio (SNR). This dataset is referred to as the 'low-resolution low-SNR' dataset. For each of those configurations, a breath-hold triggered scan and a free-breathing scan were taken. The acquisition length was set retrospectively for the free-breathing scan and we conduct experiments with \(4.0\,\mathrm{s}\), \(8.0\,\mathrm{s}\), and \(15.9\,\mathrm{s}\) of acquired data.
For the breath-hold scan, the data acquisition is triggered by an ECG such that measurements of similar cardiac phases are binned. The dynamics are then reconstructed with a sparsity-based reconstruction method for static MRI which the MRI scanner performs by default. The breath-hold reconstructions serve as a visual reference for the free-breathing reconstructions.
During the free-breathing scan, measurements are acquired continuously (ungated). We use a partial-Fourier Cartesian sampling pattern, where the k-space is fully sampled along the \(k_{\mathrm{x}}\)-dimension (frequency-encoding) and randomly under-sampled along the \(k_{\mathrm{y}}\)-dimension (phase-encoding). Thus, measurements are taken sequentially along \(k_{\mathrm{y}}\)-lines in the k-space. The measured \(k_{\mathrm{y}}\)-lines are binned into frames retrospectively. Further details on the measurement parameters are listed in Section E in the supplementary materials.
### Performance metrics and quality criteria
It is common practice to measure the image quality of reconstruction methods with full-reference image quality metrics, such as the structural similarity index measure (SSIM) [23] or the visual information fidelity (VIF) [15]. For our synthetically generated phantom dataset in the supplement (see Section C), we measure performance with those metrics.
For our original data (and original data in the context of real-time MRI in general), ground-truth images are not available. We therefore measure performance in terms of an estimate of the MSE as well as through visual comparisons.
Specifically, we visually compare the reconstructed images to the ECG-triggered breath-hold reconstructions obtained from the scanner by binning-based reconstruction methods using sparsity regularization. We focus on anatomic details within the region of interest, the heart, such as papillary muscles in the left ventricle that are rapidly changing shape during systole.
We estimate a normalized version of the mean squared error based on a hold-out set of randomly subsampled frequencies as follows. We randomly extract \(5\,\%\) of the measured \(k_{\text{y}}\)-lines for validation. The remaining \(k_{\text{y}}\)-lines are binned into frames \(\tau=1,\dots,T\) for training, such that the k-space of every frame contains \(N_{\text{lines}}=6\) \(k_{\text{y}}\)-lines. Each validation line \(\mathbf{y}_{v},v=1,\dots,V\) is assigned to the frame \(\tau_{v}\) that is closest in sample time. Let \(\hat{\mathbf{y}}_{v}\) be the reconstructed k-space lines predicted by a reconstruction algorithm (FMLP, KFMLP, and t-DIP in our setup). The lines are obtained by first applying the sensitivity maps and Fourier transform to the reconstructed images \(\hat{\mathbf{x}}(\sigma_{\tau_{v}})\) and then extracting the \(k_{\text{y}}\)-line corresponding to \(\mathbf{y}_{v}\). We estimate the signal-to-error ratio (SER) as a metric for quantifying data consistency:
\[\text{SER}=10\log_{10}\frac{\sum\limits_{v=1}^{V}\left\lVert\mathbf{y}_{v}\right \rVert_{2}^{2}}{\sum\limits_{v=1}^{V}\left\lVert\hat{\mathbf{y}}_{v}-\mathbf{y}_{v} \right\rVert_{2}^{2}}.\]
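The SER can be computed directly from the held-out lines, for example:

```python
def signal_to_error_ratio(y_val, y_pred):
    """SER in dB; y_val, y_pred are lists of complex tensors, one per line."""
    num = sum((y.abs() ** 2).sum() for y in y_val)
    den = sum(((yh - y).abs() ** 2).sum() for yh, y in zip(y_pred, y_val))
    return 10.0 * torch.log10(num / den)
```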
### Implementation details
We implemented the FMLP, KFMLP, and t-DIP using PyTorch. The models are optimized using Adam [1] with learning rate \(2\times 10^{-4}\) for the FMLP and the KFMLP, and learning rate \(1\times 10^{-4}\) for the t-DIP. The models are trained until no new SER high-score has been reached for 200 epochs. We evaluate the reconstruction quality at the epoch of maximum SER. The experiments are conducted on a server equipped with an RTX 6000 GPU with \(24\,\mathrm{GB}\) VRAM and an Intel(R) Core(TM) i9-9940X CPU with \(128\,\mathrm{GB}\) available RAM.
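The early-stopping rule on the validation SER can be sketched as follows, reusing the loss and SER functions above; `predict_validation_lines` is a hypothetical helper that re-simulates the held-out \(k_{\text{y}}\)-lines from the current reconstruction.

```python
optimizer = torch.optim.Adam(net.parameters(), lr=2e-4)  # 1e-4 for the t-DIP
best_ser, best_state, patience = -float("inf"), None, 0
while patience < 200:  # stop once no new SER high-score for 200 epochs
    optimizer.zero_grad()
    loss = reconstruction_loss(net, forward_ops, measurements, times, xy_grid)
    loss.backward()
    optimizer.step()
    ser = signal_to_error_ratio(y_val, predict_validation_lines(net))
    if ser > best_ser:
        best_ser, best_state, patience = ser, net.state_dict(), 0
    else:
        patience += 1
net.load_state_dict(best_state)  # evaluate at the epoch of maximum SER
```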
### Quantitative reconstruction quality analysis
In this section, we quantitatively compare the reconstruction quality of the FMLP, the KFMLP, and the t-DIP in terms of the estimated normalized MSE (specifically, in terms of the SER defined in the previous section).
**Different operating regimes.** We start by comparing the image reconstruction quality measured in terms of the SER in different operating regimes that feature different image resolutions and slice thicknesses. Specifically, we fit all three methods to the low-resolution high-SNR dataset, the low-resolution low-SNR dataset, and the high-resolution dataset. The hyper-parameters of the models are tuned on each dataset individually and the methods are trained on \(T=225\) frames.
The results in Table 1 show that the FMLP and the t-DIP achieve a similar SER with marginal differences in all evaluated operating regimes. While the t-DIP achieves a slightly higher SER than the FMLP on the low-resolution high-SNR dataset, the FMLP performs slightly better on the low-resolution low-SNR and the high-resolution datasets. The KFMLP, learning a representation of the k-space, achieves much lower SER scores than the FMLP or the t-DIP. The difference is substantial on all evaluated datasets.
**Performance as a function of acquisition length.** Next, we study the reconstruction quality for different measurement acquisition lengths. We measure the image quality in terms of the SER within the first \(4\,\mathrm{s}\) of reconstructed images, while training the methods on increasing acquisition lengths, i.e., on 4 s, 8 s, and 16 s of acquired data, respectively. The experiment is conducted on the low-resolution high-SNR dataset.
Figure 2 shows that the SER of the FMLP and the t-DIP improves when training the methods on more data, indicating that both methods not only take advantage of temporal correlations between adjacent frames but also over the entire acquisition length. The KFMLP, by contrast, does not improve substantially. Remarkably, the FMLP and the t-DIP improve by a similar margin even though the t-DIP uses prior information about the heart rate that is encoded by the angular frequency of the helix trajectory. Information about the cardiac phase might be advantageous for correlating frames over multiple cardiac cycles. The FMLP does not incorporate such prior but nevertheless improves at a similar rate as a function of the acquisition length.
### Visual quality assessment
We compare the reconstructed images of the FMLP, the KFMLP, and the t-DIP to the ECG-triggered breath-hold reconstructions that serve as a visual reference. The methods are compared on the low-resolution high-SNR, the low-resolution low-SNR, and the high-resolution dataset. For each dataset, we show reconstructions of four frames that have been selected to cover a cardiac cycle.
| dataset | FMLP | KFMLP | t-DIP |
| :-- | :-: | :-: | :-: |
| low-resolution, high-SNR | 17.16 | 12.53 | **17.19** |
| low-resolution, low-SNR | **10.62** | 8.64 | 10.59 |
| high-resolution | **9.00** | 6.51 | 8.97 |

Table 1: Maximum SER of the models on datasets in different operating regimes. FMLP and t-DIP perform similarly, while KFMLP performs significantly worse.
Figure 2: The validation SER within the first 4 s of acquisition time improves with increasing the amount of measurement data beyond 4 s. The reconstruction quality can be improved by training the model on a longer acquisition time. The models were trained on the low-resolution high-SNR dataset with the same configurations as in Figure 3 and \(z_{\text{slack}}=0.1,0.2\), and 0.4 for \(T=225\) (4 s), 450 (8 s), and 900 (16 s), respectively.
Figure 3: On the low-resolution high-SNR dataset, the image quality of the FMLP and the t-DIP is similar, whereby the FMLP recovers anatomic details such as the papillary muscles in the left ventricle (red arrow) more accurately. The reconstructions by the KFMLP are distorted by aliasing-like artifacts and fine-structured noise such that anatomic details are not well recognizable. The models were trained on \(T=225\) frames. The hyperparameter configurations are \(s_{\rm x}=s_{\rm y}=30\,\rm m^{-1}\), \(s_{\rm t}=1\,\rm s^{-1}\), 7 hidden layers, 512 neurons per hidden layer for the FMLP, \(s_{\rm x}=s_{\rm y}=15\), \(s_{\rm t}=1\,\rm s^{-1}\), 7 hidden layers, 512 neurons per hidden layer for the KFMLP, and 256 channels and \(z_{\rm slack}=0.1\) for the t-DIP.
The reconstructions in Figures 3, 4, and 5 show that FMLP performs on par with the t-DIP in terms of perceived image quality. Although the overall differences in quality are marginal, the FMLP recovers anatomic details slightly more accurately on the low-resolution high-SNR dataset. The differences are most apparent in small moving anatomic structures of the heart, such as the papillary muscles, see Figure 3. On the low-resolution low-SNR and the high-resolution datasets, the FMLP and the t-DIP exhibit similar artifacts, see Figures 4 and 5.
We find that the FMLP achieves a better image quality compared to the KFMLP on all datasets. On the low-resolution high-SNR dataset, the KFMLP suffers from aliasing-like artifacts that blur anatomic details, such as the papillary muscles, see Figure 3. On the low-resolution low-SNR dataset, the aliasing-like artifacts are superimposed by noise that further degrades the image quality compared to the FMLP (Figure 4). However, the noise of the KFMLP becomes most prominent on the high-resolution dataset, where it obscures any meaningful details in the cardiac region (Figure 5).
Figure 4: For an isotropic slice thickness (low-resolution low-SNR dataset), the reconstruction quality of all methods degrades and new artifacts are introduced. The FMLP and the t-DIP achieve a similar image quality and outperform the KFMLP. The models were trained on \(T=225\) frames. The hyperparameter configurations are: \(s_{\mathrm{x}}=s_{\mathrm{y}}=30\,\mathrm{m}^{-1}\), \(s_{\mathrm{t}}=2\,\mathrm{s}^{-1}\), 5 hidden layers, and 256 neurons per hidden layer for the FMLP, \(s_{\mathrm{x}}=s_{\mathrm{y}}=15\), \(s_{\mathrm{t}}=1\,\mathrm{s}^{-1}\), 7 hidden layers, and 512 neurons per hidden layer for the KFMLP, and 256 channels and \(z_{\mathrm{slack}}=0.5\) for the t-DIP.
### Computational cost
The exact computational costs depend on the hyperparameter configuration, but for most of our tested hyperparameter configurations, training the FMLP requires more epochs and takes longer to train per epoch than the t-DIP and the KFMLP. The KFMLP and the t-DIP reach the maximum SER in a similar amount of training time, whereas the t-DIP converges in fewer epochs.
The computational performance characteristics of the configurations with the highest SER are listed in Table 2. We report the model sizes, memory requirements during training, training time per epoch, and training time until the maximum SER is reached on the low-resolution high-SNR dataset and on the high-resolution dataset, each with \(T=225\) frames. The total training time is measured by the wall-clock time and includes data-loading, computation of the SER for validation, and logging the training progress. The time per epoch is measured by the wall-clock time over 100 epochs of training without additional overhead.
Figure 5: On the high-resolution dataset, the FMLP and the t-DIP achieve a similar reconstruction quality and both methods suffer from similar artifacts. The quality of the KFMLP is decreased substantially by noise-like artifacts. The models were trained on \(T=225\) frames. The hyperparameter configurations are: \(s_{\rm x}=s_{\rm y}=30\,\rm m^{-1}\), \(s_{\rm t}=1.0\,\rm s^{-1}\), 7 hidden layers, 512 neurons per hidden layer for the FMLP, \(s_{\rm x}=s_{\rm y}=10\), \(s_{\rm t}=1.0\,\rm s^{-1}\), 7 hidden layers, 512 neurons per hidden layer for the KFMLP, and 256 channels and \(z_{\rm slack}=0.5\) for the t-DIP.
We choose the hyperparameter configurations with maximum SER, as specified in Figures 3 and 5. Note that the t-DIP's CNN uses five up-sampling layers on the low-resolution dataset, whereas six up-sampling layers are required on the high-resolution dataset.
We note that the computational demands required for training scale differently with image resolution for the three methods. An advantage of the FMLP is that the network size can stay constant as the image resolution is increased. However, the network has to be evaluated for all pixels and, thus, the computational demands scale linearly with image resolution. The CNN of the t-DIP, by contrast, needs to be adapted to an increasing image resolution by adding up-sampling layers and convolutional layers. Thus, the number of trainable parameters increases with image resolution. However, doubling the image resolution merely requires adding one more stage of up-sampling and convolutional layers. Thus, the cost of evaluating the CNN scales efficiently with image resolution. The network size of the KFMLP can also be set independently of the image resolution. Moreover, the computational demands do not scale with the image resolution but with the number of measured coordinates. The network is evaluated on the full Cartesian grid only once after training.
## 6 Discussion and Conclusion
We proposed an untrained reconstruction method based on implicit networks, called FMLP, for cardiac real-time MRI. The method uses a ReLU-MLP with Fourier-feature inputs that encodes spatio-temporal coordinate vectors for representing an image of the beating heart. We evaluated the method on experimental datasets for 2D free-breathing cardiac real-time MRI covering different operating regimes, i.e., different image resolutions, slice thicknesses, and acquisition lengths.
We find that FMLP can achieve reconstruction quality on par with or slightly better than the best CNN-based untrained method (t-DIP), as measured with quantitative metrics (SER) and as determined through visual comparisons. FMLP tends to improve slightly faster as a function of the acquisition time (see Figure 2) and slightly surpassed t-DIP for longer acquisition times. The FMLP relies only on the implicit neural network for regularization.
We note that the similarity in performance to the t-DIP is somewhat expected since MLPs with Fourier-feature inputs and CNNs both have a bias towards fitting smooth images and low-frequency contents of the signal first [14, 21].
| res. | property | FMLP | KFMLP | t-DIP |
| :-- | :-- | :-: | :-: | :-: |
| low-res. | # parameters | \(1.91\times 10^{6}\) | \(1.93\times 10^{6}\) | \(5.94\times 10^{6}\) |
| low-res. | memory on GPU | \(3.55\,\mathrm{GB}\) | \(1.14\,\mathrm{GB}\) | \(1.94\,\mathrm{GB}\) |
| low-res. | time per epoch | \(19.92\,\mathrm{s}\) | \(4.06\,\mathrm{s}\) | \(8.56\,\mathrm{s}\) |
| low-res. | number of epochs | 744 | 309 | 252 |
| low-res. | total training time | \(510\,\mathrm{min}\) | \(35\,\mathrm{min}\) | \(42\,\mathrm{min}\) |
| high-res. | # parameters | \(1.91\times 10^{6}\) | \(1.93\times 10^{6}\) | \(7.12\times 10^{6}\) |
| high-res. | memory on GPU | \(9.21\,\mathrm{GB}\) | \(1.22\,\mathrm{GB}\) | \(4.03\,\mathrm{GB}\) |
| high-res. | time per epoch | \(63.82\,\mathrm{s}\) | \(4.11\,\mathrm{s}\) | \(26.65\,\mathrm{s}\) |
| high-res. | number of epochs | 1872 | 601 | 176 |
| high-res. | total training time | \(3237\,\mathrm{min}\) | \(110\,\mathrm{min}\) | \(91\,\mathrm{min}\) |

Table 2: Computational performance characteristics on the low-resolution high-SNR dataset (low-res.) and the high-resolution dataset (high-res.), each with \(T=225\) frames.
This implicit bias suppresses high-frequency aliasing artifacts, thereby interpolating the missing frequencies in the k-space and reconstructing the image.
While an MLP with Fourier-features is a good image model, it is not a good model for representing an image in the Fourier domain. We found that KFMLP, which relies on representing the signal in the Fourier domain with a Fourier-feature MLP, suffered from aliasing and noise-like artifacts that distorted the images considerably. Although the KFMLP learned to represent the measurements along the sampling trajectory well, it generalized poorly to frequencies that had not been measured. We note that the partial-Fourier random Cartesian sampling pattern used for the experiments may pose an especially difficult challenge to the reconstruction methods as a small subset of k-space lines is measured repetitively. Thus, a large fraction of the k-space is not measured at any time and needs to be interpolated with an efficient image model. Huang et al. [12] tested the NIK (the KFMLP is a variant of the NIK) on retrospective data with a golden-angle radial trajectory. On such data, the method did not suffer from strong artifacts compared to the KFMLP on our datasets.
We hasten to add that this imaging quality comes at a high computational cost. The FMLP is more expensive to train than the t-DIP and the KFMLP. However, we think that future research (such as pre-training the FMLP) can substantially improve the running time of the FMLP, and a different feature encoding altogether might also improve the running time.
A major advantage of the FMLP over the t-DIP and other approaches is its flexibility: to generalize to 3D data, we only need to add an additional spatial input.
## Acknowledgements
The authors would like to thank Kilian Weiss (Philips GmbH Market DACH, Hamburg, Germany) for helpful discussions. The work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) 11 - 456465471, 464123524.
|
2305.13591 | A Single Multi-Task Deep Neural Network with a Multi-Scale Feature
Aggregation Mechanism for Manipulation Relationship Reasoning in Robotic
Grasping | Grasping specific objects in complex and irregularly stacked scenes is still
challenging for robotics. Because the robot is not only required to identify
the object's grasping posture but also needs to reason the manipulation
relationship between the objects. In this paper, we propose a manipulation
relationship reasoning network with a multi-scale feature aggregation (MSFA)
mechanism for robot grasping tasks. MSFA aggregates high-level semantic
information and low-level spatial information in a cross-scale connection way
to improve the generalization ability of the model. Furthermore, to improve the
accuracy, we propose to use intersection features with rich location priors for
manipulation relationship reasoning. Experiments are validated in VMRD datasets
and real environments, respectively. The experimental results demonstrate that
our proposed method can accurately predict the manipulation relationship
between objects in the scene of multi-object stacking. Compared with previous
methods, it significantly improves reasoning speed and accuracy. | Mingshuai Dong, Yuxuan Bai, Shimin Wei, Xiuli Yu | 2023-05-23T01:46:25Z | http://arxiv.org/abs/2305.13591v1 | A Single Multi-Task Deep Neural Network with a Multi-Scale Feature Aggregation Mechanism for Manipulation Relationship Reasoning in Robotic Grasping
###### Abstract
Grasping specific objects in complex and irregularly stacked scenes is still challenging for robotics. Moreover, ensuring the safety of the robot itself and of its environment during grasping is an important prerequisite for deploying robots in practice. Therefore, the robot is not only required to identify the object's grasping posture but also to reason about the manipulation relationships between objects to improve interaction safety. In this paper, we propose a manipulation relationship reasoning network with a multi-scale feature aggregation (MSFA) mechanism for robot grasping tasks. MSFA aggregates high-level semantic information and low-level spatial information in a cross-scale connection way to improve the environment understanding ability of the model. Furthermore, to improve the accuracy, we propose to use intersection features with rich location priors for manipulation relationship reasoning. Experiments are validated on the VMRD dataset and in real environments, respectively. The experimental results demonstrate that our method achieves state-of-the-art performance on the VMRD dataset. Furthermore, the experimental results in the real environment prove that our proposed method has generalization ability and can be applied to actual scenarios.
Keywords: grasp detection, manipulation relationship, grasping order.
## I Introduction
With the development of robot technology, robots can stably recognize and grasp objects in uncomplicated scenes [1, 2, 3, 4]. However, grasping specific objects in cluttered or stacked complex scenes remains a challenging problem [5]. As shown in Fig. 1, the necessary conditions for a robot to achieve safe grasping include perceiving objects and their grasping positions in a complex working scene, understanding the positional relationships between objects, and estimating a reasonable grasping strategy. Humans can quickly grasp target objects in any scene because they can understand the environment, easily estimate the position relationships between objects, and make reasonable decisions. Yet this is a challenging task for robots. Therefore, it is essential to improve the environmental understanding ability of the robot so that it can reason about the position relationships between objects in the stacked state and formulate a reasonable grasping strategy.
At present, researchers have achieved satisfactory results in vision-based robot grasping in scenes with single or multiple target objects. The structure of such scenes is straightforward, with no mutual occlusion or stacking between objects; hence, the model only needs to focus on the grasping positions of the objects, and the task complexity is low. For complex stacked-object scenes, [6] proposed a multi-task visual manipulation relationship reasoning network (VMRN), which adds a grasp detection branch and a manipulation relationship reasoning branch on top of the Faster-RCNN network. VMRN converts the manipulation relation reasoning task into a classification task: according to the target locations estimated by the object detection branch, the features of each object pair are cropped from a common feature layer and classified. [7] proposes an end-to-end model based on the YOLO network, which can simultaneously realize object detection, grasp detection, and manipulation relationship inference. [8] uses graph neural networks to model the features between object pairs and predict the manipulation relationships between objects. These methods have achieved satisfactory results.
Different from the above methods, we propose to further improve the model's performance by strengthening its ability to understand the environment. First, we enrich the feature representation ability of the feature map, helping the model understand the location relationships between different objects through richer fine-grained and semantic features. In this paper, we propose a multi-scale feature aggregation (MSFA) module that fuses the information carried by features of different dimensions through top-down and bottom-up feature fusion, enriching the representation ability of the feature map. Second, we utilize intersection features with rich location priors to assist the model in estimating the manipulation relationships between different objects. In general, if there is no intersection between two objects, there can be no occlusion or stacking relation between them. This method therefore differs from the complex relationship filtering network proposed by [8]: we improve the model's ability to discriminate manipulation relationships through simple intersection features.
In summary, the main contributions of this paper are as follows:
* A multi-task neural network based on multi-scale feature fusion is proposed to directly reason about the object relationships and the manipulation order.
* In order to improve the efficiency and accuracy of the manipulation relationship reasoning network, we propose to use intersection features with location priors for inference; these have a relation-filtering property that prunes object pairs that are unlikely to have a relationship.
The rest of this paper is organized as follows: Section II reviews the state of the art in robotic manipulation relationship reasoning. Section III details our proposed multi-task manipulation relationship inference network. In Section IV, we present experiments to validate our proposed components as well as the performance of the model, and finally, in Section V, conclusions and discussions are drawn.
## II Related work
This section introduces state-of-the-art research on grasp detection and manipulation relationship reasoning in stacking scenarios.
### _Robotic grasp detection_
Robotic grasping has always been the focus of researchers. In particular, vision-based deep learning technology is increasingly applied to robot grasping and achieves satisfactory results in specific scenarios. For example, [1, 2, 3] transformed the grasp detection problem into a problem similar to object detection and achieved state-of-the-art results on the standard single-object Cornell grasp detection dataset. The above method uses a two-stage detection framework commonly used in object detection. In the first stage, candidate grasp configurations are generated. In the second stage, the offset between the grasp candidate and the ground truth is predicted. On the contrary, [4] proposed a one-stage model to use direct regression to predict the possible grasps in the input image. The direct regression method has a simple network structure but can only predict one grasp configuration per image. [9] proposed a region-proposal-based grasp detection network, which first generates multiple reference anchors and then predicts the grasp configuration based on the features of these anchors. This method improves the perception ability of the one-stage model to the cluttered scene. [10] proposed a two-stream convolutional neural network for simultaneous segmentation and grasp detection. This method uses a fully convolutional neural network to predict the grasp configuration for each pixel. [11] and [12] proposed introducing the Transformer architecture into the grasp detection model. Based on the powerful attention mechanism and global modeling ability of the Transformer architecture, the performance of the grasp detection model in the cluttered environment is improved.
The aforementioned methods have achieved satisfactory results in single or multi-objective environments. However, these models are difficult to apply to a random and complex stacking scenario. Therefore, applying these methods in industry or daily life is challenging. In addition, to expand the application scenarios of robots, it is necessary to have the ability to perceive various stacking scenes and reason about the spatial position relationship of objects. And according to the objects' spatial position relationship, the robot's manipulation sequence is further deduced.
### _Manipulation relationship reasoning_
The reasoning of visual relations based on convolutional neural networks has achieved good performance [13, 14, 15]. However, vision-based manipulation relationship reasoning in robotics is still developing rapidly. [6] proposed to define the relationship that guides the orderly grasping of the robot as the manipulation relationship. The robot grasps objects correctly and efficiently in a complex stacked scene according to the manipulation relationship and does not cause damage to other targets. Moreover, [6] proposed a visual manipulation relationship reasoning network, which first predicts the position relationships between objects in the stacked state and then constructs the robot manipulation relationship tree according to the position relationships. In practical applications, robots sequentially grasp objects in the working environment according to the manipulation relation tree.
Fig. 1: Environment perception and decision-making are essential for robots to complete grasping operations in stacked scenes. It can recognize the objects in the environment, estimate the grasp position of each object and the position relationship between the objects, and make a reasonable grasp strategy.
[7] proposed a single-stage multi-task deep neural network that can simultaneously perform object detection, grasp detection, and object relation reasoning. Compared with the two-stage model of [6], the proposed method has higher computational efficiency. Different from [6] and [7], [8] proposes to represent the positional relationship between objects in the form of a graph, and a graph network is introduced to predict the spatial position relationships of objects in the robot working scene. At the same time, in order to further reduce the amount of computation in the model reasoning stage, [8] designed a relation filtering network to eliminate irrelevant object pairs before model reasoning.
According to the above studies, we can see that although there have been some recent works addressing visual manipulation relationship reasoning in object-stacked environments, it remains challenging to improve the robot's ability to understand the environment and to recognize the positional relationships of objects in the input scene more accurately and effectively.
## III Method
In this section, we first introduce the grasp representation and the relation representation methods used in this work. Then we give the details of the overall structure of our proposed model, including the detection and reasoning modules.
### _Problem description_
_Grasp Representation_: [4, 16] and other grasp detection works have proved that the 5D grasp configuration representation method \(\{x,y,w,h,\theta\}\) can be widely used in robotic grasping tasks with parallel grippers. This representation is effective for robotic grasp scenarios that do not consider object categories. For ordered grasps or grasps with a specified object category, additional processing is needed to match the grasp representation to the correct object category. However, most of the current matching methods use post-processing, which reduces the running efficiency of the model. Different from the above methods, in this paper, we propose a 6D grasp representation \(\{x,y,w,h,\theta,cls\}\) that adds category information to the original 5D grasp representation. Furthermore, the model uses the features of the grasp position to synchronously predict the category attribute of the grasp position in the prediction process. Thus, the structure of the model can be simplified, and the efficiency of the model can be improved.
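As a sketch, the 6D representation can be written as a plain record; the field names mirror \(\{x,y,w,h,\theta,cls\}\) and are otherwise our own.

```python
from dataclasses import dataclass

@dataclass
class Grasp6D:
    """Grasp rectangle center (x, y), size (w, h), rotation angle theta in
    degrees within (-90, 90), and the class id of the object it belongs to."""
    x: float
    y: float
    w: float
    h: float
    theta: float
    cls: int
```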
_Positional Relation Representation_: Constructing reasonable manipulation relationships in complex stacked scenes is necessary to guide the robot to grasp objects sequentially without causing potential damage to other objects. Inspired by [6, 7, 8, 17], we also use the following three visual manipulation relations: object A is on object B (_on_), object A is under object B (_under_), and object A and object B have no relationships (_no_rel_). Through the manipulation relationship between the objects, we constructed the manipulation relationship tree to guide the robot to grasp the objects in the workspace safely and orderly.
### _Architecture_
Fig. 2 shows the architecture of our proposed model, which is a multi-task deep neural network with three parts: the backbone feature extractor, the detection heads, and the reasoning head. In this network, we choose ResNet101 and VGG16 as backbone feature extractors to extract the features of the RGB image.
Fig. 2: The structure of our proposed model. It includes four parts: the backbone feature extractor, the grasp detector, the object detector, and the manipulation relationship reasoner based on the multi-scale feature aggregation module. Feature extractors usually use ResNet or VGG network for feature extraction and output feature maps. The object detector and the grasp detector are used to detect the object in the input image and the grasp position of the object, respectively. The manipulation relationship reasoning module uses multi-scale aggregated features to predict the positional relationship between objects and the manipulation order.
Then the detection and reasoning branches use the extracted features to perform grasp detection, object detection, and manipulation relationship reasoning, respectively. To obtain satisfactory object detection accuracy for reasoning about the manipulation relationships, we use the RPN structure from Faster R-CNN [18] to extract object proposals. In the grasp detection branch, we use a single-stage prediction method to predict the grasp configuration directly from the extracted features. For manipulation relationship reasoning, we propose a multi-scale feature aggregation mechanism to aggregate multi-dimensional features for manipulation relation prediction. The following is a detailed description of each part of our proposed model.
_Object Detection_: We use the Faster R-CNN architecture, with its higher accuracy, for the object detection task. The object detection process therefore consists of two stages. First, the RPN estimates regions of interest (ROIs) where objects may exist in the feature map and resizes them to the same size through a pooling operation. Then, the object detector takes a mini-batch of pooled ROI features as input. A classifier and a regressor without hidden layers predict the positions of the object bounding boxes and the classification results.
_Grasp Detection_: Different from [6], we do not take the ROI-pooled features as the input of the grasp detector; instead, we feed the grasp detector the features extracted by the shared feature extractor, since the axis-aligned ROI has only limited ability to filter out background features. In addition, to attribute each grasp position to an object, we directly predict the target object corresponding to the grasp position, which does not rely on the category information of the ROI.
In order to estimate the grasp positions of the objects in the robot working scene, we first divide the feature map fed to the grasp detector into W×H grids. Inspired by our previous work [12], each pixel on the feature map corresponds to one grid cell. We place three anchors with different sizes at each grid point as priors. The grasp detector evaluates the offset and confidence of each prior anchor with respect to the grasp position and predicts the object class corresponding to the grasp position. In addition, we adopt a classification approach to predict the rotation angle of the parallel gripper when the robot performs a grasp: the grasp angle in the range (-90, 90) is divided equally into 19 categories. Therefore, in our work, the grasp detector head consists of three parts: a regression head for predicting the grasp position, a classification head for angle prediction and confidence evaluation, and a classification head for the predicted class.
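The angle discretization used by the classification head can be sketched as follows; the bin-center decoding is our assumption.

```python
def angle_to_bin(theta, n_bins=19, lo=-90.0, hi=90.0):
    """Discretize a grasp angle in (-90, 90) into one of 19 equal classes."""
    idx = int((theta - lo) / (hi - lo) * n_bins)
    return min(max(idx, 0), n_bins - 1)

def bin_to_angle(idx, n_bins=19, lo=-90.0, hi=90.0):
    """Map a class index back to the center angle of its bin."""
    return lo + (idx + 0.5) * (hi - lo) / n_bins
```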
_Manipulation Relationship Reasoning_: In our work, we aggregate features at three different scales for robot manipulation relationship reasoning. For an input RGB image of 600×600, we aggregate feature maps of sizes 300×300, 100×100, and 40×40. The aggregated features include high-resolution fine-grained features and high-dimensional features with rich semantics. High-resolution fine-grained features guide the model to learn more refined local features between different objects, while low-resolution, high-dimensional features guide the model to pay more attention to global structure and to learn the spatial arrangement of the objects in the input scene as a whole. Within this feature aggregation module, we perform two fusion passes, one top-down and one bottom-up.
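A minimal PyTorch sketch of such a cross-scale aggregation is shown below. The channel widths, the 1×1 projections, and returning the mid-scale map are illustrative assumptions; only the overall top-down/bottom-up fusion pattern follows the description above.

```python
import torch
import torch.nn.functional as F

class MSFA(torch.nn.Module):
    """Sketch of multi-scale feature aggregation over three feature scales."""
    def __init__(self, chans=(128, 512, 1024), width=256):
        super().__init__()
        # Project each scale to a common channel width before fusing.
        self.proj = torch.nn.ModuleList(torch.nn.Conv2d(c, width, 1) for c in chans)
        self.smooth = torch.nn.Conv2d(width, width, 3, padding=1)

    def forward(self, feats):  # feats ordered high-res -> low-res (e.g. 300, 100, 40)
        f = [p(x) for p, x in zip(self.proj, feats)]
        for i in range(len(f) - 2, -1, -1):   # top-down pass: upsample coarse, add
            f[i] = f[i] + F.interpolate(f[i + 1], size=f[i].shape[-2:], mode="nearest")
        for i in range(1, len(f)):            # bottom-up pass: downsample fine, add
            f[i] = f[i] + F.adaptive_max_pool2d(f[i - 1], f[i].shape[-2:])
        return self.smooth(f[1])              # aggregated mid-scale map (an assumption)
```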
To reason about the manipulation relationships of the objects in the robot working scene, we use the results of the object detector to crop the feature map and use the cropped features to make predictions. Different from the Object Pairing Pooling Layer proposed in [5] to obtain the object-pair features, we additionally include the intersection features between object pairs. In contrast to union features, intersection features do not exist for all pairs of objects: when two objects are far away from each other or have no stacking relationship, no prominent intersection area between them appears in the 2D RGB image. From this particular property we conclude that the intersection feature can guide the model to filter out pairs of objects for which no stacking relationship exists. Hence, adding intersection features improves the model's ability to discriminate manipulation relationships between objects.
After cropping the features using the object bounding boxes, we use ROI Pooling to adaptively resize the features to the same size. The object-pair features we use include the features of object 1, the features of object 2, the union features, and the intersection features. Note in particular that the object pair (O1, O2) is different from the object pair (O2, O1), and their manipulation relations are not interchangeable. As shown in Fig. 2, in the manipulation relationship reasoning stage, the pooled features are passed through convolutional layers, and finally the manipulation relationship of the object pair is classified by a fully connected network.
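The union and intersection boxes used to crop the pair features can be computed as below; returning `None` when boxes do not overlap reflects the filtering property discussed above.

```python
def pair_boxes(box_a, box_b):
    """Union and intersection boxes for an ordered object pair (x1, y1, x2, y2).

    Returns None for the intersection when the boxes do not overlap, which is
    exactly the case where a stacking relation can be ruled out.
    """
    union = (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
             max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = (ix1, iy1, ix2, iy2) if ix1 < ix2 and iy1 < iy2 else None
    return union, inter
```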
After obtaining the manipulation relations of each object pair in the robot working scene, we organize the manipulation relations between objects in the whole scene into a relation tree, as shown in Fig. 2. When the robot wants to grasp an object in the scene, it can check in the manipulation tree whether the object has child (leaf) nodes. If so, the robot should remove those leaf nodes first so as not to damage other objects.
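Given the predicted pairwise relations, selecting the objects that can be grasped safely reduces to finding the nodes with nothing on top of them. A sketch, assuming relations are given as (top, bottom) pairs for the _on_ relation:

```python
def graspable_objects(objects, relations):
    """Return objects with nothing on top of them, given (top, bottom) edges
    meaning 'top lies on bottom'; these are the safely graspable objects."""
    has_object_on_top = {o: False for o in objects}
    for top, bottom in relations:      # edge: `top` lies on `bottom`
        has_object_on_top[bottom] = True
    return [o for o, covered in has_object_on_top.items() if not covered]
```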
### _Loss function_
The loss function used by our network during training consists of three parts: object detection loss \(L_{O}\), grasp detection loss \(L_{G}\), and manipulation relation reasoning loss \(L_{R}\).
The object detection loss \(L_{O}\) includes Smooth L1 loss for supervised object location training and cross-entropy loss for supervised object category training, as shown in Eq.(1) and Eq.(2).
\[Smooth\_L1(\{gt\},\{pre\})=\frac{1}{n}\sum_{i=1}^{n}\begin{cases}0.5\times(gt_{i}-pre_{i})^{2},&\text{if }\left|gt_{i}-pre_{i}\right|<1\\ \left|gt_{i}-pre_{i}\right|-0.5,&\text{otherwise}\end{cases} \tag{1}\]

\[H(gt,pre)=-\sum_{i=1}^{n}gt_{i}\log(pre_{i}) \tag{2}\]
\[L_{O}=Smooth\_L1(\{gt\},\{pre\})+H(gt,pre) \tag{3}\]
We apply BCE loss to evaluate the deviation between the ground truth and the predicted value in the grasp detection loss, as shown in Eq. (4).
\[L_{G}=-\sum_{i=1}^{m}gt_{i}\log(pre_{i})+(1-gt_{i})\log(1-pre_{i}) \tag{4}\]
Where \(gt\) is the ground truth of the object, \(pre\) is the prediction result of the model, \(n\) represents the number of objects in the image, and \(m\) describes the number of grasp configurations in the input image.
In addition, we use a multi-class cross-entropy loss to supervise the training of the manipulated relation prediction branch.
\[L_{R}=-\sum_{(Obj_{i},Obj_{j})}\log(p_{r}^{(Obj_{i},Obj_{j})}) \tag{5}\]
Where \(p_{r}^{(Obj_{i},Obj_{j})}\) is the probability that the manipulation relation between the object pair \((Obj_{i},Obj_{j})\) belongs to class \(r\), and \(r\in\{1,2,3\}\) is the ground-truth manipulation relation of the pair \((Obj_{i},Obj_{j})\).
In summary, during network training we define the overall loss of the model as \(L_{total}\), as shown in Eq. (6).
\[L_{total}=L_{O}+\alpha L_{G}+\beta L_{R} \tag{6}\]
In order to balance the training weights of the model, we set both \(\alpha\) and \(\beta\) to 5 during our experiments.
## IV Experimental setup
In this section, we mainly introduce the details of our experiments during training and testing, including the dataset used in the experiment process, the dataset's augmentation method, the experimental platform's details, and the model's evaluation metrics.
### _Dataset_
We used the VMRD dataset proposed in [19] to train the model. In this dataset, each image contains multiple objects, and there are occlusions and stacks between objects. The dataset consists of 4683 images, where objects in each image are accurately labeled with location, category, grasp configuration, and manipulation relationships between objects. The training set and test set are set to 4233 and 450 images, respectively, in this dataset.
During the training process, we take advantage of online random data augmentation to prevent overfitting and improve the generalization ability of the model for different working scenarios. Data augmentation methods include image scaling, flipping, and gamut transformation. Hence, the images fed into the model in each training iteration are random and different. During model testing, we do not augment the input image but only resize the image to a fixed size.
### _Implementation details_
We use Faster R-CNN to train the object detection module, where the backbone feature extractor adopts pre-trained VGG16 or ResNet101. In addition, a two-stage scheme is used to train our network: we first train the feature extraction and object detection modules, and then fix their parameters to train the manipulation relation reasoning and grasp detection modules synchronously.
Our model is implemented using the PyTorch deep learning framework and trained on an RTX 2080 Ti GPU with 11 GB of memory. During model training, we set the batch size to 8 and the learning rate to 0.001, which is divided by 10 every 10,000 iterations. Moreover, the object detection module is trained for 100 epochs separately, and the grasp detection and manipulation relation reasoning modules are trained together for 200 epochs.
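The learning-rate schedule can be expressed with a standard step scheduler. The optimizer type and the helper names below are assumptions, as the text only specifies the initial rate and the decay rule.

```python
import torch

# Optimizer choice is an assumption (the paper only gives the schedule).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# Divide the learning rate by 10 every 10,000 iterations, as described above.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10000, gamma=0.1)

for iteration, batch in enumerate(data_loader):
    loss = model_loss(model, batch)  # hypothetical helper computing L_total
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                 # stepped per iteration, not per epoch
```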
### _Metrics_
In our model, we evaluate the performance of each task separately using different methods. For the object detection task, we use mAP to measure the detection results over all object classes. In the grasp detection task, we use the rectangle metric to validate the grasp detection results of the model. The rectangle metric contains two parts: a) the Jaccard index between the predicted grasp rectangle and the ground truth is greater than 25%; b) the difference between the angle of the predicted grasp rectangle and the ground truth is within 30\({}^{\circ}\). We consider a prediction correct when both criteria are satisfied simultaneously.
In order to verify the performance of the manipulation relation reasoning branch, inspired by [8], we also use the following three metrics:
**OR** (Object-based Recall): This metric is used to evaluate the recall of the model prediction results, which we consider correct when both the category of a pair of objects as well as the manipulation relationship between them are predicted correctly.
**OP** (Object-based Precision): The average precision of the three manipulation relationships between object pairs predicted by the model.
**IA** (Image-based Accuracy): This metric is the evaluation accuracy based on all objects in the image. The prediction of an image is considered correct when all object categories in the image are predicted correctly and all manipulation relationships between object pairs are predicted correctly.
## V Results and analysis
In this section, we first describe the details and results of ablation experiments to demonstrate the effectiveness of our proposed method. We then set up comparison experiments to compare our results with state-of-the-art methods. Finally, we design physical grasping experiments in stacked scenes to verify the performance of our model.
### _Ablation study_
Table I summarizes the results of our ablation studies on the VMRD test dataset. We utilize the VGG backbone to verify the effectiveness of our proposed improvements through a series of self-comparison experiments. In experiment A, the multi-scale feature aggregation (MSFA) module and intersection features were not used for manipulation relationship reasoning. In experiment B, the MSFA module was added to verify the contribution of multi-scale features to the model performance. Experiment C adds the intersection features of object pairs to the inference of manipulation relations on top of Experiment A. Finally, experiment D shows the performance of our proposed model.
Combining the results of Experiment A and Experiment B, we can see that after the model uses the MSFA module, the OR index improves by 2.7% and the IA index improves by 5.4%. We attribute this to the MSFA module aggregating low-dimensional and high-dimensional features with richer fine-grained information than the feature map used by the original model for manipulation relationship reasoning. The aggregated feature map thus has a more powerful representation ability and can better capture the features between object pairs, thereby improving the model's performance.
Comparing Experiments A and C, we can see that adding intersection features to the manipulation relation inference process leads to a slight rise in the model's performance on the OR and IA metrics. The reason is that intersection features help the model improve the recognition accuracy for object pairs with no relationship (_no_rel_), while contributing little to reasoning about the other manipulation relationships. Even so, this is sufficient to demonstrate that the positional priors between objects contained in the intersection features contribute to reasoning about the manipulation relationships.
Experiment D shows the experimental results of our proposed model, which uses both the MSFA module and the intersection features. Compared with experiment A, we can find that the OR index of the model is improved by 3.2%, and the IA index is improved by 6.3% after adding the MSFA module and intersection features. This demonstrates that our proposed method improves the manipulation relation inference performance of the model.
### _Results for manipulating relationship reasoning_
Table II presents the results of different manipulation relationship reasoning models. Our method achieves results comparable to the method proposed in [5]; in particular, on the OR metric our accuracy is 1.7% higher. Compared to VMRN [6], which has a similar network structure, our method achieves better performance on all indicators with both backbone structures. We conclude that multi-scale feature aggregation better learns the location features between objects in the image and thus infers the manipulation relationships between objects more accurately.
Compared with GVMRN and GVMRN-RF [8], which use graph networks, our method outperforms both models on the OR and OP metrics with the ResNet101 backbone. With the VGG16 backbone, our method is better than GVMRN and GVMRN-RF on OR, OP, IA, and mAP. This further demonstrates the effectiveness of our feature aggregation method and of involving intersection features in manipulation relationship reasoning. The running time of our model is slightly longer than that of VMRN and GVMRN-RF, but this does not prevent real-time operation of the robot.
Table III presents the reasoning results of the manipulation relationships for different numbers of stacked objects. When there are only two objects in the robot working scene, the accuracy of our model's manipulation relationship inference is only 83.1%, lower than the GVMRN and GVMRN-RF models proposed in [8]. When the number of objects in the scene is four or more, however, the accuracy of our model on the test set is much better than that of GVMRN and GVMRN-RF. This trend in accuracy is similar to that of the Multi-task CNN model proposed in [5], which we attribute to our use of a similar network structure and the same test set as [5]. Compared with the Multi-task CNN method, our proposed method substantially improves the inference accuracy of manipulation relations, especially in scenes with more than three objects.
The three examples in Fig. 3 demonstrate the output of our proposed reasoning network for visual manipulation relations. The results show that our model accurately predicts the manipulation relationships between objects in scenes with different numbers of objects, and the predicted relationships can be assembled into a manipulation relation tree to guide robotic grasping. The last row shows the grasping positions predicted by the model for the different objects.
\begin{table}
\begin{tabular}{c c c c c} \hline & **MSFA** & **Intersection feature** & **OR (\%)** & **IA (\%)** \\ \hline A & \(\times\) & \(\times\) & 87.3 & 68.4 \\ B & \(\checkmark\) & \(\times\) & 90.0 & 73.8 \\ C & \(\times\) & \(\checkmark\) & 88.2 & 69.0 \\ D & \(\checkmark\) & \(\checkmark\) & 90.5 & 74.7 \\ \hline \end{tabular}
\end{table} TABLE I: Self-comparison experiments on the VMRD dataset under different conditions.
Fig. 3: Experiment results of our proposed model on the VMRD dataset. The first row shows the object detection results in scenes with different numbers of objects. The second row shows the predicted manipulation relationships. The third row shows the manipulation relation tree estimated from the inferred relationships. The fourth row shows the grasp detection results.
### _Robot grasping experiment_
To verify the performance of our proposed model in real scenarios, we deploy it on a real grasping platform. The overall layout of the platform is shown in Fig. 4(A). The experimental platform contains a 6-DOF AUBO-i5 robotic arm, a RealSense D435i depth camera, and a parallel-plate gripper. Fig. 4(B) illustrates a portion of the scenes used in the real robot experiments. Each scene contains three to five randomly stacked objects, which are used to verify the performance and generalization ability of our model in real scenes.
As shown in Fig. 5, the sorting process of the robot in the multi-object stacking scene is constrained by the spatial positional relationships and categories of the objects. The robot detects the objects in the working scene, infers the positional relationships among them to generate a manipulation relation tree, and then grasps consecutively from top to bottom as recommended by the tree.
For the scenario presented in Fig. 6, the grasping target
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Backbone**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**mAP(\%)**} & \multirow{2}{*}{**OR(\%)**} & \multirow{2}{*}{**OP(\%)**} & \multirow{2}{*}{**IA(\%)**} & \multirow{2}{*}{**Time (ms)**} \\ \hline \multirow{8}{*}{ResNet101} & Multi-task CNN[5] & - & 86.0 & **88.8** & 67.1 & - \\ & VMRN[6] & 95.4 & 85.4 & 85.5 & 65.8 & 98 \\ & GVMRN[8] & 94.5 & 86.3 & 87.1 & 68.0 & 102 \\ & GVMRN-RF[8] & 94.6 & 87.4 & 87.9 & **69.3** & **67** \\ & **Ours** & **96.3** & **87.7** & 88.7 & 67.1 & 110 \\ \hline \multirow{8}{*}{VGG16} & VMRN[6] & 94.2 & 86.3 & 88.8 & 68.4 & 71 \\ & GVMRN[8] & 95.4 & 87.3 & 89.6 & 69.7 & 92 \\ \cline{1-1} & GVMRN-RF[8] & 95.4 & 89.1 & 89.7 & 70.9 & **58** \\ \cline{1-1} & **Ours** & **97.4** & **90.0** & **89.8** & **73.8** & 90 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Performance summary of different manipulation relationship reasoning methods on the VMRD dataset.
Fig. 4: Experimental environment for robot grasping. (A) the physical platform used for robotic grasping experiments, and (B) some examples of grasping experiment scenarios.
Fig. 5: The sorting process of the robot in a stacking scenario.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Backbone**} & \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**Image-based accuracy (IA)**} \\ \cline{3-6} & & \multicolumn{4}{c}{**Object number per image**} \\ \cline{3-6} & & **2** & **3** & **4** & **5** \\ \hline \multirow{8}{*}{ResNet101} & Multi-task CNN[5] & 67.1 & 87.7 & 64.1 & 56.6 & 72.9 \\ & VMRN[6] & 65.8 & - & - & - & - \\ & GVMRN[8] & 68.0 & 90.0 & 68.8 & 60.3 & 56.2 \\ & GVMRN-RF[8] & 69.3 & **91.4** & **69.5** & 62.1 & 58.9 \\ & **Ours** & **73.0** & 82.6 & 68.1 & **70.1** & **71.4** \\ \hline \multirow{8}{*}{VGG16} & VMRN[6] & 68.4 & - & - & - & - \\ & GVMRN[8] & 69.7 & 91.4 & 69.9 & 62.9 & 58.9 \\ \cline{1-1} & GVMRN-RF[8] & 70.9 & **92.9** & 70.7 & 64.6 & 61.6 \\ \cline{1-1} & **Ours** & **73.8** & 83.1 & **68.4** & **70.6** & **73.1** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Summary of performance of different models in scenes with different numbers of objects.
of the robot is the tape. However, the tape is pressed under the toothpaste, and a box is pressed on top of the toothpaste. If the robot grasps the tape directly, the toothpaste and the box will be displaced unpredictably; if they were fragile objects, such a blind operation could damage them. Hence, our model reasons that the correct action sequence in this scene is to grasp the box first, then the toothpaste, and finally the tape, as shown in Fig. 6. From these experiments, we conclude that our model can infer the positional relationships between objects in multi-object stacked scenes and guide the robot to grasp the target object in a reasonable manipulation sequence.
## VI Conclusion
In this paper, we propose a multi-task model for multi-object stacked scenes that reasons about the positional relationships between objects, formulates grasping plans, and estimates the grasping positions of different objects, thereby guiding the robot to grasp autonomously. In addition, to improve the model's understanding of the positional relationships between objects, we propose a multi-scale feature aggregation network that enriches the representational ability of the model, and we introduce intersection features as a positional-relationship prior. Compared with existing models, our method improves the reasoning accuracy by 4.5%, which demonstrates its effectiveness. Moreover, we deploy the proposed model in a physical experimental scenario to further verify its usability.
|
2310.11130 | Topological Expressivity of ReLU Neural Networks | We study the expressivity of ReLU neural networks in the setting of a binary
classification problem from a topological perspective. Recently, empirical
studies showed that neural networks operate by changing topology, transforming
a topologically complicated data set into a topologically simpler one as it
passes through the layers. This topological simplification has been measured by
Betti numbers, which are algebraic invariants of a topological space. We use
the same measure to establish lower and upper bounds on the topological
simplification a ReLU neural network can achieve with a given architecture. We
therefore contribute to a better understanding of the expressivity of ReLU
neural networks in the context of binary classification problems by shedding
light on their ability to capture the underlying topological structure of the
data. In particular the results show that deep ReLU neural networks are
exponentially more powerful than shallow ones in terms of topological
simplification. This provides a mathematically rigorous explanation why deeper
networks are better equipped to handle complex and topologically rich data
sets. | Ekin Ergen, Moritz Grillo | 2023-10-17T10:28:00Z | http://arxiv.org/abs/2310.11130v2 | # Topological Expressivity of ReLU Neural Networks
###### Abstract
We study the expressivity of ReLU neural networks in the setting of a binary classification problem from a topological perspective. Recently, empirical studies showed that neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simpler one as it passes through the layers. This topological simplification has been measured by Betti numbers, which are algebraic invariants of a topological space. We use the same measure to establish lower and upper bounds on the topological simplification a ReLU neural network can achieve with a given architecture. We therefore contribute to a better understanding of the expressivity of ReLU neural networks in the context of binary classification problems by shedding light on their ability to capture the underlying topological structure of the data. In particular the results show that deep ReLU neural networks are exponentially more powerful than shallow ones in terms of topological simplification. This provides a mathematically rigorous explanation why deeper networks are better equipped to handle complex and topologically rich datasets.
## 1 Introduction
Neural networks are at the core of many AI applications. A crucial task when working with neural networks is selecting the appropriate architecture to effectively tackle a given problem. Therefore, it is of fundamental interest to understand the range of problems that can be solved by neural networks with a given architecture, i.e., its _expressivity_.
In recent years, many theoretical findings have shed light on the expressivity of neural networks. Universal approximation theorems (Cybenko, 1989; Hornik, 1991) state that one hidden layer is already sufficient to approximate any continuous function with arbitrary accuracy. On the other hand, it is known that deep networks can represent more complex functions than their shallow counterparts, see e.g. (Telgarsky, 2016; Eldan and Shamir, 2016; Arora et al., 2018).
The measure of expressivity of a neural network should always be related to the problem it has to solve. A common scenario in which neural networks are employed is the binary classification problem, where the network serves as a classifier for a binary labeled dataset. Since topological data analysis has revealed that data often has nontrivial topology, it is important to consider the topological structure of the data when dealing with a binary classification problem. Naitzat et al. (2020) show through empirical methods that neural networks operate topologically, transforming a topologically complicated dataset into a topologically simple one as it passes through the layers. Given a binary labeled dataset, they assume that the positively labeled and the negatively labeled points are sampled from topological spaces \(M_{a}\) and \(M_{b}\) respectively that
are entangled with each other in a nontrivial way. Their experiments show that a well-trained neural network gradually disentangles the topological spaces until they are linearly separable in the end, i.e., the space \(M_{b}\) is mapped to the positive real line and \(M_{a}\) to the negative real line. From a theoretical point of view, it is of interest to determine the extent of "topological change" that can be achieved by neural networks of a particular architecture. The topological expressivity of a neural network can therefore be measured by the complexity of the most complex topological spaces it can separate and is directly related to the complexity of the binary classification problem.
In this paper we investigate the topological expressivity of ReLU neural networks, which are one of the most commonly used types of neural networks (Glorot et al., 2011; Goodfellow et al., 2016). A \((L+1)\)_-layer neural network (NN)_ is defined by \(L+1\) affine transformations \(T_{\ell}\colon\mathbb{R}^{n_{\ell-1}}\to\mathbb{R}^{n_{\ell}}\), \(x\mapsto A_{\ell}x+b_{\ell}\) for \(A_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}},b_{\ell}\in\mathbb{R}^{n_{\ell}}\) and \(\ell=1,\ldots,L+1\). The tuple \((n_{0},n_{1},\ldots,n_{L},n_{L+1})\) is called the _architecture_, \(L+1\) the _depth_, \(n_{\ell}\) the _width of the \(\ell\)-layer_, \(\max\{n_{1},\ldots,n_{L}\}\) the _width_ of the NN and \(\sum_{\ell=1}^{L}n_{\ell}\) the _size_ of the NN. The entries of \(A_{\ell}\) and \(b_{\ell}\) for \(\ell=1,...,L+1\) are called _weights_ of the NN and the vector space of all possible weights is called the _parameter space_ of an architecture. A ReLU neural network computes the function
\[F=T_{L+1}\circ\sigma_{n_{L}}\circ T_{L}\circ\cdots\circ\sigma_{n_{1}}\circ T_ {1},\]
where \(\sigma_{n}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) is the _ReLU function_ given by \(\sigma_{n}(x)=(\max(0,x_{1}),\ldots,\max(0,x_{n}))\).
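As a minimal sketch of this definition, the function computed by a ReLU NN can be evaluated directly from the affine maps \(T_{\ell}\); the names `As` and `bs` for the weight matrices and bias vectors are our placeholders:

```python
import numpy as np

def relu_nn(x, As, bs):
    """Evaluate F = T_{L+1} o sigma_{n_L} o T_L o ... o sigma_{n_1} o T_1 at x.

    As[l] @ x + bs[l] implements the affine map T_{l+1}; the componentwise
    ReLU sigma is applied after every affine map except the last one.
    """
    for A, b in zip(As[:-1], bs[:-1]):
        x = np.maximum(0.0, A @ x + b)
    return As[-1] @ x + bs[-1]
```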
Note that the function \(F\) is piecewise linear and continuous. In fact, it is known that any continuous piecewise linear function \(F\) can be computed by a ReLU neural network (Arora et al., 2018). However, for a fixed architecture \(A\), the class \(\mathcal{F}_{A}\) of piecewise linear functions that is representable by this architecture is not known (Hertrich et al., 2021; Haase et al., 2023). Conveniently, in the setting of a binary classification problem we are merely interested in the _decision regions_, i.e., \(F^{-1}((-\infty,0])\) and \(F^{-1}((0,\infty))\) rather than the continuous piecewise linear function \(F\) itself.
A common choice to measure the complexity of a topological space \(X\) is the use of algebraic invariants. Homology groups are the essential algebraic structures with which topological data analysis analyzes data (Dey and Wang, 2022) and hence Betti numbers as the ranks of these groups are the obvious measure of topological expressivity. Intuitively, the \(k\)-th Betti number \(\beta_{k}(X)\) corresponds to the number of \((k+1)\)-dimensional holes in the space \(X\) for \(k>0\) and \(\beta_{0}(X)\) corresponds to the number of path-connected components of \(X\). Thus, one can argue that when a space (the support of one class of the data) has many connected components and higher dimensional holes, it is more difficult to separate this space from the rest of the ambient space, e.g., mapping it to the negative line. In Appendix 5.1.2 we present a brief introduction to homology groups. For an in-depth discussion of the aforementioned concepts, we refer to (Hatcher, 2002).
In order to properly separate \(M_{a}\) and \(M_{b}\), the sublevel set \(F^{-1}((-\infty,0])\) of the function \(F\) computed by the neural network should have the same topological complexity as \(M_{a}\). Bianchini and Scarselli (2014) measured the topological complexity of the decision region \(F^{-1}((-\infty,0])\) with the sum of all its Betti numbers. This notion of topological expressivity does not differentiate between connected components and higher dimensional holes. On the other hand, if an architecture is not capable of expressing the Betti numbers of different dimensions of the underlying topological space of the dataset, then for every \(F\in\mathcal{F}_{A}\) there is a set of data points \(U\) such that \(F\) misclassifies every \(x\in U\)(Guss and Salakhutdinov, 2018). Therefore it is of fundamental interest to understand each Betti number of the decision regions and hence we propose the following definition:
**Definition 1**.: _The topological expressivity of a ReLU neural network \(F\colon\mathbb{R}^{d}\to\mathbb{R}\) is defined as the vector \(\beta(F)=(\beta_{k}(F))_{k=0,\ldots,d-1}=(\beta_{k}(F^{-1}((-\infty,0])))_{k=0,\ldots,d-1}\)._
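Betti numbers of a sublevel set are hard to compute exactly in general, but the zeroth one can be estimated numerically. The grid-sampling sketch below (restricted to the unit cube, with a resolution chosen by us) counts the connected components of the mask \(F\leq 0\); estimating the higher Betti numbers would require persistent-homology tooling and is not attempted here:

```python
import numpy as np
from scipy.ndimage import label

def beta0_estimate(F, d=2, res=200):
    """Crude estimate of beta_0(F^{-1}((-inf, 0])) restricted to [0,1]^d.

    F maps an (N, d) array of points to N scalar values.
    """
    axes = [np.linspace(0.0, 1.0, res)] * d
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    mask = (F(pts.reshape(-1, d)) <= 0).reshape((res,) * d)
    n_components = label(mask)[1]  # label returns (labeled_array, num_features)
    return n_components
```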
### Main results
Our main contribution consists of lower and upper bounds on the topological expressivity of ReLU NNs with given architectures. These bounds demonstrate that the growth of the Betti numbers depends on the depth of the network. With unbounded depth, the Betti numbers in every dimension can grow exponentially in the size of the network. However, for a shallow neural network of constant depth, the Betti numbers of the sublevel set are polynomially bounded in the size. This implies that increasing the width of a network while keeping the depth constant cannot produce exponential growth in the Betti numbers. Consequently, if a dataset possesses exponentially high Betti numbers (parameterized by some parameter \(p\)), accurately modeling the dataset requires a deep neural network whenever the size of the neural network is constrained to be polynomial in \(p\), since topological expressivity serves, as discussed above, as a bottleneck measure for effective data representation.
In Theorem 36, the lower bounds for the topological expressivity are given by an explicit formula, from which we can derive the following asymptotic lower bounds:
**Corollary 2**.: _Let \(A=(d,n_{1},\ldots,n_{L},1)\) with \(n_{L}\geq 4d\) and \(M=2\cdot\prod_{\ell=1}^{L-1}\left\lfloor\frac{n_{\ell}}{2d}\right\rfloor\), then there is a ReLU NN \(F\colon\mathbb{R}^{d}\mapsto\mathbb{R}\) with architecture \(A\) such that_
1. \(\beta_{0}(F)\in\Omega(M^{d}\cdot n_{L})\)__
2. \(\beta_{k}(F)\in\Omega(M^{k}\cdot n_{L})\) _for_ \(0<k<d\)_._
_In particular, given \(\mathbf{v}=(v_{1},\ldots,v_{d})\in\mathbb{N}^{d}\), there is a ReLU NN \(F\) of size \(O\left(\log\left(\sum_{k=1}^{d}v_{k}\right)\right)\) such that \(\beta_{k}(F)\geq v_{k+1}\) for all \(k\in\{0,\ldots,d-1\}\)._
Corollary 2 provides a proof for a conjecture on lower bounds for the zeroth Betti number of the decision region given in (Guss and Salakhutdinov, 2018); in fact, it generalizes the statement to arbitrary dimensions. Furthermore, we observe that \(L=2\) hidden layers are already sufficient to increase the topological expressivity as much as we want at the expense of an increased width due to the above lower bound.
**Corollary 3**.: _Given \(v\in\mathbb{N}^{d}\), there exists an NN \(F\colon\mathbb{R}^{d}\to\mathbb{R}\) of depth 2 such that \(\beta_{k}(F)\geq v_{k+1}\) for all \(k\in\{0,\ldots,d-1\}\)._
We obtain the lower bound by making choices for the weights of the NN, nevertheless, we show that our construction is robust with respect to small perturbations. In fact, in Proposition 12 we prove that we actually have an open set in the parameter space such that the respective functions all have the same topological expressivity.
Using an upper bound on the number of linear regions (Serra et al., 2017), we obtain an explicit formula for an upper bound on \(\beta_{k}(F)\) for an arbitrary architecture in Proposition 14. This gives rise to the following asymptotic upper bounds:
**Corollary 4**.: _Let \(F\colon\mathbb{R}^{d}\to\mathbb{R}\) be a neural network of architecture \((d,n_{1},\ldots,n_{L},1)\). Then it holds that \(\beta_{k}(F)\in O\left(\left(\prod_{i=1}^{L}n_{i}\right)^{d^{2}}\right)\) for \(k\in[d-2]\) and \(\beta_{0},\beta_{d-1}\in O\left(\left(\prod_{i=1}^{L}n_{i}\right)^{d}\right)\)._
By combining Corollary 2 and Corollary 4, we can conclude that there is an exponential gap in the topological expressivity between shallow and deep neural networks. This aligns with other popular measures of expressivity, such as the number of linear regions, where similar exponential gaps are known (Serra et al., 2017; Montufar et al., 2014; Montufar, 2017).
### Related Work
#### 1.2.1 Topology and Neural Networks
Recently, a vast stream of research has studied neural networks by means of topology, using empirical methods (Petri and Leitao, 2020; Guss and Salakhutdinov, 2018; Naitzat et al., 2020; Li et al., 2020) as well as theoretical perspectives (Basri and Jacobs, 2017; Melodia and Lenz, 2020; Grigsby and Lindsey, 2022; Bianchini and Scarselli, 2014; Grigsby et al., 2022; Hajji and Istvan, 2020). Bianchini and Scarselli (2014) were the first to use Betti numbers as a complexity measure for decision regions of neural networks. Their work studies NNs with sigmoidal activation functions and shows that there is an exponential gap with respect to the sum of Betti numbers between deep neural networks and neural networks with one hidden layer. However, it yields no insights about the individual Betti numbers. In Guss and Salakhutdinov (2018), the decision regions of ReLU neural networks are studied with empirical methods and an exponential gap for the zeroth Betti number is conjectured. Our results prove this conjecture and extend the results of Bianchini and Scarselli (2014) to the ReLU case (see Section 3 and Appendix). Furthermore, topological characteristics such as connectivity or boundedness of the decision regions are also investigated in (Fawzi et al., 2018; Grigsby and Lindsey, 2022; Grigsby et al., 2022; Nguyen et al., 2018).
#### 1.2.2 Expressivity of (ReLU) neural networks
In addition to the universal approximation theorems (Cybenko, 1989; Hornik, 1991), there is a significant amount of research on the expressivity of neural networks, e.g., indicating that deep neural networks can be exponentially smaller in size than shallow ones. For ReLU neural networks, the number of linear regions is often used as a measure of complexity for the continuous piecewise linear (CPWL) function computed by the network. It is well established that deep ReLU neural networks can compute CPWL functions with exponentially more linear regions than shallow ones, based on various results such as lower and upper bounds on the number of linear regions for a given architecture (Montufar, 2017; Serra et al., 2017; Montufar et al., 2014; Arora et al., 2018). We partially use techniques from their works to establish our bounds on topological expressivity, which offers the advantage of being directly related to the complexity of binary classification problems.
### Notation and Definitions
A function \(F\colon\mathbb{R}^{d}\to\mathbb{R}^{d^{\prime}}\) is continuous piecewise linear (CPWL) if there is a polyhedral complex covering \(\mathbb{R}^{d}\) such that \(F\) is affine linear over each polyhedron of this complex. A linear region of \(F\) is a maximal connected convex subspace \(R\) such that \(F\) is affine linear on \(R\), i.e., a full-dimensional polyhedron of the complex.1 For a survey on polyhedral theory in deep learning see Huchette et al. (2023), and for a general introduction to polyhedra we refer to Schrijver (1986).
Footnote 1: In the literature there exists also a slightly different definition of a linear region leaving out the necessity of the region being convex, but the bounds we use are all applicable to this definition of a linear region.
We denote by \([n]\) the set \(\{1,\ldots,n\}\) and by \([n]_{0}\) the set \(\{0,\ldots,n\}\). We denote by \(\pi_{j}\colon\mathbb{R}^{d}\to\mathbb{R}\) the projection onto the \(j\)-th component of \(\mathbb{R}^{d}\) and by \(p_{j}\colon\mathbb{R}^{d}\to\mathbb{R}^{j}\) the projection onto the first \(j\) components.
A crucial part of our construction is decomposing a unit cube into a varying number of small cubes. Thereby, given \(\mathbf{m}=(m_{1},\ldots,m_{L})\in\mathbb{N}^{L}\) and \(M=\left(\prod_{\ell=1}^{L}m_{\ell}\right)\), the set \(W_{i_{1},\ldots,i_{d}}^{(L,\mathbf{m},d)}\) is defined as the cube of volume \(\frac{1}{M^{d}}\) with "upper right point" \(\frac{1}{M}\cdot(i_{1},\ldots,i_{d})\), i.e., the cube
\(\prod_{k=1}^{d}[\frac{(i_{k}-1)}{M},\frac{i_{k}}{M}]\subset[0,1]^{d}\). The indices \((L,\mathbf{m},d)\) are omitted whenever they are clear from the context.
We denote by \(D^{k}=\{x\in\mathbb{R}^{k}\colon\|x\|<1\}\) the \(k\)-dimensional standard (open) disk and by \(S^{k}=\{x\in\mathbb{R}^{k+1}\colon\|x\|=1\}\) the \(k\)-dimensional standard sphere. We consider these sets as "independent" topological spaces. Therefore, it is justified to abstain from picking a specific norm, since all norms on \(\mathbb{R}^{k}\) are equivalent.
For \(j,k\in\mathbb{N}\) with \(k\leq j\), the _(\(j\)-dimensional open) \(k\)-annulus_ is the product space \(S^{k}\times D^{j-k}\). Note that since \(S^{k}\) has one connected component and a \((k+1)\)-dimensional hole, it holds that \(\beta_{0}(S^{k})=\beta_{k}(S^{k})=1\) and the remaining Betti numbers equal zero. The \(j\)-dimensional \(k\)-annulus is a \(j\)-dimensional manifold that can be thought of as a thickened \(k\)-sphere, and hence its Betti numbers coincide with those of the \(k\)-sphere. In Appendix 5.1.2 the reader can find a more formal treatment of the latter fact.
In contrast to \(D^{k}\) and \(S^{k}\), which are only seen as spaces equipped with a topology, we also consider neighborhoods around certain points \(x\in\mathbb{R}^{d}\) as subsets of \(\mathbb{R}^{d}\). To make a clear distinction, we define the space \(B^{k}_{r}(x)\) as the _\(k\)-dimensional open \(r\)-ball around \(x\) with respect to the \(1\)-norm_, i.e., the space \(\{y\in\mathbb{R}^{k}\colon\|x-y\|_{1}<r\}\). Note that for \(r^{\prime}<r\), the set \(B^{k}_{r}(x)\setminus\overline{B^{k}_{r^{\prime}}(x)}\) is homeomorphic to a \(k\)-dimensional \((k-1)\)-annulus and we will refer to such sets as \((k-1)\)-annuli as well. These annuli will be the building blocks of our construction for the lower bound.
The rest of the paper is devoted to proving the lower and upper bounds. Most of the statements come with an explanation or an illustration. In addition, formal proofs for these statements are also provided in the appendix.
## 2 Lower Bound
In this section, our aim is to construct a neural network \(F\colon\mathbb{R}^{d}\to\mathbb{R}\) of depth \(L+2\) such that \(\beta_{k}(F)\) grows exponentially in the size of the neural network for all \(k\in[d-1]_{0}\).
We propose a construction that is restricted to architectures where the widths \(n_{1},\ldots,n_{L+1}\) of all hidden layers but the last one are divisible by \(2d\). This construction, however, is generalized for any architecture where the dimension of all hidden layers is at least \(2d\) by inserting at most \(2d\) auxiliary neurons at each layer at which a zero map is computed. Correspondingly, one obtains a lower bound by rounding down the width \(n_{\ell}\) at each layer to the largest possible multiple of \(2d\). In particular, a reduction to the case in Theorem 35 does not have an effect on the asymptotic size of the NN.
The key idea is to construct \(F=f\circ h\) as a consecutive execution of two neural networks \(f\) and \(h\), where the map \(h\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a ReLU NN with \(L\) hidden layers that identifies exponentially many regions with each other. More precisely, \(h\) cuts the unit cube of \(\mathbb{R}^{d}\) into exponentially many small cubes \(W_{i_{1},\ldots,i_{d}}\subset[0,1]^{d}\) and maps each of these cubes to the whole unit cube by scaling and mirroring. The one-hidden-layer ReLU NN \(f\) then cuts the unit cube into pieces on which \(f\) takes alternately exclusively positive and exclusively negative values. Since \(h\) maps all \(W_{i_{1},\ldots,i_{d}}\) to \([0,1]^{d}\) by scaling and mirroring, every \(W_{i_{1},\ldots,i_{d}}\) is cut into positive-valued and negative-valued regions by the composition \(f\circ h\) in the same way as \([0,1]^{d}\) is cut by \(f\), up to mirroring. The cutting of the unit cube and the mirroring of the small cubes in the map to \([0,1]^{d}\) are chosen in such a way that the subspaces on which \(F\) takes negative values form \(k\)-annuli for every \(k\in[d-1]\). Since \(h\) cuts the unit cube into exponentially many small cubes, we obtain exponentially many \(k\)-annuli for every \(k\in[d-1]\) in \(F^{-1}((-\infty,0))\). We will then argue that \(F^{-1}((-\infty,0))\) is homotopy equivalent to \(F^{-1}((-\infty,0])\).
The idea of constructing a ReLU neural network that folds the input space goes back to Montufar et al. (2014), where the construction was used to show that a deep neural network with ReLU activation function can have exponentially many linear regions. Using their techniques,
we first build a \(1\)-hidden-layer NN \(h^{(1,m,d)}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) for \(m\in\mathbb{N}\) even that folds the input space, mapping \(m^{d}\) many small cubes \(W^{(1,m,d)}_{i_{1},\ldots,i_{d}}\subset[0,1]^{d}\) by scaling and mirroring to \([0,1]^{d}\). More precisely, the NN \(h^{(1,m,d)}\) has \(m\cdot d\) many neurons in the single hidden layer, which are partitioned into \(d\) groups of \(m\) neurons each. The weights are chosen such that the output of the neurons in one group depends only on one input variable and divides the interval \([0,1]\) into \(m\) subintervals of equal length, each of which is then mapped to the unit interval \([0,1]\) by the output neuron. Figure 1 illustrates this construction for \(m=2\). In Appendix 5.2.1 or in Montufar et al. (2014), the reader can find an explicit construction of \(h^{(1,m,d)}\).
The map \(h^{(1,m,d)}\) identifies only \(O(m^{d})\) many cubes with each other. To subdivide the input space into exponentially many cubes and map them to the unit cube, we need a deep neural network. For this purpose, we utilize a vector \(\mathbf{m}\) of folding factors instead of a single number \(m\). Let \(\mathbf{m}=(m_{1},\ldots,m_{L})\in\mathbb{N}^{L}\) with \(m_{\ell}\) even for all \(\ell\in[L]\) and define the neural network \(h^{(L,\mathbf{m},d)}\) with \(L\) hidden layers as \(h^{(L,\mathbf{m},d)}=h^{(1,m_{L},d)}\circ\cdots\circ h^{(1,m_{1},d)}\). Since each of the \(m_{1}^{d}\) cubes that results from the subdivision by the first layer is mapped back to \([0,1]^{d}\), each cube is subdivided again into \(m_{2}^{d}\) cubes by the subsequent layer. Thus, after \(L\) such layers, we obtain a subdivision of the input space into \(\left(\prod_{\ell=1}^{L}m_{\ell}\right)^{d}\) cubes.
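The one-dimensional folding behind \(h^{(1,m,d)}\) can be realized with \(m\) ReLU units per coordinate. The sketch below follows the standard sawtooth construction of Montufar et al. (2014); the concrete weights are one possible choice and not necessarily those of the explicit construction in Appendix 5.2.1:

```python
import numpy as np

def fold_1d(x, m):
    """pi_j o h^{(1,m,1)}: maps each interval [k/m, (k+1)/m] onto [0,1],
    with alternating orientation, using m ReLU units."""
    y = np.maximum(0.0, m * x)  # slope m on the first piece
    for k in range(1, m):
        # each breakpoint at k/m flips the slope between +m and -m
        y += (-1) ** k * 2 * m * np.maximum(0.0, x - k / m)
    return y

# Composing layers multiplies the folding factors: applied coordinatewise,
# fold_1d(fold_1d(x, m1), m2) realizes one coordinate of h^{(2,(m1,m2),d)},
# so the unit interval is folded into M = m1 * m2 pieces.
```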
In the following, we define variables that are fixed but arbitrary: \(L\in\mathbb{N}\), \(\mathbf{m}=(m_{1},\ldots,m_{L})\in\mathbb{N}^{L}\) and \(M=\left(\prod_{\ell=1}^{L}m_{\ell}\right)\) with \(m_{\ell}>1\) even for all \(\ell\in[L]\). The following lemma states that \(h^{(L,\mathbf{m},d)}\) actually enjoys the aforementioned properties.
**Lemma 5**.: _(cf. Montufar et al. (2014)) Let \(d\in\mathbb{N}\), then:_
1. \(h^{(L,\mathbf{m},d)}(W^{(L,\mathbf{m},d)}_{(i_{1},\ldots,i_{d})})=[0,1]^{d}\)__
2. \(\pi_{j}\circ h^{(L,\mathbf{m},d)}_{|W^{(L,\mathbf{m},d)}_{(i_{1},\ldots,i_{d} )}}(x_{1},\ldots,x_{d})=\left\{\begin{array}{ll}M\cdot x_{j}-(i_{j}-1)&i_{j }\text{ odd}\\ -M\cdot x_{j}+i_{j}&i_{j}\text{ even}\end{array}\right.\)__
_for all \((i_{1},\ldots,i_{d})\in[M]^{d}\)._
We now define cutting points as the points that are mapped to the point \((1,1,\ldots,1,0)\) by the map \(h^{(L,\mathbf{m},d)}\), since they will play a central role in counting the annuli in the sublevel set of \(F\).
**Definition 6**.: _We call a point \(x\in[0,1]^{d}\) a cutting point if it has coordinates of the form \(x_{i}=\frac{x_{i}^{\prime}}{M}\) for all \(i\in\{1,\ldots,d\}\), where the \(x_{i}^{\prime}\) are odd integers for \(1\leq i\leq d-1\) and \(x_{d}^{\prime}\) is an even integer._
Figure 1: The graph of the function \(\pi_{j}\circ h^{(1,2,d)}\) that folds the unit interval, i.e., mapping the interval \([0,0.5]\) and \([0.5,1]\) to the unit interval. This function is realised by a hidden layer with \(2\) hidden neurons.
Next, for \(w\geq 2\), we build a 1-hidden-layer neural network \(\hat{g}^{(w,d)}\colon\mathbb{R}^{d}\to\mathbb{R}\) that cuts the \(d\)-dimensional unit cube into \(w\) pieces and maps the pieces alternately to \(\mathbb{R}_{\geq 0}\) and \(\mathbb{R}_{\leq 0}\), respectively. We omit the indices \(w\) and \(d\) whenever they are clear from the context.
In order to build the neural network, we fix \(w\) and \(d\) and define the maps \(\hat{g}_{q}\colon\mathbb{R}^{d}\to\mathbb{R}\), \(q=0,\ldots,w+1\) by
\[\hat{g}_{q}(x)=\left\{\begin{array}{ll}\max\{0,\mathbf{1}^{T}x\}&q=0\\ \max\{0,\mathbf{1}^{T}x-\frac{1}{4}\}&q=w+1\\ \max\{0,2(\mathbf{1}^{T}x-(2q-1)/8w)\}&\text{else}\end{array}\right.\]
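The units above translate directly into code. How the output neuron combines these \(w+2\) units into \(\hat{g}^{(w,d)}\) (the coefficients realizing the alternating sign pattern of Lemma 7) is specified in the appendix, so we only transcribe the units themselves:

```python
import numpy as np

def g_hat_q(x, q, w):
    """The hidden unit \\hat{g}_q : R^d -> R defined above, for a point x."""
    s = float(np.sum(x))  # 1^T x
    if q == 0:
        return max(0.0, s)
    if q == w + 1:
        return max(0.0, s - 0.25)
    return max(0.0, 2.0 * (s - (2 * q - 1) / (8 * w)))
```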
Later in this section, we will iteratively construct \(k\)-annuli in the sublevel set of \(F\) for all \(k\in[d-1]\). In order to ensure that these annuli are disjoint, it is convenient to place them around the cutting points. To achieve this, we mirror the map \(\hat{g}\) before precomposing it with \(h\). The mirroring transformation that maps the origin to the point \((1,\ldots,1,0)\) is an affine map \(t:[0,1]^{d}\to[0,1]^{d}\) defined by \(t(x_{1},x_{2},\ldots,x_{d})=(1-x_{1},1-x_{2},\ldots,1-x_{d-1},x_{d})\). We define the neural network \(g=\hat{g}\circ t\) as the consecutive execution of \(\hat{g}\) and \(t\).
**Lemma 7**.: _Let \(d,w\in\mathbb{N}\) with \(w\) odd and_
\[R_{q}=\{x\in[0,1]^{d}:\frac{q}{4w}<\|(1,1,\ldots,1,0)-x\|_{1}<\frac{q+1}{4w}\}.\]
_Then there exists a 1-hidden-layer neural network \(g^{(w,d)}\colon\mathbb{R}^{d}\to\mathbb{R}\) of width \(w+2\) such that \(g^{(w,d)}(R_{q})\subseteq(-\infty,0)\) for all odd \(q\in[w-1]_{0}\), \(g^{(w,d)}(R_{q})\subseteq(0,\infty)\) for all even \(q\in[w-1]_{0}\), and \(g^{(w,d)}(x)=0\) for all \(x\in[0,1]^{d}\) with \(\|(1,1,\ldots,1,0)-x\|_{1}\geq\frac{1}{4}\)._
Lemma 31 in the appendix characterizes the regions around cutting points that admit positive respectively negative values under the map \(g^{(w,d)}\circ h^{(L,\mathbf{m},d)}\). We focus on the regions that admit negative values, i.e., the space \(Y_{d,w}\coloneqq(g^{(w,d)}\circ h^{(L,\mathbf{m},d)})^{-1}((-\infty,0))\) and observe that we obtain \(d\)-dimensional \((d-1)\)-annuli around each cutting point.
Combining Lemma 31 and further observations about the number of cutting points (cf. Observation 32 in the appendix), we can finally describe \(Y_{d,w}\) as a topological space.
**Proposition 8**.: _The space \(Y_{d,w}\) is homeomorphic to the disjoint union of \(p_{d}=\frac{M^{(d-1)}}{2^{d-1}}\cdot\left(\frac{M}{2}-1\right)\cdot\left\lceil \frac{w}{2}\right\rceil\) many \((d-1)\)-annuli and \(p^{\prime}_{d}=\frac{M^{(d-1)}}{2^{d-2}}\cdot\left\lceil\frac{w}{2}\right\rceil\) many disks, that is,_
\[Y_{d,w}\cong\coprod_{k=1}^{p_{d}}(S^{d-1}\times[0,1])\sqcup\coprod_{k=1}^{p^{ \prime}_{d}}D^{d}.\]
In order to obtain exponentially many \(k\)-annuli for all \(k\in[d-1]\), we follow a recursive approach: At each step, we start with a \(k\)-dimensional space that has exponentially many \(j\)-annuli for all \(j\in[k-1]\). We then cross this space with the interval \([0,1]\), transforming the \(k\)-dimensional \(j\)-annuli into \((k+1)\)-dimensional \(j\)-annuli. Finally, we "carve" \((k\!+\!1)\)-dimensional \(k\)-annuli in this newly formed product space. To allow us flexibility with respect to the numbers of annuli carved in different dimensions, we fix an arbitrary vector \(\mathbf{w}=(w_{1},\ldots,w_{d-1})\in\mathbb{N}^{d-1}\) such that \(\sum_{i=1}^{d-1}(w_{i}+2)=n_{L+1}\). We iteratively define the \(1\)-hidden layer neural network \(f^{(w_{1},\ldots,w_{k-1})}\colon\mathbb{R}^{k}\to\mathbb{R}\) of width \(n_{L+1}\) by \(f^{(w_{1})}=g^{(w_{1},2)}\) and
\[f^{(w_{1},\ldots,w_{k-1})}=f^{(w_{1},\ldots,w_{k-2})}\circ p_{k-1}+g^{(w_{k-1},k)}\]
for \(k\leq d\). Roughly speaking, the following lemma states that the carving map does not interfere with the other maps, i.e., there is enough space in the unit cubes to place the \(k\)-annuli after having placed all \(k^{\prime}\)-annuli (\(k^{\prime}<k\)) in the same, inductive manner.
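Structurally, this recursion can be sketched as follows, treating the carving maps from Lemma 7 as given black boxes; the indexing convention `g_maps[k-2]` for \(g^{(w_{k-1},k)}\) is ours:

```python
def make_f(g_maps):
    """Build f^{(w_1,...,w_{d-1})} from carving maps g_maps, where
    g_maps[k-2] is assumed to implement g^{(w_{k-1}, k)} : R^k -> R."""
    def f(x):  # x is a sequence of k >= 2 coordinates
        k = len(x)
        value = g_maps[k - 2](x)   # carve (k-1)-annuli in dimension k
        if k > 2:
            value += f(x[:-1])     # f^{(w_1,...,w_{k-2})} composed with p_{k-1}
        return value
    return f
```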
**Lemma 9**.: _For \(k\leq d\) and \(\mathbf{w}=(w_{1},\ldots,w_{d-1})\in\mathbb{N}^{d-1}\) it holds that_
1. \(f^{(w_{1},\ldots,w_{k-2})}\circ p_{k-1}(x)\neq 0\implies g^{(w_{k-1},k)}(x)=0\) _and_
2. \(g^{(w_{k-1},k)}(x)\neq 0\implies f^{(w_{1},\ldots,w_{k-2})}\circ p_{k-1}(x)=0\)__
_for all \(x\in[0,1]^{k}\)._
Using Lemma 9 and the fact that the folding maps \(h^{(L,\mathbf{m},k)}\) are compatible with projections (cf. Lemma 33 in Appendix), we can make sure that we can construct the cuts iteratively so that we obtain \(k\)-annuli for every \(k\in[d-1]\), which is stated in the following lemma.
**Lemma 10**.: _For \(2\leq k\leq d\), the space \(X_{k}\coloneqq(f^{(w_{1},\ldots,w_{k-1})}\circ h^{(L,\mathbf{m},k)})^{-1}((- \infty,0))\) satisfies_
\[X_{k}=(X_{k-1}\times[0,1])\sqcup Y_{k,w_{k-1}}\]
_with \(X_{1}\coloneqq\emptyset\)._
Lemma 10, Proposition 8 and the disjoint union axiom (Proposition 23 in Appendix 5.1.2) allow us to compute the Betti numbers of the decision region of \(F\coloneqq f^{(w_{1},\ldots,w_{d-1})}\circ h^{(L,\mathbf{m},d)}\) as stated in Theorem 35 in the appendix. One can easily generalize this statement by rounding down the widths \(n_{1},\ldots,n_{L}\) to the nearest even multiple of \(d\):
**Theorem 11**.: _Given an architecture \(A=(d,n_{1},\ldots,n_{L},1)\) with \(n_{\ell}\geq 2d\) for all \(\ell\in[L]\) and numbers \(w_{1},\ldots,w_{d-1}\in\mathbb{N}\) such that \(\sum_{k=1}^{d-1}(w_{k}+2)=n_{L}\), there is a neural network \(F\in\mathcal{F}_{A}\) such that_
1. \(\beta_{0}(F^{-1}((-\infty,0)))=\sum_{k=2}^{d}\frac{M^{(k-1)}}{2^{k-1}}\cdot \left(\frac{M}{2}+1\right)\cdot\left\lceil\frac{w_{k-1}}{2}\right\rceil\)__
2. \(\beta_{k}(F^{-1}((-\infty,0)))=\frac{M^{k}}{2^{k}}\cdot\left(\frac{M}{2}-1\right)\cdot\left\lceil\frac{w_{k}}{2}\right\rceil\) _for_ \(0<k<d\)_,_
_where \(M=\prod_{\ell=1}^{L-1}2\cdot\lfloor\frac{n_{\ell}}{2d}\rfloor\)._
In Appendix 5.2.7 we modify the construction slightly by adding a small constant \(b\) to the output layer, such that we obtain in Theorem 36 the same lower bounds for \(\beta_{k}(F)=\beta_{k}(F^{-1}((-\infty,0]))\). The special case \(\left\lfloor\frac{w_{1}}{2}\right\rfloor=\ldots=\left\lfloor\frac{w_{d-1}}{2}\right\rfloor\) then corresponds precisely to Corollary 2.
In order to obtain the lower bound we choose the weights explicitly, but the construction is robust to small perturbations. This relies on the fact that there are only finitely many linear regions and that no two hyperplanes of non-linearity introduced at different applications of the ReLU function coincide; hence one can perturb the weights slightly such that the combinatorial structure of the polyhedral complex is preserved, from which we easily conclude that all the annuli persist. In fact, if we denote by \(\Phi\colon\mathbb{R}^{D}\to C(\mathbb{R}^{d})\) the map that assigns to a vector of weights the function computed by the ReLU neural network with these weights, in Section 5.3 we prove the following:
**Proposition 12**.: _There is an open set \(U\subseteq\mathbb{R}^{D}\) in the parameter space of the architecture \(A=(d,n_{1},\ldots,n_{L},1)\) such that \(\Phi(u)\) restricted to the unit cube has at least the same topological expressivity as \(F\) in Theorem 36 for all \(u\in U.\)_
As mentioned previously, the sum of Betti numbers, the notion of topological expressivity used in Bianchini and Scarselli (2014), does not provide us with an understanding of holes of different dimensions. On the other hand, our bounds are clearly an extension of this result. In addition, the dimension-wise lower bound allows further implications, one of them being a lower bound on the _Euler characteristic_, which is the alternating sum \(\chi(X)=\sum_{k=0}^{d-1}(-1)^{k}\beta_{k}(X)\) of the Betti numbers.
**Corollary 13**.: _Let \(A\) be the architecture as in Theorem 36, then there is a ReLU NN \(F\colon\mathbb{R}^{d}\mapsto\mathbb{R}\) with architecture \(A\) such that \(\chi(F^{-1}((-\infty,0]))\in\Omega\left(M^{d}\cdot\sum\limits_{i=1}^{d-1}w_{i}\right)\), where \(\chi(F^{-1}((-\infty,0]))\) denotes the Euler characteristic of the space \(F^{-1}((-\infty,0])\)._
## 3 Upper Bound
In this section we derive an upper bound for \(\beta_{k}(F)\) for a ReLU neural network \(F\colon\mathbb{R}^{d}\to\mathbb{R}\) for all \(k\in[d-1]\), showing that they are polynomially bounded in the width using an upper bound on the linear regions of \(F\). A linear region \(R\) of \(F\) contains at most one maximal convex polyhedral subspace where \(F\) takes on exclusively non-negative function values. Intuitively, every such polyhedral subspace can be in the interior of at most one \(d\)-dimensional hole of the sublevel set \(F^{-1}((-\infty,0])\) and thus the number of linear regions is an upper bound for \(\beta_{d-1}(F)\). In the following proposition we will formalize this intuition and generalize it to \(\beta_{k}(F)\) for all \(k\in[d-1]_{0}\).
**Proposition 14**.: _Let \(F\colon\mathbb{R}^{d}\to\mathbb{R}\) be a neural network of architecture \((d,n_{1},\ldots,n_{L},1)\). Then it holds that \(\beta_{0}(F)\leq\sum_{(j_{1},\ldots,j_{L})\in J}\quad\prod_{\ell=1}^{L}\binom {n_{\ell}}{j_{\ell}}\) and for all \(k\in[d-1]\) that_
\[\beta_{k}(F)\leq\binom{\sum_{(j_{1},\ldots,j_{L})\in J}\quad\prod_{\ell=1}^{L }\binom{n_{\ell}}{j_{\ell}}}{d-k-s},\]
_where \(J=\big{\{}(j_{1},\ldots,j_{L})\in\mathbb{Z}^{L}\colon 0\leq j_{\ell}\leq \min\{d,n_{1}-j_{1},\ldots,n_{\ell-1}-j_{\ell-1}\}\) for all \(\ell=1,\ldots,L\big{\}}\) and \(s\in[d]\) is the dimension of the lineality space of a refinement of the canonical polyhedral complex of \(F\)._
Proof sketch.: Theorem 1 in [Serra et al., 2017] states that \(F\) has at most \(r\coloneqq\sum_{(j_{1},\ldots,j_{L})\in J}\prod_{l=1}^{L}\binom{n_{l}}{j_{l}}\) linear regions. In Section 5.4 we will provide a formal proof for the statement that we sketch here. Let \(\mathcal{P}\) be the canonical polyhedral complex of \(F\), i.e, \(F\) is affine linear on all polyhedra in \(\mathcal{P}\) (c.f Definition 38 in the appendix) and \(\mathcal{P}^{-}\) be a subcomplex of a refinement of \(\mathcal{P}\) such that \(F\) takes on exclusively non-positive values on all polyhedra in \(\mathcal{P}^{-}\). Therefore, the support \(|\mathcal{P}^{-}|\) of \(\mathcal{P}^{-}\) equals \(F^{-1}((-\infty,0])\) and we then proceed by showing the chain of inequalities
\[\beta_{k}(F)=\beta_{k}(|\mathcal{P}^{-}|)\leq\#\mathcal{P}_{k+1}\leq\binom{r}{d-k-s}\]
using cellular homology and polyhedral geometry, where \(\mathcal{P}_{k+1}\subseteq\mathcal{P}\) is the set of \((k+1)\)-dimensional polyhedra in \(\mathcal{P}\). This concludes the proof, since it also holds that \(\beta_{0}(|\mathcal{P}^{-}|)\leq\#\mathcal{P}_{d}=r\).
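The quantity \(r\) and the resulting bounds are easy to evaluate in code. The sketch below enumerates the index set \(J\) from Proposition 14 by brute force (fine for small architectures) and takes the lineality-space dimension \(s\) as a given parameter:

```python
from itertools import product
from math import comb

def serra_region_bound(d, widths):
    """r = sum over (j_1,...,j_L) in J of prod_l binom(n_l, j_l)."""
    total = 0
    ranges = [range(0, min(d, n) + 1) for n in widths]
    for js in product(*ranges):
        # membership in J: j_l <= min{d, n_1 - j_1, ..., n_{l-1} - j_{l-1}}
        if all(js[l] <= min([d] + [widths[i] - js[i] for i in range(l)])
               for l in range(len(widths))):
            term = 1
            for n, j in zip(widths, js):
                term *= comb(n, j)
            total += term
    return total

def betti_upper_bound(k, d, widths, s):
    """Upper bound on beta_k(F) from Proposition 14 (assumes d - k - s >= 0)."""
    r = serra_region_bound(d, widths)
    return r if k == 0 else comb(r, d - k - s)
```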
This implies that we obtain an upper bound that is polynomial in the width:
**Corollary 4**.: _Let \(F\colon\mathbb{R}^{d}\to\mathbb{R}\) be a neural network of architecture \((d,n_{1},\ldots,n_{L},1)\). Then it holds that \(\beta_{k}(F)\in O\left(\left(\prod_{i=1}^{L}n_{i}\right)^{d^{2}}\right)\) for \(k\in[d-2]\) and \(\beta_{0},\beta_{d-1}\in O\left(\left(\prod_{i=1}^{L}n_{i}\right)^{d}\right)\)._
## 4 Conclusion, Limitations and Outlook
Since it is widely accepted that data sets often have nontrivial topologies, investigating a neural network's ability to capture topological properties, as characterized by all Betti numbers, is an exciting and essential question that yields insight into the nature of ReLU networks. In an attempt to shed light on this question, we proved lower and upper bounds for the topological expressivity of ReLU neural networks with a given architecture. Our bounds give a rough estimate of how large an architecture needs to be in order to be at least theoretically able to capture the topological complexity of a data set in these dimensions; in particular, in the first few dimensions, where Betti numbers are computable in practice.
As a byproduct of our analysis we saw that two hidden layers are sufficient to increase the topological expressivity as much as we want at the expense of an increased width. Even though Betti numbers are a common complexity measure for topological spaces in data analysis, they only provide a coarse classification, i.e., two spaces can have the same Betti numbers but still look very different. Although there are finer topological invariants such as cohomology rings or homotopy groups, from a computational point of view, Betti numbers are a good trade-off between the ability to capture differences of spaces and tractability. Nevertheless, it might be interesting to find further topological or geometrical invariants to investigate the expressivity of neural networks in the setting of classification tasks.
Even though our lower bounds apply under certain restrictions on the neural network architecture, this does not pose a big limitation for our purposes. Since our results are of a theoretical and mostly asymptotic nature, a constant factor (in the hidden layers resp. in the last hidden layer) is negligible. Besides, since our layers merely consist of many small layers put in parallel, one could also concatenate the layers in order to achieve a smaller width while maintaining all the asymptotic results.
It seems straightforward that the construction in Section 2 can be adapted to neural networks with sigmoidal activation functions in a "smoothed" way. Therefore we conjecture that the same lower bound holds for the topological expressivity of neural networks with sigmoidal activation function, which would generalise the lower bound for the zeroth Betti number given in Bianchini and Scarselli (2014) to all Betti numbers.
AcknowledgementsThe authors would like to thank Christoph Hertrich and Martin Skutella for many valuable discussions and their careful proofreading. Moreover, we thank the various anonymous referees for their remarks.
|
2307.07389 | Learning Sparse Neural Networks with Identity Layers | The sparsity of Deep Neural Networks is well investigated to maximize the
performance and reduce the size of overparameterized networks as possible.
Existing methods focus on pruning parameters in the training process by using
thresholds and metrics. Meanwhile, feature similarity between different layers
has not been discussed sufficiently before, which could be rigorously proved to
be highly correlated to the network sparsity in this paper. Inspired by
interlayer feature similarity in overparameterized models, we investigate the
intrinsic link between network sparsity and interlayer feature similarity.
Specifically, we prove that reducing interlayer feature similarity based on
Centered Kernel Alignment (CKA) improves the sparsity of the network by using
information bottleneck theory. Applying such theory, we propose a plug-and-play
CKA-based Sparsity Regularization for sparse network training, dubbed CKA-SR,
which utilizes CKA to reduce feature similarity between layers and increase
network sparsity. In other words, layers of our sparse network tend to have
their own identity compared to each other. Experimentally, we plug the proposed
CKA-SR into the training process of sparse network training methods and find
that CKA-SR consistently improves the performance of several State-Of-The-Art
sparse training methods, especially at extremely high sparsity. Code is
included in the supplementary materials. | Mingjian Ni, Guangyao Chen, Xiawu Zheng, Peixi Peng, Li Yuan, Yonghong Tian | 2023-07-14T14:58:44Z | http://arxiv.org/abs/2307.07389v1 | # Learning Sparse Neural Networks with Identity Layers
###### Abstract
The sparsity of Deep Neural Networks is well investigated to maximize the performance and reduce the size of overparameterized networks as possible. Existing methods focus on pruning parameters in the training process by using thresholds and metrics. Meanwhile, feature similarity between different layers has not been discussed sufficiently before, which could be rigorously proved to be highly correlated to the network sparsity in this paper. Inspired by interlayer feature similarity in overparameterized models, we investigate the intrinsic link between network sparsity and interlayer feature similarity. Specifically, we prove that reducing interlayer feature similarity based on Centered Kernel Alignment (CKA) improves the sparsity of the network by using information bottleneck theory. Applying such theory, we propose a plug-and-play **CKA**-based **S**parsity **R**egularization for sparse network training, dubbed CKA-SR, which utilizes CKA to reduce feature similarity between layers and increase network sparsity. In other words, layers of our sparse network tend to have their own identity compared to each other. Experimentally, we plug the proposed CKA-SR into the training process of sparse network training methods and find that CKA-SR consistently improves the performance of several State-Of-The-Art sparse training methods, especially at extremely high sparsity. Code is included in the supplementary materials.
Keywords:Network sparsity Inter-layer feature similarity Network compression.
## 1 Introduction
Deep Neural Networks (DNNs) achieve great success on many important tasks, including but not limited to computer vision and natural language processing. Such accurate solutions highly rely on overparameterization, which results in a tremendous waste of resources. A variety of methods are proposed to solve such issues, including model pruning [26, 56, 29] and sparse training [38, 33, 1, 40]. Sparse training aims to train a sparse network from scratch, which reduces both training and inference expenses.
A recent study [15] shows the close relation between overparameterization and interlayer feature similarity (_i.e._, similarity between features of different layers, as shown in Figure 1(a)). Specifically, overparameterized models possess markedly greater similarity between features of different layers. From these facts, we know that both interlayer feature similarity and network sparsity are deeply related to overparameterization. Inspired by this, we utilize the interlayer feature similarity to increase network sparsity while preserving accuracy at a high level, namely by adopting similarity methods to solve sparsity problems.
Following this path, we survey similarity measurements of features, including Canonical Correlation Analysis (CCA) [42; 41; 18] and Centered Kernel Alignment (Linear-CKA and RBF-CKA) [9], etc. Among these measurements, CKA measurement is advanced and robust, for it reliably identifies correspondences between representations in networks with different widths trained from different initializations. Theoretically, CKA measurement has many good properties, including invariance to orthogonal transform and isotropic scaling, and close correlation with mutual information [25]. The advantages of CKA make it possible to propose robust methods to solve sparsity problems with interlayer feature similarity.
To this end, we propose **CKA**-based **S**parsity **R**egularization (CKA-SR) by introducing the CKA measurement into the training loss as a regularization term, which is plug-and-play and forces the reduction of interlayer feature similarity. Besides, we further prove that the proposed CKA-SR increases the sparsity of the network by using information bottleneck (IB) theory [7; 4; 6; 25]. Specifically, we mathematically prove that our CKA-SR reduces the mutual information between the features of the intermediate and input layers, which is one of the optimization objectives of the information bottleneck method. Further, we prove that reducing the mutual information above is equivalent to increasing network sparsity. By these proofs, we demonstrate the equivalence of reducing interlayer feature similarity and increasing network sparsity, which heuristically investigates the intrinsic link between interlayer feature similarity and network sparsity.
To validate the proposed CKA-SR, we conduct experiments on several advanced sparse training methods, such as Lottery Ticket Hypothesis (LTH) [33], Gradient Signal Preservation (GraSP) [40], Dual Lottery Ticket Hypothesis (DLTH) [38], and Random Sparse Training [1]. Specifically, we introduce our CKA-SR regularization to the training process of these sparse training methods and thus achieve consistent performance gains across these methods. Moreover, we introduce CKA-SR to the training and finetuning process of network pruning methods such as l1-norm filter pruning [3], non-structured weight-level pruning [56], and knapsack channel pruning [2], and thus achieve performance improvements. In short, CKA-SR boosts the performance of sparse training and network pruning methods. Appendix and codes are included in the supplementary materials. See them in [https://anonymous.4open.science/r/Learning-Sparse-Neural-Networks-with-Identity-Layers-9369](https://anonymous.4open.science/r/Learning-Sparse-Neural-Networks-with-Identity-Layers-9369).
Our contributions are four-fold:
* We heuristically investigate the intrinsic link between interlayer feature similarity and network sparsity. To the best of our knowledge, we are the first to find that reducing interlayer feature similarity directly increases network sparsity.
* Theoretically, we prove the equivalence of interlayer feature similarity reduction, interlayer mutual information reduction, and network sparsity increment.
* We propose Identity Layers Regularization (ILR), which increases network sparsity and weakens overparameterization by explicitly reducing interlayer feature similarity with few-shot samples. Specifically, we implement ILR as CKA-SR.
* Experimentally, our CKA-SR regularization term increases network sparsity and improves the performance of multiple sparse training methods and several pruning methods.
## 2 Related Works and Preliminaries
### Centered Kernel Alignment
Here we provide the formalization of Centered Kernel Alignment (CKA). For the feature map \(X\in\mathbb{R}^{n\times p_{1}}\) and feature map \(Y\in\mathbb{R}^{n\times p_{2}}\) (where \(n\) is the number of examples, while \(p_{1}\) and \(p_{2}\) are the number of neurons), we use kernels \(k\) and \(l\) to transform \(X\) and \(Y\) into \(K\) and \(L\) matrices, where the elements are defined as: \(K_{ij}=k(x_{i},x_{j}),L_{ij}=l(y_{i},y_{j})\). Further, the formalization of CKA-based similarity measurement \(\mathcal{F}\) of \(K\) and \(L\) matrices could be formulated as:
\[\mathbf{CKA}(K,L)=\frac{\text{HSIC}(K,L)}{\sqrt{\text{HSIC}(K,K)\text{HSIC}(L,L)}} \tag{1}\]
Figure 1: Reduction of interlayer feature similarity with CKA-SR. (a) Interlayer feature similarity visualization of baseline models. (b) Interlayer feature similarity visualization of models pre-trained with CKA-SR. (c) Comparison of weight distribution between baseline and CKA-SR models.
where HSIC is the empirical estimator of the Hilbert-Schmidt Independence Criterion [19]. The CKA-based similarity measurement for the linear kernel \(k(x,y)=x^{T}y\) then reads:
\[\mathbf{CKA}_{Linear}(X,Y)=\frac{||Y^{T}X||_{F}^{2}}{||X^{T}X||_{F}||Y^{T}Y||_{F}} \tag{2}\]
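For concreteness, the linear CKA of Eq. (2) can be computed in a few lines. The sketch below centers the features first, mirroring the centering performed inside the HSIC estimator; variable names are illustrative.

```python
import torch

def linear_cka(X, Y):
    """Linear CKA of Eq. (2) for feature matrices X: (n, p1), Y: (n, p2).
    Features are centered first, as in the HSIC estimator."""
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    num = torch.norm(Y.t() @ X) ** 2                   # ||Y^T X||_F^2
    den = torch.norm(X.t() @ X) * torch.norm(Y.t() @ Y)
    return num / den
```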
### Interlayer feature similarity of overparameterized models
Nguyen _et al._[15] investigate the relationship between overparameterized models and similar feature representations. Specifically, wide ResNets, deep ResNets, and ResNets trained on small datasets possess extremely similar feature representations between adjacent layers, a phenomenon named the block structure. From this they infer an empirically verified hypothesis that _overparameterized models possess similar feature representations_. Similar observations also appear in ViT-based [22] architectures, so the block structure appears to be a common problem across architectures. This prompts us to explore the potential benefits of reducing interlayer feature similarity and learning sparse neural networks with identity layers.
## 3 Methodology
### Sparsity regularization based on Centered Kernel Alignment
As discussed above, the interlayer feature similarity of overparameterized models motivates us to learn sparse neural networks with identity layers. We choose Centered Kernel Alignment (CKA) as the basis of our method, as it is widely applied to measuring feature similarity between different layers. Conversely, high similarity between layers indicates overparameterization of deep neural networks, so the CKA similarity measurement can be regarded as a scale of overparameterization. This suggests directly reducing this measurement to address overparameterization. Moreover, CKA has several desirable properties, including robustness, invariance to orthogonal transformation, and invariance to scale transformation. These properties make CKA ideal for designing a regularization term against overparameterization.
Specifically, we add a CKA-based regularization term to the training loss function. For a model with empirical loss (cross-entropy loss) \(\mathcal{L}_{\mathcal{E}}\), the training loss with CKA-SR is formalized as:
\[\mathcal{L}=\mathcal{L}_{\mathcal{E}}+\mathcal{L}_{\mathcal{C}}=\mathcal{L}_{ \mathcal{E}}+\beta\cdot\sum_{s=1}^{S}\sum_{i=0}^{N_{s}}\sum_{j=0,j\neq i}^{N_{ s}}w_{ij}\mathbf{CKA}_{Linear}(X_{i},X_{j}) \tag{3}\]
where \(\mathcal{L}_{\mathcal{C}}\) is CKA-SR and \(\beta\) is the weight of \(\mathcal{L}_{\mathcal{C}}\). \(S\) is the number of stages in the network. For networks with only one stage such as DeiTs, \(N_{s}\) is the total number of layers. And for networks with several stages such as ResNets, \(N_{s}\) is the number
of layers in each stage \(s\). \(w_{ij}\) is the (optional) weight of the CKA measurement between the \(i^{th}\) and the \(j^{th}\) layer. \(X_{0}\) is the input representation and \(X_{i}\) is the output representation of the \(i^{th}\) layer.
The \(\mathcal{L}_{\mathcal{C}}\) term in Eq. (3) forcibly reduces the sum of the pairwise similarities of all layers in the network, _i.e._, it forcibly reduces the interlayer similarity of the network.
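A minimal sketch of the CKA-SR loss of Eq. (3), reusing the `linear_cka` helper above, might look as follows; the uniform weights \(w_{ij}=1\), the single-stage layout, and the value of \(\beta\) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cka_sr_loss(logits, targets, features, beta=1e-4):
    """Eq. (3): cross-entropy plus the pairwise interlayer CKA penalty.
    `features` is a list of per-layer feature maps, each of shape (n, ...)."""
    loss = F.cross_entropy(logits, targets)
    reg = 0.0
    for i in range(len(features)):
        for j in range(len(features)):
            if i != j:  # pairwise CKA over distinct layers, w_ij = 1
                reg = reg + linear_cka(features[i].flatten(1),
                                       features[j].flatten(1))
    return loss + beta * reg
```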
### Theoretical analysis
#### 3.2.1 Approximate sparsity.
To further explore the relationship between the Frobenius norm of the weight matrix and network sparsity, we extend sparsity to approximate sparsity. We define the \(\epsilon\)-sparsity (_i.e._, approximate sparsity) of a neural network as follows:
\[S_{\epsilon}=\frac{|\{w|w\in\mathbb{W}\wedge|w|<\epsilon\}|}{|\mathbb{W}|} \tag{4}\]
where \(\epsilon\) is a number close to zero, \(\mathbb{W}\) is the set consisting of all parameters of the network's weight matrices, \(|\mathbb{W}|\) is the total number of parameters, and \(\{w|w\in\mathbb{W}\wedge|w|<\epsilon\}\) is the set consisting of small parameters (_i.e._, parameters with an absolute value smaller than \(\epsilon\)).
In Eq. (4), \(S_{\epsilon}\) represents the proportion of network parameters that approach 0. We define this as \(\epsilon\)-sparsity of the network. Further, we prove that \(\epsilon\)-sparsity and sparsity (_i.e._, proportion of network parameters that equal 0) of neural networks are approximately equivalent in practice. Our theory is formulated as Theorem 3.1. See the detailed proof of Theorem 3.1 in the Appendix.
Theorem 3.1: _The \(\epsilon\)-sparsity and the sparsity of neural networks are approximately equivalent._
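The \(\epsilon\)-sparsity of Eq. (4) is straightforward to measure empirically; a minimal sketch, with the threshold \(\epsilon=10^{-3}\) chosen purely for illustration, is:

```python
import torch

def epsilon_sparsity(model, eps=1e-3):
    """Eq. (4): fraction of weight-matrix parameters with |w| < eps."""
    small, total = 0, 0
    for p in model.parameters():
        if p.dim() >= 2:  # restrict to weight matrices, per the definition
            small += (p.detach().abs() < eps).sum().item()
            total += p.numel()
    return small / total
```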
#### 3.2.2 Information bottleneck.
The information bottleneck (IB) theory proposed by Tishby _et al._[4] is an extension of the rate-distortion theory of source compression. This theory shows a trade-off between preserving relevant label information and obtaining efficient compression. Tishby _et al._[6] further study the relationship between information bottleneck theory and deep learning, interpreting the goal of deep learning as an information-theoretic trade-off between compression and prediction. According to the principles of information bottleneck theory, for a neural network \(Y=f(X)\) with input \(X\) and output \(Y\), the best intermediate representation \(\hat{X}\) captures the relevant features while ignoring the irrelevant features (features that contribute little to the prediction of \(Y\)). This process is called "compression". One of its minimization objectives is as follows:
\[L=I(X;\hat{X})-\alpha I(\hat{X};Y) \tag{5}\]
where \(I(X;\hat{X})\) is the mutual information between input \(X\) and intermediate representation \(\hat{X}\), \(I(\hat{X};Y)\) is the mutual information between intermediate representation \(\hat{X}\) and output \(Y\), and \(\alpha\) is a weight parameter for adjusting their proportions.
#### 3.2.3 Minimizing the mutual information.
Firstly, we prove that our CKA-SR is continuous and optimizable in Theorem 3.2, which makes it possible to minimize CKA-SR during training. See the detailed proof of Theorem 3.2 in the Appendix. Then we prove that minimizing CKA-SR minimizes the mutual information \(R=I(X;\hat{X})\) between the intermediate and input representations. Besides, the \(\alpha I(\hat{X};Y)\) part of Eq. (5) is implicitly optimized through the cross-entropy loss \(\mathcal{L}_{\mathcal{E}}\). Thus, we prove that our method minimizes the optimization objective in Eq. (5), _i.e._, our CKA-SR method conforms to the principles of information bottleneck theory and benefits the representation compression process. This is formulated as Theorem 3.3.
Theorem 3.2: \(\mathcal{L}_{\mathcal{C}}\) _is continuous and optimizable._
Theorem 3.3: _Minimizing \(\mathcal{L}_{\mathcal{C}}\) minimizes the mutual information \(R=I(X;\hat{X})\) between intermediate representation \(\hat{X}\) and input representation \(X\)._
To prove Theorem 3.3, we first review Lemma 1 and Lemma 2 from [25] as follows. Following [25], we assume that \(X\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}_{X})\) and \(Y\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}_{Y})\), _i.e._, feature maps \(X\) and \(Y\) follow Gaussian distributions.
Lemma 1: _Minimizing the distance between \(X^{T}Y\) and zero matrix is equivalent to minimizing the mutual information \(I(X;Y)\) between representation \(X\) and \(Y\)._
Lemma 2: _Minimizing \(\mathbf{CKA}_{Linear}(X,Y)\) is equivalent to minimizing \(I(X;Y)\)._
These two lemmas illustrate the relationship between the CKA similarity measurement and information theory: _minimizing the CKA similarity between two feature representations is equivalent to minimizing the mutual information between them_. Based on these two lemmas, we prove Theorem 3.3. See the detailed proofs of the two lemmas and Theorem 3.3 in the Appendix.
Theorem 3.3 connects CKA-SR with information bottleneck theory. In short, _minimizing CKA-SR is equivalent to minimizing the optimization objective \(I(X;\hat{X})\) of information bottleneck theory_.
#### 3.2.4 Increasing the sparsity of neural networks.
Further, starting from the information bottleneck theory, we prove that CKA-SR increases network sparsity, formulated as Theorem 3.4.
Theorem 3.4: _Minimizing \(R=I(X;\hat{X})\Leftrightarrow\) Minimizing \(||W||_{F}^{2}\Leftrightarrow\) Increasing the approximate sparsity of the network \(\Leftrightarrow\) Increasing network sparsity._
Proof: According to Theorem 3.3, CKA-SR minimizes \(R=I(X;\hat{X})\) for any \(X\). Combining this with Lemma 1, for any \(X\), CKA-SR minimizes the distance between \(X^{T}\hat{X}\) and the \(0\) matrix. For a fully-connected layer, we have \(\hat{X}=W^{T}X+b\). Hence, for any \(X\), CKA-SR minimizes the distance between \(X^{T}(W^{T}X+b)=X^{T}W^{T}X+X^{T}b\) and the \(0\) matrix. We take an orthogonal \(X\). Due to the unitary invariance (_i.e._, orthogonal invariance in the real number field) of the Frobenius norm, \(||W||_{F}^{2}\) equals \(||X^{T}W^{T}X||_{F}^{2}\). Therefore, minimizing the distance between \(X^{T}W^{T}X+X^{T}b\) and the \(0\) matrix is equivalent to minimizing \(||X^{T}W^{T}X||_{F}^{2}\), and further equivalent to minimizing \(||W||_{F}^{2}\).
The above minimization of \(||W||_{F}^{2}\) minimizes the norm of parameter values in weight matrix \(W\), thus making the values more concentrated around \(0\) value. This increases the network's approximate sparsity (defined earlier in this article). Further, according to Theorem 3.1, the approximate sparsity and sparsity are approximately equivalent. So we prove that the above minimization of \(||W||_{F}^{2}\) increases the network sparsity.
Theorem 3.4 connects the optimization objective of information bottleneck theory with network sparsity, thus connecting CKA-SR with network sparsity. In short, _CKA-SR models are more sparse_. We validate this conclusion with our experimental results: Fig. 1(c) compares the parameter distributions of CKA-SR and baseline models, and it is evident that the absolute values of the CKA-SR network parameters are more concentrated around \(0\).
## 4 Experiments
### Implementations
#### 4.1.1 Datasets and backbone models.
We validate the effectiveness of our CKA-SR method on image classification, network pruning, and advanced sparse training. We use ResNet18, ResNet20, ResNet32 and ResNet50 [20] as backbones to conduct extensive experiments on CIFAR-10, CIFAR-100 and ImageNet datasets.
#### 4.1.2 Implementations.
We implement our CKA-SR as a regularization of the loss function. We develop a plug-and-play CKA-SR class in PyTorch and plug it into various pre-training and sparse training codes. Because CKA-SR regularizes layerwise parameters rather than the full feature maps, we can utilize few-shot samples from each batch (_generally 8 samples when the batch size is 128 or 256_) to compute CKA-SR. This reduces the computational complexity and thus the training expense. Precisely, we strictly follow the experimental settings of the pruning [2, 56, 3] and sparse training methods [38, 33, 1, 40] and make fair comparisons with them using CKA-SR. The total number of epochs, batch size, optimizer, weight decay, and learning rates all stay the same as in the methods being compared.
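A self-contained sketch of one such training step, with the few-shot CKA computation on 8 of 128 samples, could look as follows; the toy network, layer choices, and \(\beta\) are illustrative assumptions, and `linear_cka` / `cka_sr_loss` are the helpers sketched earlier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy stand-in for one stage of a backbone; the intermediate
    features h1, h2 feed the CKA-SR term."""
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(32, 64)
        self.l2 = nn.Linear(64, 64)
        self.l3 = nn.Linear(64, 10)

    def forward(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2), [h1, h2]

model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))
logits, feats = model(x)
few = slice(0, 8)  # few-shot subset of the batch for the CKA term
loss = cka_sr_loss(logits, y, [f[few] for f in feats], beta=1e-4)
opt.zero_grad()
loss.backward()
opt.step()
```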
### Pre-Training with CKA-SR
As previously proved, our CKA-SR increases network sparsity. So we validate the performance of CKA-SR in network pruning tasks. We directly prune models pre-trained with CKA-SR on large-scale datasets such as ImageNet. We carry out experiments on several pruning methods and find that our method is effective. As shown in Figure 2, at the same pruning ratio, CKA-SR models outperform baseline models.
#### 4.2.1 Structured pruning.
Following the setting of [3], we perform filter pruning on models pre-trained with CKA-SR without finetuning. Specifically, we prune the filter according to the L1-Norm. The relationship between the pruning ratio and performance is shown in Figure 2(a). When a few filters are pruned, the performance reduction of CKA-SR models is significantly smaller than that of baseline models.
We also perform Knapsack channel pruning [2], a state-of-the-art channel pruning method, on models pre-trained with CKA-SR and achieve higher classification accuracy. The results of Knapsack pruning (w/o finetuning) are shown in Figure 2(b). When a few channels are pruned, the performance reduction of CKA-SR models is much smaller than that of baseline models, which means CKA-SR models possess much higher sparsity.
#### 4.2.2 Non-structured pruning.
We perform non-structured weight-level pruning [56] according to the absolute values of individual weights and compare the performance between baseline ResNet models and pre-trained ResNets with CKA-SR. The relationship between pruning ratio and performance is shown in Figure 2(c). It could be concluded that when massive weights are pruned, the performance reduction of CKA-SR models is smaller than that of baseline models.
Figure 2: Performances of several pruning methods with CKA-SR. The red lines represent CKA-SR models and the blue lines represent baseline models. (a) Performances of L1-norm filter pruning with ResNet18 on ImageNet. (b) Performances of knapsack channel pruning with ResNet50 on ImageNet. (c) Performances of non-structured weight-level pruning with ResNet50 on ImageNet.
Generally, pre-trained models with CKA-SR outperform baseline models in both structured and non-structured pruning methods.
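As a rough illustration of the non-structured criterion used above, the following sketch zeroes a global fraction of the smallest-magnitude weights; the global threshold and the restriction to matrix-shaped parameters are simplifying assumptions, not the exact pipeline of [56].

```python
import torch

def magnitude_prune(model, ratio=0.9):
    """Zero out the `ratio` fraction of weights with the smallest
    absolute values, pooled globally across all weight matrices."""
    mags = torch.cat([p.detach().abs().flatten()
                      for p in model.parameters() if p.dim() >= 2])
    k = max(1, int(ratio * mags.numel()))
    threshold = mags.kthvalue(k).values
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() >= 2:
                p.mul_((p.abs() > threshold).float())
```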
### Sparse network training with CKA-SR
We conduct extensive experiments on several State-Of-The-Art sparse training methods. For fair comparisons, our experiments follow the same settings and backbones of these methods [38, 33, 1, 40]. Note that we conduct experiments on extremely high sparsity (such as 99.8%) settings in GraSP [40], Random sparse training [1], and DLTH [38]. From Table 1, we can find that CKA-SR consistently improves the performance at different levels of sparsity ratios in LTH [33], GraSP [40], Random sparse training [1], and DLTH [38].
#### 4.3.1 LTH.
Lottery Ticket Hypothesis (LTH) [33] is proposed to train a sparse network from scratch, which states that any randomly initialized dense network contains sub-networks achieving similar accuracy to the original network. We plug our CKA-SR into the training process of LTH. We use the code implemented for LTH by [39], adopt ResNet32 as the backbone, and apply sparsity ratios from 0.70 to 0.98 for fair comparisons. The results are given in the first row of Table 1.
#### 4.3.2 GraSP.
Gradient Signal Preservation (GraSP) [40] proposes to preserve the gradient flow through the network during sparse training. We plug our CKA-SR into the sparse training process of GraSP, adopt ResNet32 as the backbone, and apply sparsity ratios from 0.70 to 0.998. The results are given in the second row of Table 1.
#### 4.3.3 Random sparse training.
Random sparse training [1], one of the newest state-of-the-art sparse training methods, has shown that sparse training of randomly initialized networks can achieve remarkable performance. We plug our CKA-SR into the random sparse training process, adopt ResNet20 as the backbone, and apply sparsity ratios from 0.70 to 0.998. The results are given in the third row of Table 1.
#### 4.3.4 DLTH.
As one of the newest and State-Of-The-Art LTH-based sparse training methods, Dual Lottery Ticket Hypothesis (DLTH) [38] proposes to randomly select subnetworks from a randomly initialized dense network, which can be transformed into a trainable condition and achieve good performance. We apply our CKA-SR to the training process of the DLTH method, adopt ResNet20 as the backbone, and apply sparsity ratios from 0.70 to 0.998. The results are given in the final row of Table 1.
As shown in Table 1, our CKA-SR can be plugged into multiple sparse training methods and consistently improves model performance. CKA-SR is effective across different sparse networks, especially at extremely high
sparsity. For GraSP, CKA-SR achieves more than 4.0% performance improvement at 99.5% sparsity and 6.0% at 99.8% sparsity.
### Ablation studies
#### 4.4.1 Ablation study of regularization term.
Savarese _et al._[34] develop a regularization-based sparse network search method named Continuous Sparsification (CS), which introduces \(L_{0}\) regularization into sparse training. We compare our CKA-SR with \(L_{0}\) regularization theoretically and experimentally. Theoretically, CKA-SR and \(L_{0}\) regularization regularize networks at different granularity levels: \(L_{0}\) regularization acts at the individual parameter level, while CKA-SR acts at the layer level. Regularizations at different granularity levels can work together. Experimentally, we conduct sparse training experiments with ResNet18 on CIFAR-10 using the official code of the CS method. We find that our CKA-SR is able to replace \(L_{0}\) regularization and achieves better performance. Besides, combining CKA-SR and \(L_{0}\) improves performance by 0.4%, demonstrating that our CKA-SR can cooperate with other regularizations. The results are shown in Table 2.
#### 4.4.2 Ablation study of hyperparameter \(\beta\).
We conduct the ablation study of the hyperparameter \(\beta\) with the Random Sparse Training [1] method on the CIFAR-10 dataset. Taking the ResNet20 model at a sparsity of 0.95 and adjusting the weight hyperparameter \(\beta\) of our CKA-SR, we obtain the results shown in Table 3.
| Backbone | Method | 0.70 | 0.85 | 0.90 | 0.95 | 0.98 | 0.998 |
|---|---|---|---|---|---|---|---|
| ResNet32 | LTH [33] | 72.28 | 70.64 | 69.63 | 66.48 | 60.22 | ✗ |
| ResNet32 | +CKA-SR | **72.67** | **71.90** | **70.11** | **67.07** | **60.36** | ✗ |
| ResNet32 | GraSP [40] | 71.98 | 70.22 | 69.19 | 65.82 | 59.46 | 12.19 |
| ResNet32 | +CKA-SR | **72.19** | **70.25** | **69.28** | **66.29** | **59.49** | **18.44** |
| ResNet20 | Random [1] | 65.42 | 60.37 | 56.96 | 47.27 | 33.74 | 2.95 |
| ResNet20 | +CKA-SR | **65.60** | **60.86** | **57.25** | **48.26** | **34.44** | **3.32** |
| ResNet20 | DLTH [38] | 67.63 | 65.33 | 62.90 | 57.33 | 48.08 | 19.32 |
| ResNet20 | +CKA-SR | **67.95** | **65.80** | **63.19** | **57.99** | **49.26** | **20.81** |

Table 1: The accuracy (%) when plugging CKA-SR into different sparse training methods on CIFAR-100 from scratch; column headers give the sparsity ratio. (LTH breaks when the sparsity ratio is larger than 0.99 due to destruction of the structure.)
| Settings | CKA-SR Only | \(L_{0}\) Only | CKA-SR + \(L_{0}\) |
|---|---|---|---|
| Top1-Acc | 91.63 | 91.56 | 91.92 |

Table 2: Ablation study of regularization terms.
We conclude that multiple values of the hyperparameter \(\beta\) between 1e-05 and 1e-03 increase the performance of sparse networks. However, when \(\beta\) becomes too large, it weakens the flow of information through layers, causing a reduction in performance. That is to say, there is a trade-off between the identity of layers and the flow of information through them; from the sparsity viewpoint, there is a trade-off between high sparsity and ideal performance.
## 5 Conclusion
Our work reveals the relationship between overparameterization, network sparsity, and interlayer feature similarity. We thus propose to use the robust and well-established CKA similarity measurement to address the overparameterization issue. Specifically, we propose a plug-and-play sparsity regularization named CKA-SR which explicitly reduces interlayer similarity. Theoretically, we reveal the equivalence of reducing interlayer similarity and increasing network sparsity, thus proving that CKA-SR increases network sparsity. Experimentally, our CKA-SR consistently improves the performance of several state-of-the-art sparse training methods and several pruning methods, and outperforms former regularization methods. In the future, given the cost of manually selecting hyperparameters and computing the loss, we will continue to investigate the cooperation of multiple regularizations in sparse training and work to reduce the expense of sparse training.
|
2307.06097 | Learning Stochastic Dynamical Systems as an Implicit Regularization with
Graph Neural Networks | Stochastic Gumbel graph networks are proposed to learn high-dimensional time
series, where the observed dimensions are often spatially correlated. To that
end, the observed randomness and spatial-correlations are captured by learning
the drift and diffusion terms of the stochastic differential equation with a
Gumbel matrix embedding, respectively. In particular, this novel framework
enables us to investigate the implicit regularization effect of the noise terms
in S-GGNs. We provide a theoretical guarantee for the proposed S-GGNs by
deriving the difference between the two corresponding loss functions in a small
neighborhood of weight. Then, we employ Kuramoto's model to generate data for
comparing the spectral density from the Hessian Matrix of the two loss
functions. Experimental results on real-world data, demonstrate that S-GGNs
exhibit superior convergence, robustness, and generalization, compared with
state-of-the-arts. | Jin Guo, Ting Gao, Yufu Lan, Peng Zhang, Sikun Yang, Jinqiao Duan | 2023-07-12T11:38:34Z | http://arxiv.org/abs/2307.06097v1 | # Learning Stochastic Dynamical Systems as an Implicit Regularization with Graph Neural Networks
###### Abstract
Stochastic Gumbel graph networks are proposed to learn high-dimensional time series, where the observed dimensions are often spatially correlated. To that end, the observed randomness and spatial correlations are captured by learning the drift and diffusion terms of the stochastic differential equation with a Gumbel matrix embedding, respectively. In particular, this novel framework enables us to investigate the implicit regularization effect of the noise terms in S-GGNs. We provide a theoretical guarantee for the proposed S-GGNs by deriving the difference between the two corresponding loss functions in a small neighborhood of the weights. Then, we employ Kuramoto's model to generate data for comparing the spectral density from the Hessian matrix of the two loss functions. Experimental results on real-world data demonstrate that S-GGNs exhibit superior convergence, robustness, and generalization, compared with state-of-the-art methods.
## 1 Introduction
Multivariate time series enable us to thoroughly investigate the statistical patterns among the observed dimensions, and to make predictions. Over the last decades, a number of works have been dedicated to modeling time series data arising in applications including finance, biology, and others. Real-world time series data often exhibit a certain amount of correlation or causal relationships between the observed dimensions. For instance, the prices of many financial derivatives can be simultaneously influenced by common market signals and interventions. The dynamics of electroencephalogram (EEG) brain signals implicitly reflect a latent graph structure [1]. Hence, it is crucial to exploit the graph structure when modelling multivariate time series within a dynamical system.
Graph neural networks (GNNs) [2] aim to extract both local and global features by leveraging the available graph structure using neural networks. Canonical GNNs, including graph convolutional networks (GCN) [3] and graph attention networks (GAT) [4], have demonstrated a strong capacity for capturing graph-structured data. Recently, there has been growing interest in predicting time series with graph neural networks that capture the underlying graph structure. For instance, temporal graph convolutional neural networks (T-GCN) [5] can integrate temporal information with the available graph structure. Nonetheless, the graph information, which may characterize the correlations or causal relations between observed dimensions, is either not immediately available or contains noise.
Stochastic dynamical systems, as a mathematical tool [6] for describing physical systems evolving over time, have become increasingly popular for modeling various real-world complex phenomena. Stochastic dynamical systems [7] have been receiving increasing attention specifically in the machine learning domain [8, 9], because of their mathematical rigor and strong modelling flexibility. Besides, they form a bridge connecting real data with deep learning algorithms [10, 11]. On the one hand, many kinds of neural networks have corresponding continuous versions described by differential equations, which helps in building convergence-guaranteed neural networks such as [12, 13, 14, 15]. On the other hand, mathematical analysis from a dynamical systems point of view can also provide insights into loss landscapes and critical points [16]. Furthermore, investigations of the Edge of Stability (EOS) phenomenon are also promising research directions [17, 18].
Moreover, the Gumbel graph neural network (GGN) [19] was advanced to recover the graph structure underlying time series within a dynamical system. By effectively leveraging the graph structure, the GGN achieves better accuracy in predicting high-dimensional time series. Nevertheless, the GGN suffers from limited generalization capability and excessive smoothness stemming from the inherent properties of graph neural networks. To address these concerns, this paper proposes the stochastic Gumbel graph network (S-GGN) model, which introduces a diffusion term. The main contributions of this paper can be summarized as follows:
* A stochastic Gumbel graph network (S-GGN) model is proposed to improve the model robustness and generalization ability in capturing high-dimensional time series, by introducing the noise term. In particular, we thoroughly study the convergence of the S-GGN model with theoretical analysis (Sec.2).
* A grouped-convolution S-GGN structure is advanced to capture _noisy_ graph-structured time series. Using convolution operations, the model can effectively reconstruct the dynamics by leveraging external node features to remove noise effects.
* Experiments on real-world problems [20], demonstrate the superior generalization capability of the S-GGN model without compromising its accuracy, compared with GCNs (Sec.3).
## 2 The S-GGN Frameworks
### S-GGN model
The GGN, consisting of a network generator and a dynamics learner, recovers the underlying dynamical systems from observations such as high-dimensional time series data. The network generator within GGN utilizes the reparameterization technique known as the Gumbel-softmax trick [21], whereby the graph is sampled based on probabilities. The application of the Gumbel-softmax trick allows the GGN to directly apply the backpropagation algorithm to calculate the gradient and optimize the network. Based on the connection between the GGN model and the discrete representation of the dynamical system, we extend its applicability to a formulation that aligns with the discrete form of the stochastic dynamical system, called Stochastic Gumbel Graph Neural Network (S-GGN). Thus, the dynamic learner of S-GGN can be represented as
\[X^{t+1}_{predict}=X^{t}+f(X^{t},A)\Delta t+\sigma(X^{t},A)\xi_{t}\sqrt{\Delta t} \tag{1}\]
where \(X^{t}\) denotes the state vector of all \(N\) nodes at moment \(t\) and \(A\) is the symmetric adjacency matrix constructed by the network generator. Here \(\xi_{t}\sim\mathcal{N}(0,I)\) is an independent standard normal random vector.
The graph neural network module within S-GGN can be depicted as a composition of the following mappings:
\[\begin{split}& H^{t}_{e_{1}}=f_{v\to e}(X^{t}\otimes(X^{t})^{T}), \\ & H^{t}_{e_{2}}=f_{e}(H^{t}_{e_{1}}),\\ & H^{t+1}_{v_{1}}=f_{e\to v}(A*H^{t}_{e_{2}}),\\ & H^{t+1}_{v_{2}}=f_{v}(H^{t+1}_{v_{1}}),\end{split} \tag{2}\]
where \(X^{t}\in\mathbf{R}^{N\times d_{x}}\) denotes the features of the \(N\) nodes, each node having feature dimension \(d_{x}\). Let \(A\) denote the adjacency matrix that describes the relationships between the nodes. Each of \(f_{v\to e},f_{e},f_{e\to v},f_{v}\) consists of a linear layer and an activation function. The operation \(\otimes\) signifies pairwise concatenation, and the symbol \(*\) denotes element-wise multiplication. The composition of these four mappings in (2) corresponds to the function \(f\); the network \(\sigma\) in equation (1) has the same structure.
Next, we train the two neural networks for the functions \(f\) and \(\sigma\) in (1), denoted as \(f_{NN}\) and \(\sigma_{NN}\) respectively. Their network structures are the same, but with different parameter values.
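For intuition, one Euler–Maruyama step of the dynamics learner in Eq. (1) can be sketched as below; `f_nn` and `sigma_nn` stand for the drift and diffusion GNN modules of Eq. (2), and the elementwise form of the noise coupling is an illustrative assumption.

```python
import torch

def sggn_step(x, A, f_nn, sigma_nn, dt=0.01):
    """One Euler-Maruyama step of Eq. (1):
    X_{t+1} = X_t + f(X_t, A) dt + sigma(X_t, A) * xi * sqrt(dt)."""
    drift = f_nn(x, A)
    diffusion = sigma_nn(x, A)
    xi = torch.randn_like(diffusion)  # xi_t ~ N(0, I)
    return x + drift * dt + diffusion * xi * dt ** 0.5
```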
### Spectral Analysis
We denote the parameters of the GGN network at the \(t\)-th iteration as \(\mathbf{\omega}_{t}\). Consider the division \(0:=t_{0}<t_{1}<\cdots<t_{M}:=T\) of \([0,T]\), and define \(\delta_{m}:=t_{m+1}-t_{m}\). The discretization of the parameters' evolution in the GGN network can be expressed as
\[\mathbf{\omega}_{t+1}=\mathbf{\omega}_{t}+f_{NN}(\mathbf{\omega}_{t},X^{t})\delta_{t},\]
where \(f_{NN}:\mathbb{R}^{d_{\mathbf{\omega}}}\times\mathbb{R}^{d_{x}}\to\mathbb{R}^{d_{\mathbf{\omega}}}\). After introducing noise \(\varepsilon\) to our networks, we consider a rescaling of the noise \(\sigma_{NN}\mapsto\varepsilon\sigma_{NN}\); then the following discretized stochastic differential equation (SDE) holds
\[\mathbf{\omega}_{t+1}^{\varepsilon}=\mathbf{\omega}_{t}^{\varepsilon}+f_{NN}(\mathbf{ \omega}_{t}^{\varepsilon},X_{t})\delta_{t}+\varepsilon\sigma_{NN}(\mathbf{\omega} _{t}^{\varepsilon},X^{t})\xi_{t}\sqrt{\delta_{t}}\,, \tag{3}\]
where \(\sigma_{NN}:\mathbb{R}^{d_{\mathbf{\omega}}}\times\mathbb{R}^{d_{x}}\to\mathbb{R}^{d_{\mathbf{\omega}}\times r}\), and \(\xi_{t}\sim\mathcal{N}(0,I)\) is an independent \(r\)-dimensional standard normal random vector. This is the evolution of the parameters in our S-GGN network. Besides, we have \(\mathbf{\omega}_{0}^{\varepsilon}=\mathbf{\omega}_{0}\).
Then we give some conditions on the drift and diffusion terms. To enhance readability, we denote the network functions \(f_{NN}\) and \(\sigma_{NN}\) as \(f\) and \(\sigma\), respectively.
**Assumption 2.1**: _We assume that the drift term \(f\) and diffusion term \(\sigma\) satisfy_
_(i) For all \(t\in[0,T]\) and \(X\in\mathbb{R}^{d_{x}}\), the maps \(\mathbf{\omega}\mapsto f(\mathbf{\omega},X)\) and \(\mathbf{\omega}\mapsto\sigma(\mathbf{\omega},X)\) have Lipschitz continuous partial derivatives in each coordinate up to order three (inclusive)._
_(ii) For any \(\mathbf{\omega}\in\mathbb{R}^{d_{\mathbf{\omega}}},t\mapsto f(\mathbf{\omega},X)\) and \(t\mapsto\sigma(\mathbf{\omega},X)\) are bounded and Borel measurable on \([0,T]\)._
Under the above assumption, we define the difference between the losses of the S-GGN and GGN networks as
\[\mathbf{\mathcal{D}(\mathbf{\omega})}:=\mathbb{E}[l_{S-GGN}(\mathbf{\omega}_{M}^{ \varepsilon})-l_{GGN}(\mathbf{\omega}_{M})], \tag{4}\]
where \(l\) denotes the loss function.
**Proposition 2.2**: _(Comparison of the noise induced loss and the deterministic loss) Under Assumption 2.1, the following holds_
\[\mathbf{\mathcal{D}(\mathbf{\omega})}=\frac{\varepsilon^{2}}{2}[\hat{R}(\mathbf{\omega})- \hat{S}(\mathbf{\omega})]+\mathcal{O}(\varepsilon^{3}), \tag{5}\]
_as \(\varepsilon\to 0\), where the \(\hat{R}\) and \(\hat{S}\) represent_
\[\hat{R}(\mathbf{\omega}) =(\nabla l(\mathbf{\omega}_{M}))^{T}\sum_{k=1}^{M}\delta_{k-1}\hat{ \Phi}_{M-1,k}\sum_{m=1}^{M}\delta_{m-1}\mathbf{v}_{m},\] \[\hat{S}(\mathbf{\omega}) =\sum_{m=1}^{M}\delta_{m-1}\text{tr}(\sigma_{m-1}^{T}\hat{\Phi}_{ M-1,m}^{T}H_{\mathbf{\omega}_{M}l}\hat{\Phi}_{M-1,k}\sigma_{m-1}),\]
_with \(\hat{\Phi}_{m,k}:=\hat{J}_{m}\hat{J}_{m-1}\cdots\hat{J}_{k}\), the state-to-state Jacobians \(\hat{J}_{m}=I+\delta_{m}\frac{\partial f}{\partial\mathbf{\omega}}(\mathbf{\omega}_{ m},X_{m})\) and the \(\mathbf{v}_{m}\) is a vector with the \(p\)-th component (\(p=1,\cdots,d_{\mathbf{\omega}}\)):_
\[[\mathbf{v}_{m}]^{p}=\text{tr}(\sigma_{m-1}^{T}\hat{\Phi}_{M-2,m}^{T}H_{\mathbf{ \omega}}[f_{M}]^{p}\hat{\Phi}_{M-2,m}\sigma_{m-1}).\]
_Moreover, we have_
\[|\hat{R}(\mathbf{\omega})|\leq C_{R}\Delta^{2},|\hat{S}(\mathbf{\omega})|\leq C_{S}\Delta \tag{6}\]
_for \(C_{R},C_{S}>0\) independent of \(\Delta\), where \(\Delta:=\text{max}_{m\in\{0,1,\cdots,M-1\}}\delta_{m}\)._
**Proof.** We follow a proof similar to [22]. First, we apply a Taylor expansion to \(\omega_{t}\), the drift, and the diffusion coefficients in a small neighbourhood of \(\omega_{0}\). Using the Itô formula and comparing the corresponding terms in \(\varepsilon\) on the two sides of equation (3), we obtain the result in (5). Next, in conjunction with Lemmas 1, 2 and 3 in [22], it can be demonstrated that (6) holds. \(\blacksquare\)
**Remark 2.3**: _Proposition 2.2 indicates that introducing noise to the state of a deterministic graph can be considered, on average, as an approximation of a regularized objective functional._
## 3 Experiments
We conduct two experiments to verify the performance of our S-GGN.
### Kuramoto Model
The Kuramoto model is a nonlinear model describing the interaction and synchronization of oscillator groups:
\[\frac{d\theta_{i}}{dt}=\omega_{i}+K\sum_{j\neq i}A_{ij}sin(\theta_{j}-\theta_{i}),i=1,2,...,N,\]
where \(\theta_{i}\) denotes the phase of the \(i\)-th oscillator, \(\omega_{i}\) denotes its natural frequency, \(N\) denotes the number of oscillators, and \(K\) denotes the coupling strength, which measures the strength of the interaction between the oscillators. Here \(A_{ij}\in\{0,1\}\) are the elements of the \(N\times N\) adjacency matrix. The model takes into account the phase differences of the oscillators and the interactions between them to explain the synchronization phenomenon.
Here, we initialize an initial phase and a natural frequency for each oscillator, and then compute the oscillator phases iteratively according to the above formula.
* Data preparation: the numerical solution of the Kuramoto model is obtained by the fourth-order Runge-Kutta method (see the sketch after this list).
* Data pre-processing: the sine of the phase value and the frequency at the corresponding time are taken as the features of each node at that time. The window length is set to 20.
* Experiment settings: the network generator and the dynamics learner are optimized with 3 and 7 iteration steps, respectively.
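A minimal sketch of the fourth-order Runge-Kutta data generation for the Kuramoto model might look as follows; the step size, coupling, and initialization are left to the caller and are not the paper's exact settings.

```python
import numpy as np

def kuramoto_rk4(theta0, omega, A, K, dt, steps):
    """Integrate d(theta_i)/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)
    with the classical 4th-order Runge-Kutta scheme."""
    def f(theta):
        diff = theta[None, :] - theta[:, None]  # diff[i, j] = theta_j - theta_i
        return omega + K * (A * np.sin(diff)).sum(axis=1)
    traj, theta = [theta0], theta0
    for _ in range(steps):
        k1 = f(theta)
        k2 = f(theta + 0.5 * dt * k1)
        k3 = f(theta + 0.5 * dt * k2)
        k4 = f(theta + dt * k3)
        theta = theta + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(theta)
    return np.stack(traj)  # node features: np.sin(traj) and the frequencies
```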
For both models, the Hessian matrix of the empirical loss with respect to the weights is computed every 10 epochs, and the corresponding eigenvalues over epochs are shown in Figure 1. We observe that S-GGN has a smaller largest eigenvalue, which indicates that S-GGN can find flatter optimal weights, explaining its better performance from a sharpness-awareness point of view. Figure 2 shows the
Figure 1: Eigenvalues of the Hessian matrix for GGN (left) and S-GGN (right).
Figure 2: The spectral distribution for GGN (left) and S-GGN (right).
distribution of eigenvalues of the two models after the first, 50-th and 100-th epochs. We can see that the eigenvalues of S-GGN are much more concentrated than those of GGN, suggesting different convergence behavior of the two models.
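For a small parameter vector, the Hessian spectrum used in this comparison can be obtained directly with PyTorch's autograd; the toy quadratic loss below is purely illustrative.

```python
import torch
from torch.autograd.functional import hessian

w = torch.randn(20)            # small weight vector for illustration
x = torch.randn(64, 20)
y = torch.randn(64)

def loss_fn(w):
    return ((x @ w - y) ** 2).mean()

H = hessian(loss_fn, w)                # (20, 20) Hessian of the loss
eigvals = torch.linalg.eigvalsh(H)     # spectrum of the symmetric Hessian
print(eigvals.max())                   # largest eigenvalue: sharpness proxy
```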
### Wireless communication data
In this experiment, the data is obtained from channel measurements in real-world scenarios. Consider a typical line-of-sight (LOS) scenario with a 24-millisecond spacing between points, and an 8x1 uniform linear array (ULA) at the transmitter end.
* Data standardization: the wireless communication signal data from each base station is decomposed into real and imaginary parts for normalisation.
* Data preparation: time window is chosen as 72 points. Within each time window, features are extracted using group convolution applied to data with a window length of 36.
* Experimental settings: The Adam optimizer is selected to optimize both the network generator and the dynamic learner, with 3 and 12 iteration steps respectively.
Due to the highly noisy, non-linear and non-smooth nature of wireless communication data, direct modelling of the raw signal is challenging. Convolutional neural networks offer significant advantages in feature extraction, automatically learning local features and retaining spatial structure information. We take a rolling prediction approach wherein a window of length \(W\) is employed to forecast the sequence from \(W+1\) to \(W+p\). Notably, the actual data from \(W+1\) to \(W+p\) is not fed to the model during this process. The procedure is illustrated in Figure 3.
The S-GGN model is employed to predict the wireless communication data. Figure 4 presents the mean squared error and mean absolute error of both the S-GGN and GGN models on the test set. Notably, the S-GGN model demonstrates a smaller error and exhibits superior generalization performance compared to the GGN model.
Figure 4: The MSE (left) and MAE (right) for GGN and S-GGN in test set.
Figure 3: The process of S-GGN Prediction.
Figure 5: Predictions for 8 base stations signals.
Figure 5 exhibits the comparative prediction outcomes of the GGN and S-GGN models on a sample from the test set. Both the real and imaginary components of the signals originating from the eight node base stations are plotted. The solid blue line represents the actual data, the solid yellow line corresponds to the predictions of the S-GGN model, and the solid green line represents the predictions of the GGN model. We can see that the S-GGN outperforms the GGN, with predictions closer to the true data.
## 4 Conclusion
Considering the complexity of the noise and the underlying dynamics of the data, we bring in stochastic dynamical systems as a tool to address this problem. First, by comparing the losses of the S-GGN and the GGN, we see that the multiplicative noise term can be treated as a regularization term for perturbations in a small neighborhood of the neural network's weights. As a result, the S-GGN model achieves better generalization capability and robustness on noisy data. Second, we explore the spectral density over iteration steps of the Hessian eigenvalues of the empirical loss w.r.t. the weights. We conduct experiments on data from the Kuramoto model to verify the effectiveness of the S-GGN. Finally, in real-world applications such as wireless communication data, we introduce group convolution techniques in our data preprocessing, which helps us obtain better long-term prediction results. Despite this, there are still problems we would like to address in future work, such as loss analysis from a sharpness-awareness point of view and further applications to complex spatial-temporal data such as EEG signals in the brain, financial data, molecular dynamics, and climate forecasting.
## Acknowledgments
This work was supported by the National Key Research and Development Program of China (No. 2021ZD0201300), the National Natural Science Foundation of China (No. 12141107), the Fundamental Research Funds for the Central Universities (5003011053).
## Data availability
The data that support the findings of this study are available in GitHub at [https://github.com/xiaolangege/sggn](https://github.com/xiaolangege/sggn).
|
2301.09994 | Optical convolutional neural network with atomic nonlinearity | Due to their high degree of parallelism, fast processing speeds and low power
consumption, analog optical functional elements offer interesting routes for
realizing neuromorphic computer hardware. For instance, convolutional neural
networks lend themselves to analog optical implementations by exploiting the
Fourier-transform characteristics of suitable designed optical setups. However,
the efficient implementation of optical nonlinearities for such neural networks
still represents a challenge. In this work, we report on the realization and
characterization of a three-layer optical convolutional neural network where
the linear part is based on a 4f-imaging system and the optical nonlinearity is
realized via the absorption profile of a cesium atomic vapor cell. This system
classifies the handwritten digital dataset MNIST with 83.96% accuracy, which
agrees well with corresponding simulations. Our results thus demonstrate the
viability of utilizing atomic nonlinearities in neural network architectures
with low power consumption. | Mingwei Yang, Elizabeth Robertson, Luisa Esguerra, Kurt Busch, Janik Wolters | 2023-01-24T13:44:27Z | http://arxiv.org/abs/2301.09994v1 | # Optical convolutional neural network with atomic nonlinearity
###### Abstract
Due to their high degree of parallelism, fast processing speeds and low power consumption, analog optical functional elements offer interesting routes for realizing neuromorphic computer hardware. For instance, convolutional neural networks lend themselves to analog optical implementations by exploiting the Fourier-transform characteristics of suitable designed optical setups. However, the efficient implementation of optical nonlinearities for such neural networks still represents challenges. In this work, we report on the realization and characterization of a three-layer optical convolutional neural network where the linear part is based on a 4f-imaging system and the optical nonlinearity is realized via the absorption profile of a cesium atomic vapor cell. This system classifies the handwritten digital dataset MNIST with 83.96% accuracy, which agrees well with corresponding simulations. Our results thus demonstrate the viability of utilizing atomic nonlinearities in neural network architectures with low power consumption.
oe 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
## 1 Introduction
In recent years, convolutional neural networks (CNNs) have established themselves as a key method in computer vision tasks, with applications that range from fundamental studies in condensed-matter physics [1] and particle physics [2] all the way to autonomous driving [3] and cancer detection [4]. With the rising popularity of CNNs, significant concerns regarding their energy consumption relative to simpler network architectures have emerged. Specifically, about \(\sim\) 80% of the inference time required by CNNs is spent on the convolutions [5], so energy-efficient computing paradigms for computing convolutions have become an active field of research. Due to their inherent parallelism, potential for GHz modulation speeds and low energy consumption (when using only passive elements), free-space-optics implementations have been identified as an attractive possibility for analog computation of convolutions [6, 7]. In fact, within the broader context of artificial neural networks, linear-optics implementations have been demonstrated based on diffractive materials [8], spatial light modulators (SLMs) [9, 10, 11, 12, 13], ring resonators [14], arrays of Mach-Zehnder interferometers [15, 16] and wavelength-division multiplexing techniques [17]. For further information, we refer to recent reviews [18, 19, 20] and the convolutional layer design that has recently been demonstrated by Miscuglio et al. [21]. However, the most efficient optical implementation of the nonlinearities required by neural networks remains an open question. Recently, Zuo et al. have demonstrated an optical nonlinearity by utilizing electromagnetically induced transparency in a gas of ultra-cold \({}^{87}\)Rb atoms [22]. Ryou et al. have avoided the overhead associated with ultra-cold atoms by realizing a nonlinearity through saturable absorption in a thermal vapor of Rb
atoms [23]. Other mechanisms for realizing optical nonlinearities for neuromorphic applications include phase-change materials (PCMs) [17] and the Kerr effect combined with two-photon absorption [24].
In this work, we introduce a nonlinearity in the form of the saturated absorption profile of a cesium atomic vapor into an optical convolution setup based on free-space SLMs. As we demonstrate below, this system classifies the handwritten digit dataset MNIST [25] with 83.96% accuracy. As an essential step in the experimental development of multi-layer optical neural networks, the optical nonlinearity is shown to be effectively provided by the cesium atomic vapor. With only one stable isotope and a vapor pressure of \(\sim 2\cdot 10^{-6}\) Torr at room temperature, the Cs D lines show excellent absorption properties, enabling pronounced nonlinearities without isotopic purification or power-consuming cell heating. This simplicity in the use of the Cs cell is particularly important as we have developed the system with an eye toward space applications. In fact, we expect a potential benefit of optical neural networks with nonlinearity for data processing on board satellites. The standard procedure for processing complex sensor data is the use of artificial neural networks, which are served on the ground by specialized digital hardware such as graphics cards, tensor processing units, etc. The availability of such options is limited for data processing in orbit due to the extreme requirements on energy consumption, thermal management and radiation hardness. However, transferring the data and processing it on the ground can also be challenging due to the immense amount of data. Therefore, high-performance computers operating under space conditions would be desirable, and optical computers have a high potential to fill this gap, enabling energy-efficient machine learning in orbit.
## 2 Methods
Here we demonstrate an optical convolutional neural network (OCNN) in which both linear operations and the nonlinearity are realized optically. We implement the OCNN with one input layer, one optical convolutional layer, one fully connected layer followed by one output layer. An optical nonlinearity is applied after the convolutional layer [Fig. 1]. The convolution of the input and the kernel of the OCNN is performed optically by pointwise multiplications in the Fourier plane, based on a 4f-imaging system and the convolution theorem. A CMOS camera acts as an analog-to-digital converter. A digital fully connected layer is implemented in the computer to connect the nonlinear activated feature maps and the output layer. We simulate the system based on the experimental setup. In the simulation, the kernel of the OCNN and the digital fully connected layer are pretrained and applied to the experiment. The performance of the simulation and the experimental results are compared.
We train and test the system with handwritten digits from the MNIST dataset. The input
Figure 1: Layout of the optical convolutional network architecture used in this work.
data is encoded as a two-dimensional intensity profile by an amplitude-only SLM, which is based on a digital micromirror device (DMD). This electro-optic conversion is implemented by controlling the individual micromirrors of the DMD. These mirrors can rapidly tilt between on and off positions and selectively reflect the incident light into the optical path of the OCNN. One micromirror and its corresponding memory unit constitute one pixel of the DMD. The large-scale display (\(1920\times 1080\) pixels) and the high update rate of up to \(10^{4}\) Hz for binary images of the micromirror array allow, in principle, complex multi-channel data processing at high speed.
The conventional convolution process consists of the pixel-wise multiplication and summation of a subsection of an image with a kernel. The kernel is scanned across the whole image, where this multiplication and summation repeat, resulting in an image convolved with the kernel. By contrast, the OCNN performs the convolution by employing the convolution theorem \(f(x)*h(x)=\mathcal{F}^{-1}\{\mathcal{F}[f(x)]\cdot\mathcal{F}[h(x)]\}\) and the Fourier-transform properties of lenses [26]. Specifically, the convolution (\(*\)) of the input \(f(x)\) and the kernel \(h(x)\) is the inverse Fourier transform (\(\mathcal{F}^{-1}\)) of the pointwise product (\(\cdot\)) of their Fourier transforms (\(\mathcal{F}[f(x)],\mathcal{F}[h(x)]\)). By constructing a 4f-system with two SLMs and two lenses, the Fourier transform of the input, the dot product in Fourier space and the inverse Fourier transform of the dot product can be performed optically and passively. Thus, the convolved images can be observed in the front focal plane of the second lens.
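The convolution theorem that the 4f-system implements optically can be verified numerically in a few lines; the random input and kernel below are placeholders, and the discrete Fourier transform yields the circular convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((256, 256))   # input image, as displayed on SLM1
h = rng.random((256, 256))   # kernel of the same size

# pointwise product in Fourier space, then inverse transform
conv_fft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

# direct circular convolution of a single output pixel as a check
i, j = 5, 7
direct = sum(f[a, b] * h[(i - a) % 256, (j - b) % 256]
             for a in range(256) for b in range(256))
print(np.isclose(conv_fft[i, j], direct))  # True
```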
In the experiment, we place the cesium vapor cell after the convolutional layer, so that the convolved pattern is nonlinearly absorbed, thereby introducing an optical nonlinearity into the system. The CMOS camera is positioned in the front focal plane of the second lens. The images are captured by the camera and saved on a computer. Figure 2 depicts the sketch of the experimental setup. The light source is a distributed feedback diode laser assembly with a Doppler-free spectroscopy setup. The wavelength is fine-tuned and actively stabilized to the cesium D1 line transition (\(6^{2}S_{1/2}\to 6^{2}P_{1/2}\)) at about 894 nm. The laser is further coupled into and emerges from an optical fiber. A half-wave plate and a polarizing beam splitter are used to adjust the light intensity. Thereafter, two lenses act as a telescope to enlarge the beam diameter
Figure 2: Sketch of the experimental setup. M: mirror, HWP: half-wave plate, PBS: polarizing beam splitter, L: lens, FM: flip mirror. The intensity of light is controlled by the combination of the half-wave plate and the polarizing beam splitter. A telescope is composed of a 50 mm and a 100 mm focal length lens, expanding the beam to 8 mm diameter.
to approximately 8 mm. This large collimated beam is sufficient to cover the pattern surface of SLM1 (DLP6500, Texas Instruments, pixel size \(7.6\ \mu\)m \(\times\) 7.6 \(\mu\)m). A computer connects and loads the patterns to SLM1, therefore, the incoming beams are specifically modulated to the image shapes of the binarized MNIST dataset and reflected to the optical path. For the used SLM model, a theoretical maximum pattern rate of 100 kHz can be reached with binary data. A lens (\(f\) = 250 mm, L1) is located at a distance of 250 mm from SLM1. The second identical SLM is then placed in its back focal plane, multiplying the corresponding Fourier-transformed input with the displayed Fourier-transformed kernel. The amplitude modulated dot products of the Fourier patterns are then inverse Fourier transformed by a second lens (\(f\) = 250 mm, L2), and subsequently captured by the CMOS camera (CS165MU- 1.6 MP Monochrome CMOS Camera, Thorlabs, pixel size \(3.45\ \mu\)m \(\times\) 3.45 \(\mu\)m) after being nonlinearly activated by the Cs vapor cell. Consequently, the images captured by the camera are the nonlinearly activated convolutions of the input images and the kernel. The computer collects all such processed images and converts them into arrays of gray scale values. Each array is connected to a trained fully connected layer, which outputs the prediction of the neural network via matrix multiplication. In addition, a flip mirror can be used to reflect the beam to a second CMOS camera for the purpose of imaging the Fourier pattern displayed on SLM2.
As discussed above, much attention has been drawn to optical nonlinearities. Atomic vapor cells are competitive options due to their non-vacuum, room-temperature experimental environment. Moreover, as the laser light passes through them passively, no additional energy is needed. The relation between the output intensity \(I\) and the input intensity \(I_{0}\) is [27]
\[I=I_{0}\exp\left(\frac{-\mathrm{OD}_{0}}{1+I_{0}/I_{s}}\right), \tag{1}\]
where \(I\), \(I_{0}\), \(I_{s}\) represent the output intensity, input intensity, and saturation intensity, respectively. The optical depth at \(I_{0}=0\) is given by \(\mathrm{OD}_{0}=N\sigma z\), where \(N\) is the number of cesium atoms per unit volume, \(\sigma\) denotes the corresponding cross-section, and \(z\) is the cell length. When the input intensity \(I_{0}\) reaches the saturation intensity \(I_{s}\), the effective optical depth is reduced by 50%. The input-output relation thus follows a nonlinear shape. In the experiment, the light intensities are converted to 8-bit arrays of integer values by the camera. To ensure that the corresponding simulation is consistent with the experiment, we experimentally determine the input-output relation. To this end, we display a square pattern on SLM1 representing the average number of bright pixels in the MNIST dataset. While varying the incident laser power, we measure the resulting pixel intensity detected by the CMOS camera with and without the Cs cell, respectively. The measured and fitted input-output relation of the nonlinearity is shown in Fig. 3.
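The fit of Eq. (1) reported in Fig. 3 (\(\mathrm{OD}_0=1.38\), \(I_s=62.31\) in pixel-brightness units) can be reproduced with a standard least-squares routine; the synthetic input-output pairs below stand in for the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturable_absorption(I0, OD0, Is):
    """Eq. (1): I = I0 * exp(-OD0 / (1 + I0 / Is))."""
    return I0 * np.exp(-OD0 / (1.0 + I0 / Is))

# stand-in for the measured average pixel brightness (input, output)
I_in = np.linspace(1, 255, 30)
I_out = saturable_absorption(I_in, 1.38, 62.31)

(OD0_fit, Is_fit), _ = curve_fit(saturable_absorption, I_in, I_out,
                                 p0=(1.0, 50.0))
print(OD0_fit, Is_fit)  # recovers ~1.38 and ~62.31
```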
As a programming language, we have employed Python (version 3.8.5) in combination with the PyTorch framework (version 1.7.1) [28] for training. This combination features an autograd function that nicely supports our custom nonlinear activation function during the backpropagation process. Contrary to a conventional CNN training process, different methods and additional procedures must be considered in the simulation in order to fit the optical setup. The OCNN model has a three-layer structure including the atomic nonlinearity. The original image size of MNIST is \(28\times 28\); however, for experimental convenience, the input size is enlarged to \(256\times 256\). Moreover, according to the convolution theorem, the kernel size must be the same as the input size, so the kernel size is also set to \(256\times 256\). The input and kernel are binarized for the purpose of effectively interacting with the cesium atoms. In the OCNN model, the convolution stems from the inverse Fourier transform of the multiplication of the Fourier-transformed input and the kernel. Thus, we have implemented a custom convolutional layer in the PyTorch machine-learning framework [28]. Because we initialize the learnable kernel values directly in complex Fourier space [29], no Fourier transform is required for the kernel during the training process. The real part is composed of random positive values. Since amplitude-only SLMs do not handle phase information, all imaginary values are set to zero. The first step is to perform a two-dimensional Fourier transform of the input images and multiply them with the kernel in Fourier space. Although the real-part values of the kernel are initialized to be positive, they may acquire negative values during training. As a result, we apply a rectified linear unit (ReLU) function to the kernel to replace negative values with zero. Furthermore, we ensure that the pixel positions of the Fourier pattern of the input light have a one-to-one match to the pixel positions of the kernel structure on SLM2. However, exact matching is experimentally difficult for SLM hardware with micron-scale pixels. Accordingly, the \(256\times 256\) kernel is max-pooled to \(32\times 32\) and then enlarged back to \(256\times 256\). Next, the element-wise multiplication is performed in Fourier space, and the subsequent inverse Fourier transform gives the feature map of the OCNN in real space, whose size remains \(256\times 256\). It is worth mentioning that, in order for the optical patterns to be activated in a nonlinear manner, the light intensity is adjusted so that the input intensity of the vapor cell falls within the nonlinear region around the saturation intensity. The feature maps are max-pooled to \(28\times 28\) and flattened to one-dimensional arrays; a fully connected layer of size \(784\times 10\) then further processes the feature maps and outputs a \(10\times 1\) array representing the predicted digits. The model is trained for 5 epochs, using gradient descent optimization with the Softmax classifier as loss function, Adam [30] as the optimizer, and a 0.001 learning rate.
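Putting the pieces together, a minimal sketch of the simulated OCNN forward pass described above could read as follows; the interleave-based upsampling, the magnitude-as-intensity step, and all variable names are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def ocnn_forward(x, kernel_fourier, fc_weight, OD0=1.38, Is=62.31):
    """FFT -> product with the non-negative Fourier-space kernel ->
    inverse FFT -> saturable-absorption nonlinearity of Eq. (1) ->
    max-pool to 28x28 -> fully connected readout. x: (B, 256, 256)."""
    k = F.relu(kernel_fourier)                       # keep kernel values >= 0
    k = F.max_pool2d(k[None], 8)[0]                  # 256 -> 32
    k = k.repeat_interleave(8, -1).repeat_interleave(8, -2)  # back to 256
    feat = torch.fft.ifft2(torch.fft.fft2(x) * k).abs()      # field magnitude
    feat = feat * torch.exp(-OD0 / (1 + feat / Is))          # atomic nonlinearity
    feat = F.adaptive_max_pool2d(feat[:, None], 28).flatten(1)  # (B, 784)
    return feat @ fc_weight                          # (784, 10) readout

x = torch.rand(4, 256, 256)
out = ocnn_forward(x, torch.rand(256, 256), torch.randn(784, 10))
print(out.shape)  # torch.Size([4, 10])
```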
## 3 Results
We have performed the training of our OCNN with the \(60,000\) training images from MNIST and have determined the corresponding accuracy on the \(10,000\) test images. For the MNIST test set, the simulation of the OCNN achieves \(92.6\%\) and \(93.2\%\) accuracy with the cesium absorption function and the ReLU function as the nonlinearity, respectively. The accuracy is relatively low for the MNIST dataset compared to other SLM-based optical network systems [11, 21], since the dataset is binarized in the OCNN and, for this first demonstration and better reproducibility of the training, there is only one convolutional layer with a single kernel. Without any nonlinearity in the entire network, the accuracy drops to \(90.8\%\). For comparison, a conventional feed-forward model is trained (5 epochs, 0.001 learning rate). This model, which consists of one fully
Figure 3: Measured input-output average pixel brightness and the corresponding fit curve. Fitting the experimental data to Eq. (1) gives \(OD_{0}=1.38,I_{\mathrm{S}}=62.31\) pixel brightness. The axes represent the average grayscale value of the picture without (input) and with (output) the vapor cell, respectively.
connected layer of size \(784\times 10\) gives an accuracy of \(80.5\%\).
The experimental classification accuracy for both cases, with and without the vapor cell, is tested directly using the individually simulated kernel and fully connected layer. Figure 4(a) shows examples of simulated and experimentally captured feature maps. Without the vapor cell, the OCNN setup achieves \(70.72\%\) accuracy on the test data. With the cesium vapor cell in the setup, the performance improves to \(71.84\%\). The confusion matrix indicates that the experimental results show particular errors in predicting the digits "1", "5", and "8" [Fig. 4(b)]. The comparably low accuracy is caused by experimental imperfections. In order to compensate for these imperfections, we train a mapping matrix using the full \(10,000\) test images processed by the optical system. This linear mapping is used to modify the fully connected layer, i.e., to retrain it on the experimental data. As shown in the confusion matrix, the trained mapping matrix significantly reduces mispredictions [Fig. 4(c)]. With the help of this training, the accuracy
| Non-linearity | Cs Vapor | ReLU | No nonlinearity |
| --- | --- | --- | --- |
| Simulation | 92.6% | 93.2% | 90.8% |
| OCNN (No mapping matrix) | 71.84% | 70.72% | 45.59% |
| OCNN (With mapping matrix) | 83.96% | 91.9% | 89.3% |
Table 1: Summary of OCNN prediction accuracy. Each kernel is individually trained for each case. No nonlinearity means that no additional nonlinearity is added beyond the inherent nonlinearity of the camera.
Figure 4: Visualization of feature maps and confusion matrices. (a) Examples of simulated (first row) and experimental (second row) feature maps. (b) Experimental confusion matrix of the MNIST classification. (c) Confusion matrix of the MNIST classification with the trained mapping matrix.
increases to 83.96% and 91.9% for the setups with and without the vapor cell, respectively. For comparison, the OCNN without any additional nonlinearity gives an accuracy of 89.3%, indicating the effectiveness of the nonlinearity introduced by the camera. In Table 1, we have summarized the results on the accuracy. After retraining the mapping matrix, the experimental classification performance is in good agreement with the simulation. We attribute the slightly lower performance of the optical system to the following reasons: 1) as the dynamic range and analog-to-digital conversion accuracy of the camera are limited, the distribution of pixel values does not accurately match the theoretical values; 2) the Fourier-transformed input images and the kernel image displayed on SLM2 do not overlap perfectly; 3) distortions and aberrations caused by optical imperfections of the optical components also affect the experimental results.
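A minimal sketch of how such a mapping matrix could be retrained follows; the file names, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' artifacts. The essential point is that only the final linear readout is re-fitted to the experimentally captured feature maps.

```python
import torch
import torch.nn as nn

# Hypothetical tensors: experimentally captured, pooled-and-flattened feature
# maps of shape (N, 784) and ground-truth digit labels of shape (N,) with
# dtype long. File names and shapes are assumptions for illustration only.
exp_features = torch.load("experimental_feature_maps.pt")
labels = torch.load("labels.pt")

# The "mapping matrix" is the final linear readout, retrained so the
# simulated classifier adapts to the imperfections of the optical hardware.
mapping = nn.Linear(784, 10)
optimizer = torch.optim.Adam(mapping.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(mapping(exp_features), labels)
    loss.backward()
    optimizer.step()
```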
## 4 Conclusion
In summary, we have demonstrated an optical convolutional neural network with atomic nonlinearity. The linear part of the system consists of a 4f-system made from SLMs and lenses, while the optical nonlinearity is realized by a cesium vapor cell. The prediction accuracy of the OCNN setup for recognizing the MNIST handwritten digit dataset reaches 83.96%, which is in reasonable agreement with the simulation (92.6%). The effectiveness of the atomic vapor cell suggests that it has great potential to provide optical nonlinear activation functions for neural networks with other topologies. Moreover, it is an attractive candidate for use in multilayer optical neural networks with several SLMs and vapor cells in series, as the light beam passively acquires nonlinear output characteristics without requiring a further source of energy. In addition, the operational complexity is relatively low, and for further integration, the atomic vapor cell can also be replaced by semiconductor saturable absorbers (SESAMs) [31].
For the specific task of classifying MNIST, the nonlinearity has only limited impact. However, our results show the great potential of OCNNs with nonlinearity for machine learning problems in general. The performance can theoretically reach \(10^{6}\times 10^{6}\times 10^{4}=10\times 10^{15}\) operations per second (OPS) with image and kernel sizes of \(10^{6}\) each and binary resolution allowing for a maximum pattern rate of \(10^{4}\) Hz. Then, bottlenecks will include the resolution and update rate of the hardware, the transmission rate of interfaces, etc. As a comparison, Google's latest release of the state-of-the-art TPU v4 chip provides up to \(\sim 275\cdot 10^{12}\) OPS. Regarding the power consumption of the OCNN, the laser and camera together require only about 2 W, while the two SLMs need on the order of 10 W. The computer used requires about 50 W. However, the atomic nonlinearity opens up the possibility of realizing multilayer ONNs with all-optical training [32], so the computer might be omitted in the future. Moreover, the SLMs mainly consume power for switching. Thus, the numerous quasi-static SLMs that provide connectivity in such a multilayer ONN are expected to have a negligible impact, with the power consumption being dominated by the input SLM. Hence, we expect a multilayer optical neural network to drastically increase the computational performance without increasing the energy consumption significantly, thereby rendering optical neural networks with atomic nonlinearity an attractive alternative to conventional hardware.
Funding. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project number 445183921. E.R. acknowledges funding through the Helmholtz Einstein International Berlin Research School in Data Science (HEIBRiDS).
We thank G. Gallego for the discussions about the optical convolutional neural network architecture.
Disclosures. The authors declare no conflicts of interest.
Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2305.15692 | Deep Neural Networks in Video Human Action Recognition: A Review | Currently, video behavior recognition is one of the most foundational tasks
of computer vision. The 2D neural networks of deep learning are built for
recognizing pixel-level information such as images with RGB, RGB-D, or optical
flow formats, with the current increasingly wide usage of surveillance video
and more tasks related to human action recognition. There are increasing tasks
requiring temporal information for frames dependency analysis. The researchers
have widely studied video-based recognition rather than
image-based(pixel-based) only to extract more informative elements from
geometry tasks. Our current related research addresses multiple novel proposed
research works and compares their advantages and disadvantages between the
derived deep learning frameworks rather than machine learning frameworks. The
comparison happened between existing frameworks and datasets, which are video
format data only. Due to the specific properties of human actions and the
increasingly wide usage of deep neural networks, we collected all research
works within the last three years between 2020 to 2022. In our article, the
performance of deep neural networks surpassed most of the techniques in the
feature learning and extraction tasks, especially video action recognition. | Zihan Wang, Yang Yang, Zhi Liu, Yifan Zheng | 2023-05-25T03:54:41Z | http://arxiv.org/abs/2305.15692v1 | # Deep Neural Networks in Video Human Action Recognition: A Review
###### Abstract
Currently, video behavior recognition is one of the most foundational tasks of computer vision. The 2D deep neural networks were built to recognize pixel-level information such as images in RGB, RGB-D, or optical flow formats. With the increasingly wide use of surveillance video and the growing number of tasks related to human action recognition, more and more tasks require temporal information for frame-dependency analysis. Researchers have therefore widely studied video-based recognition rather than image-based (pixel-based) recognition alone, in order to extract more informative elements from geometry tasks. Our survey addresses multiple recently proposed research works and compares their advantages and disadvantages, focusing on the derived deep learning frameworks rather than machine learning frameworks. The comparison is carried out over existing frameworks and datasets that consist of video-format data only. Owing to the specific properties of human actions and the increasingly wide usage of deep neural networks, we collected research works from the last three years, between 2020 and 2022. In our article, the performance of deep neural networks surpassed most of the techniques in feature learning and extraction tasks, especially in video action recognition.
Deep Learning, Video Human Action Recognition, Spatial-Temporal Analysis, Neural Network
## I Introduction
Video human behavior recognition is widely utilized in multiple research areas, including scene graph generation, visual question answering on multi-modality data, and attention-based feature generation. Building on previous research, deep learning methodology has achieved outstanding performance on identification and prediction tasks over video data. Earlier studies [1] worked on improving human action classification by generating probabilities through multiple streams or combinations of models. Our investigation is based on modality data including skeleton, RGB, RGB+D, optical flow, depth maps, etc. The application scenarios are comprehensive, including driver abnormal-behavior identification, scene-graph relationship analysis, and pedestrian re-identification. The frameworks we investigate involve individual neural network architectures as well as combinations of multiple neural networks, such as two-stream feature-map analysis and multi-stream analysis tasks. Some work [2] builds on modality data such as audio, text, video, and images, where data fusion is utilized for human action recognition. Multiple models have been proposed to improve the classification accuracy of computer vision tasks, especially in studies involving human-human and human-object interaction.
This survey is structured as follows: Section II investigates related work in existing surveys and the latest usage of deep learning in video-based recognition tasks; Section III introduces the datasets involved in current tasks and their descriptions; Section IV presents the popular deep learning-based methodologies for human behavior recognition; Section V covers most of the research application scenarios of action recognition; Section VI introduces the deep learning frameworks (neural networks), their structures, and a comparison between them; Section VII provides an overview of existing temporal blocks for solving time-sensitive problems in human action recognition; Section VIII illustrates 3D convolutional neural networks with multi-stream and two-stream structures; Section IX discusses the existing obstacles that remain to be solved and require further investigation; finally, we summarize all the related works mentioned in this article and discuss the future of deep learning-based methodology in most application scenarios. This article is the citable version of the conference paper "A survey of video human behavior recognition methodologies in the perspective of spatial-temporal".
### _Research Overview_
After analyzing research from previous years, including action identification on several popular databases, we built the word cloud in Figure 1, which reflects the relationships among the popular topics mentioned in current research. Existing methodologies have achieved excellent performance on behavior classification and prediction tasks, in some cases reaching 100% accuracy and surpassing most earlier methods. At the same time, several problems [3] are still waiting to be tackled.
Fig. 1: Word cloud - action recognition in deep learning.
For example, only a small amount of data can be analyzed at once due to the large size of video segments, or complete videos must be sliced batch by batch because the computational requirements vary with the type of task. Video recognition tasks also incur large CPU/GPU costs, while deep learning approaches additionally require time-dependency analysis. These problems still need to be reconsidered and improved upon. The word cloud above illustrates the significant topics mentioned in action detection works, which we describe in detail below. Among deep learning methodologies, the Convolutional Neural Network (CNN) is the most commonly used, and more methodologies are derived from it than from any other deep learning family, such as the Graph Convolutional Network (GCN). Skeleton-based and modality data account for most usages of the existing datasets, and both serve as separate branches of video-based recognition tasks.
The significant contributions of our work are summarized as follows: our work mainly focuses on video recognition tasks using deep neural networks, especially deep learning-based methodologies. Video-type data are used widely, in surveillance video, TV series, video streaming, and other real-life scenarios of video segments. The area of deep learning has consequently received great attention and awareness from many research works. We summarize previously published surveys and existing methodologies in our review to provide an overview of the current research situation.
## II Related Work and History
We investigated multiple previously completed research surveys (from 2013 to 2022) on human behavior recognition, covering multiple tasks such as human detection, activity analysis, and mathematical methodologies. Color information does not play a critical role in most video-related tasks: multiple data formats such as optical flow and gray-scale are widely used in video recognition, and the usage of any single one of these data types is far less common than modality data. The work [4] from 2013 summarized both the spatial and temporal features of recognition methods on multiple types of video images, such as silhouettes, feature representations, and RGB/RGB-D data, but it lacked descriptions of the recognition methods themselves. The survey [5] from 2015 summarized crowd (group) behavior recognition methodology with fixed-direction movement and discussed the granularity of video content, where each pedestrian is treated as one unit of granularity. That survey also analyzed automatic group-level analysis methods, called automatic recognition tasks, on stream data. The methodologies mentioned in the above survey considered both temporal and spatial features extracted from video pixels as well as time-sensitive dependency relationships, but deep learning methods are missing from that work. Temporal blocks for frame-dependency tasks are also considered in [6] for temporal and spatial behavior recognition during the frame aggregation steps.
The survey [7] from 2017 went through the vision tasks for human behavior recognition, describing human-environment interaction tasks that utilize deep neural networks with stacked spatial and temporal blocks for the fusion of motion detection. The comprehensive review [8] from 2019 summarized vision-based tasks and the usage of deep learning-based methodologies across multiple series of video datasets. The latest work [9] from 2022 presented the data/dataset modalities that are widely utilized in action recognition tasks and compared the accuracy of multiple measurement methodologies for multi-modality fusion.
The last work [10] also provided an overview of deep learning methods for human behavior recognition, comparing the existing frameworks and their structures with reported accuracy, but it lacked theoretical descriptions and a technical comparison of the methodologies. Our work summarizes the existing datasets, backbone frameworks, and deep learning-based works at both the pixel and time-series levels. We further describe the development directions of video analysis for action recognition, covering both human-human and human-object tasks.
### _Timeline of Related Research_
We also analyzed some of the most famous works, such as ST-GCN and 2s-AGCN, which represent the trends and the state of the art of previous video behavior recognition works. As shown in the timeline of Figure 2, from 2018 to 2022 this type of task has evolved considerably. Task-solving methodology has moved from pixel-based static analysis to real-time stream analysis, from machine learning methods to deep learning methods, and from individual human-based tasks to human-environment interaction tasks. With the popularity of vision transformers, attention has also become involved in action recognition tasks. Research works related to LSTM [11, 12] utilize Recurrent Neural Networks (RNNs) for multi-stream action recognition and manipulate gated RNNs to handle time-series-dependent tasks. Two-stream networks are also well known for solving human action recognition through attention-based LSTM with recognition-score fusion [13].
Meanwhile, most deep learning-based methods have been proposed during the last three years [14, 15, 16, 17, 18, 19, 20]; these works show that both supervised and unsupervised deep learning methodologies can solve recognition tasks for various data formats and labeling methods. The timeline shown in Figure 2 [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41] illustrates the evolution of methodology from 2018 to the most recent works. The timeline indicates that combinations of methodologies are now developed more often than individual models. Meanwhile, recent recognition tasks require more modality data in the data-fusion and feature-extraction steps, rather than a single particular data type.
There are also plenty of works since 2018 that utilize the principles of ST-GCN to accomplish recognition tasks with graph convolutional networks. In addition, we found an increasing number of works focusing on human motion recognition tasks, including gait, gesture, and skeleton behavior recognition [42, 43, 44]. In recent years, some works have also applied both unsupervised and supervised deep learning methods to behavior recognition tasks.
## III Datasets Overview
The data resources [45, 46, 47, 48, 49, 50, 51, 52, 53, 54] of action recognition datasets mostly come from institutions or websites hosting many video segments (YouTube, surveillance systems, universities, etc.). The scale of the datasets, reflected in the number of videos or tags, has also grown over time.
### _Expanding History of Datasets_
The datasets published earliest, in 2012, include Violent Flows-crowd violence [45], UCF50, and UCF101. The data sources for these three datasets come from YouTube. Violent Flows-crowd violence is a database of real crowd-violence video clips containing 246 videos with an average video duration of 3.6 s. The number of action categories in UCF50 and UCF101 is given by the number ending each dataset's name, with 50 and 101 classes respectively. The main difference between these two datasets is the number of labels: UCF50 extends UCF11, and UCF101 extends UCF50. However, the metadata of both are kept consistent, as each category is divided into 25 groups with more than four videos in each group.

Fig. 2: Research development trend of video human action identification.
Both action identification datasets also have some shortcomings, including few labels and recordings from non-real environments, since the videos are uploaded by ordinary users of video websites. The video frame rate in UCF is 25 FPS, the resolution is 320px\(\times\)240px with the video format .avi, and the average clip duration is 7.21 seconds. The naming convention is vX_gY_cZ.avi, where X represents the category, Y represents the group, and Z indicates the video number. Another scenario-specific dataset called UCF-Crime was proposed in 2018. It mainly consists of real surveillance videos aimed at abnormal behavior, including 13 anomalous actions (outside the threshold defined by normal constraints) and 1900 relatively long videos, of which 1610 form the training set and 290 the test set.
NTU RGB+D 60 [46] was released in 2016 and later followed by NTU RGB+D 120. The dataset contains 60 types of actions and a total of 56,880 samples, of which 40 types are daily actions, 9 are healthcare-related, and 11 are interactions between humans. The dataset uses three cameras recording from different angles to collect depth information, 3D skeleton information, RGB video, and infrared sequences. The resolution of the RGB video is 1920px \(\times\) 1080px, while the depth maps and infrared videos are both 512px \(\times\) 424px. The 3D skeleton data consists of the 3D coordinates of 25 joint points. The training and test sets are split by subject ID and camera type.
The Kinetics-400 dataset was proposed in 2017, followed by Kinetics-600 in 2018 and Kinetics-700 in 2019; each dataset name ends with the number of labels it contains. Each action category of Kinetics-400 has at least 400 video clips, and each clip averages around 10 seconds in length. All actions can be divided into 38 categories at different granularities. Kinetics-600 expands Kinetics-400 in both the number of labeled types and the number of videos in each category, and the same expansion holds between Kinetics-700 and Kinetics-600.
Moments in Time (MiT) [49] was proposed in 2018, including 339 labels and more than 1,000,000 videos. The authors of the dataset note that human working memory spans about 3 seconds on average, and that 3-second video segments contain enough information for analysis with deep neural networks. The distinguishing design goal of MiT is to make models learn the concepts behind actions with strong generalization ability. If a moment can also be heard in a video (e.g., clapping in the background), it is included based on the sound rather than the pixels only.
The datasets proposed in recent years also include Countix (CVPR 2020) and Home Action Genome (CVPR 2021). Countix [52] is a companion dataset of a new network model named RepNet, built to increase the semantic diversity of the previous repetition dataset (QUVA, a dataset of multiple sports activities) and to expand its scale. It is a real-world repetition dataset collected in the wild, covering a wide range of semantic settings, such as camera/object motion and changes in the speed of repetitive actions. Countix consists of high-frequency activities in Kinetics; it is thus a subset of Kinetics, but 90 times larger than the previously proposed QUVA dataset.
Home Action Genome (HOMAGE) [55] is a large-scale dataset proposed at CVPR 2021, providing multi-view and multi-modality data to facilitate learning from both. It contains 1752 synchronized sequences and 5700 videos divided into 75 activities and 453 atomic actions. Each sequence has one high-level activity category, and the atomic actions within a series are usually short in duration.
We do not include all the details of the datasets here due to the large amount of information and their many applications. The remaining details, such as volumes, modalities, numbers of actions, and pixel sizes of each video/image segment, can be checked in the dataset references and their application scenarios.
## IV Deep Learning-Based Methodologies
This section illustrates procedures on specific datasets, such as contrastive learning for both supervised and unsupervised scenarios. A critical drawback of supervised learning is that labeling data requires plenty of human effort and the involvement of many disciplines, which makes recognition difficult and time-consuming, especially when two expressions or gestures have high similarity between frames; weakly supervised learning [100, 101] addresses this. As mentioned in the above research, weakly supervised learning includes three critical properties: incomplete, inexact, and inaccurate supervision, all of which are handled before the classification task is finished.
### _Contrastive Learning_
Contrastive learning can be utilized in both supervised and unsupervised methodologies. When used as unsupervised learning for human action recognition, it typically requires large batches and more training steps. The framework was initially proposed for visual representation learning in 2020 [102]. Contrastive domain adaptation has been proposed to handle the additional temporal dimension in video analysis [103]. Contrastive learning can also be utilized for anomaly detection in surveillance video [104]. The idea is to pull samples of the same class close to each other and to enlarge the margin between different classes, i.e., intra-class compactness and inter-class separation. The framework proposed in [104] solves the anomaly detection task by capturing discriminative semantic features. It addresses the problem of new samples appearing in the video: a newly inputted sample should not be recognized as anomalous as long as the behavior lies within the threshold of the defined normal constraints. Discriminative semantic features are also extracted for fixed patterns and context-aware environment recognition using contrastive learning, when the behavior analysis depends on human interaction with the environment [104, 105].
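As a concrete illustration of this principle, the sketch below shows a common instance-level contrastive objective (an NT-Xent/InfoNCE-style loss) in PyTorch. It is a minimal sketch of the general idea, not the exact loss used in [102, 103, 104, 105]; the function name and temperature value are our own.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent / InfoNCE loss for two augmented views z1, z2 of shape (N, d).

    Positives are the two views of the same clip; all other samples in the
    batch act as negatives, pulling same-instance pairs together and pushing
    different ones apart.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # Index of the positive for each row: i <-> i + n (mod 2N).
    targets = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, targets)
```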
### _Measurement/Evaluation Metrics_
Multiple types of metrics are utilized to measure video-based behavior recognition, including Average Precision (AP) and mean Average Precision (mAP), which are widely used in behavior recognition tasks. The choice depends on which protocol suits the task; for example, cross-view, cross-subject, and cross-setup evaluations are widely used with the NTU RGB+D datasets. Other commonly used measurements include top-1 accuracy, top-5 accuracy, and per-category accuracy. The evaluation also depends on the selection of features and the classifiers defined over the generated features in supervised methodologies; precision, recall, and F1-score are frequently used by other works [106]. The loss can also be computed through a designed loss function or through mutual information; for example, [107] defines a mutual-information objective over the stochastic variables \(X\) and \(Y\), as shown in Eq. (1):
\[R(Z)=I(Z;Y)-\lambda_{1}I(Z;X)-\lambda_{2}I(Z;X|Y), \tag{1}\]
Loss functions can be designed for different scenarios, such as reconstruction losses, cross-entropy, mean squared error, or class-wise domain losses between categories and classifications. They vary with the evaluation methodology, such as the sensitivity loss in [108] and the L1/L2 losses in [109], which represent the distance between facial expressions and detected emotions. Many categories of loss functions are defined according to the changing situations and behaviors.
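Since top-1 and top-5 accuracy are the metrics most often reported on large-scale action recognition benchmarks, a small sketch of how they can be computed in PyTorch follows; the function name is our own.

```python
import torch

def topk_accuracy(logits, labels, ks=(1, 5)):
    """Top-k accuracy as commonly reported for action recognition benchmarks."""
    # Indices of the k highest-scoring classes per sample: (N, max_k).
    top = logits.topk(max(ks), dim=1).indices
    hits = top.eq(labels.unsqueeze(1))          # (N, max_k) boolean matches
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}
```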
### _3D and 2D Skeleton (Multi-modality) Behavior Recognition_
2D and 3D skeleton-based pose estimation has been utilized in multiple scenarios. The original purpose of human joint pose estimation is to extract skeleton data as one of the modality types among the multiple input-image types mentioned above. Pose estimation tasks [110] remain popular in current research. Deep learning-based methodologies such as graph interactive networks, graph attention networks, and other graph-based neural networks are the main problem-solving methods. The paper proposing the NTU RGB+D and NTU RGB+D 120 datasets is the first work that defined cross-view and cross-subject measurement according to action category and camera view. Similarly, Human3.6M and 3DPW, with the SMPL model, provide skeletons in 3D format. Both 2D and 3D skeleton works support human behavior recognition in 3D space. As one of the modalities in computer vision tasks, deep learning methodologies focus on feature extraction to accomplish classification, regression, or prediction. The workflow of skeleton behavior recognition is essentially an end-to-end learning or machine learning pipeline for further behavior prediction or downstream tasks. Some works produce embeddings for visualization or use contrastive learning to compare multiple behaviors.
### _Domain Adaptation_
Utilizing domain adaptation between source and target datasets provides a convenient way to recognize classes in human action recognition, whether adapting from source to target or from target to source. One work proposes novel multi-source unsupervised domain adaptation [111] for semantic recognition, rather than single-source unsupervised domain adaptation (UDA). That work utilizes adversarial domain adaptation by generating synthetic images and data for pattern recognition, with a GAN discriminator separating real data from synthesized data.
The research works [112, 113] from Tencent utilized domain adaptation combined with GANs to keep intra-video content consistent and to prevent video content distortion. According to the survey on unsupervised domain adaptation [114], generative adversarial DA improves recognition accuracy through the competition between two networks. In [111], intrinsic discriminative features are extracted from the source domain and used for behavior pattern recognition in the target domain; this also works as a feature-encoding technique in computer vision tasks [111]. Transfer learning such as domain adaptation, which learns patterns transferable from source to target domain, has been verified to give outstanding performance on action recognition tasks.
Another research work described how to apply labeled source-domain information to unlabeled target data for recognizable feature representation and recognition. The proposed architecture [115] utilizes a two-headed network and a multi-branch structure to minimize the overall recognition loss on the target domain. The work [115] applies video domain adaptation to large-scale video datasets, where the domain adaptation neural network is a DANN rather than the DCNN of [116]. Two critical concepts appear in the research works [117, 118]: one is the domain shift operation, which transfers recognized information from one domain to another; the other is adversarial adaptation, which appears in many combinations of GANs (Generative Adversarial Networks) and unsupervised domain adaptation.
In the above research works, the source and target feature vectors are used to construct the Pixel Correlation Matrix (PCM) and the Pixel Correlation Discrepancy (PCD) to quantify the relationship between feature dimensions [118]. One of the critical steps is to compute the correlation matrices for the source and target domains separately in order to determine the discrepancy between pixel correlations. The method in [118] then feeds both feature vectors, as PCMs, into Gradient Reversal Layers (GRL) and generates the PCD matrix, with the goal of applying domain adaptation from source to target and from target to source. The reported accuracy is 83.8% for the operation based on PCD and 56.1% for the operation based on PCM.
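The gradient reversal layer used in such adversarial adaptation pipelines is a standard component. Below is a minimal DANN-style sketch in PyTorch; it illustrates the GRL mechanism only and is not the exact PCM/PCD implementation of [118].

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient Reversal Layer (GRL) for adversarial domain adaptation:
    identity in the forward pass, gradient multiplied by -lambda in the
    backward pass, so the feature extractor is pushed toward
    domain-invariant representations."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    # Insert between the feature extractor and the domain classifier.
    return GradReverse.apply(x, lam)
```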
## V Application Scenarios And Methodologies
Among the applications of video-based behavior recognition, research has focused on elders suffering from dementia, since their behaviors are frequently disordered and repeated, making them suitable for investigating odd-action prediction and prevention [119]. Convolutional neural networks have also achieved excellent feature-extraction performance in several such scenarios. Likewise, the popular YOLO series is used to localize human actions accurately through bounding boxes and has achieved high recognition rates in several scenarios.
### _Scene Graphs_
Initially, scene graphs captured the relationships of humans and objects according to locations, events, time series, etc. [120]. A scene graph places objects in a stereoscopic environment, with relations such as "in front of", "behind", and "on the side of" encoding the geometric information of the objects present. Events refer to interactions between humans or activities, like picking up a cup, holding objects, and carrying items, as shown in Figure 3. The relationships between humans and objects form a Directed Acyclic Graph (DAG), representing the relationship between humans and the environment.
Time series are used for moving objects rather than static events, especially in video content analysis, when human-human and human-object relationships are described with scene graphs. However, the limitation of the scene graph is obvious: it does not consider synchronized human behaviors, for example, holding a cup while listening to a smartphone. Such synchronized behavior patterns are more general than pairwise human-object relationships and can be represented as two synchronized graphs with common vertices, which are then used for behavior recognition.
### _Extracted Behavior Recognition Descriptor_
A behavior recognition descriptor refers to the features learned for identification when the video sequences include both discrete and continuous human-part descriptors. The convolutional neural network is the primary framework for feature extraction within deep learning methodology. The works [121] manipulate GRUs to generate human-action captions based on video frames, which enables recognition over time series. Figure 4 shows behaviors from multiple datasets grouped together for the behavior classification task.
### _Multi-modality Behavior Recognition_
Multiple data modalities provide the context for video human action recognition, and a fused score yields the final classification result of video-based motion recognition. As mentioned in most research works, data fusion includes three methods: early fusion, late fusion, and hybrid fusion. The data modality determines the data processing methods and the types of features extracted, while hybrid fusion architectures give the final classification probability. Modality data include various input formats, such as optical flow, infrared, RGB, and RGB+D data. At the information level, the audio, text, captions, and subtitles of a video are also valuable sources for motion recognition tasks. By fusing the multiple data types into one score, both dynamic and static behaviors can be recognized, which improves the recognition results.
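A minimal sketch of the late-fusion variant follows: per-stream class scores are converted to probabilities and combined with assumed, validation-tuned weights. It illustrates the principle only; the exact streams and weighting differ per published system.

```python
import torch

def late_fusion(stream_logits, weights=None):
    """Late fusion: average (optionally weighted) per-stream class scores.

    stream_logits: list of (N, C) tensors, e.g. from RGB, optical-flow and
    skeleton streams; weights are illustrative and would be tuned on
    validation data.
    """
    probs = [torch.softmax(s, dim=1) for s in stream_logits]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(dim=1)                 # final predicted action class
```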
Previous work summarized skeleton behavior recognition [122], which can be traced back to early 2016. For skeleton-type data, the key features are bones and joints, which support human pose estimation. Several types of skeleton-based poses can be analyzed when the scenario involves temporal sequences [123]. Rotation-based skeleton pose analysis uses Lie algebra over time series together with spatial information to represent the angles and turnings of human body parts [123]. Both ST-AGCN (spatial-temporal adaptive graph convolutional network) and T-AGCN (temporal adaptive graph convolutional network) recognize skeleton-based human action sequences [120] using the fundamental theory of ST-GCN. A skeleton extraction algorithm is used for 3D skeleton-based behavior extraction in the preprocessing steps.
### _Action Recognition Tasks_
_Feature Extraction._ Feature engineering plays a critical role in the first stage of human behavior recognition tasks. Due to the specific properties of action data, most video data are composed of multiple formats rather than a single format; for example, skeleton data and text captions can be used together for modality behavior analysis. The feature extraction step can happen in the first or second stage of analysis rather than in the pre-processing steps, especially when two frames of behavior have high similarity in their feature representations (i.e., representation learning). Multiple works have investigated methodologies for facial expressions [124], since the emotional representations of humans have high similarity between the previous and current frames. Feature classification [125] can also happen among pre-defined classes; some researchers use encoder-decoder architectures for discriminative feature extraction, aiming to enlarge inter-class distances and reduce intra-class distances. Current methods also use the Euclidean distance to measure the similarity of two frames for classification comparison, as mentioned in multiple research works. The classification accuracy reaches a better level through the combination of multiple measurements.
Fig. 3: Human-scene interaction scenarios.
_Unsupervised Deep Learning._ Currently proposed approaches such as DBN (Deep Belief Network) and SVM (Support Vector Machine) research work on limited data sizes, even though both have reached strong identification results. The root cause is that most datasets require heavy labeling work before the task can start. One way to handle this challenge is to reduce the dataset size by slicing it into batches. Another is to convert RGB data into optical flow or gray scale, which removes the color information from these works; yet color information plays a vital role in computer vision tasks, especially unsupervised deep learning. The unsupervised methodology avoids the large labeling effort by enlarging the distance between different categories and reducing the margins within the same class.
_Attention-based Methodology._ Attention-based methodologies are also popular tools for behavior recognition; they are categorized into multi-head and single-head mechanisms built on the query-key-value architecture. Proposed research works have commonly utilized the soft-attention mechanism for recognition tasks. Hard attention uses a stochastic sampling model rather than the differentiable weights of soft attention, and soft attention is more prevalent in human identification tasks. Considerable research currently focuses on combining attention-based mechanisms with skeleton-based behavior recognition tasks. Skeleton data is view-invariant and motion-invariant, which makes it suitable for human action recognition [126, 127, 128]. The combination of attention-based deep neural networks and skeletons has become a popular trend in vision-based studies.
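For reference, the soft (scaled dot-product) attention at the core of these query-key-value mechanisms can be sketched in a few lines of PyTorch; this is the generic form, not any particular paper's variant.

```python
import torch

def soft_attention(q, k, v):
    """Soft (scaled dot-product) attention: a weighted sum over values,
    where the weights are differentiable -- unlike hard attention, which
    samples a single location stochastically."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (..., T_q, T_k)
    weights = torch.softmax(scores, dim=-1)       # attention distribution
    return weights @ v                            # (..., T_q, d_v)
```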
_Scene Graph (Indoor Activity) Recognition._ 3D stereo pattern recognition has been applied to several tasks, including object detection, human pose recognition, and video action analysis. In particular, humans and objects are modeled as graph vertices while the dynamic relationships between them are modeled as edges. This type of work uses encoder-decoder or encoder-decoder-encoder mechanisms for discriminative feature extraction [129]. With the popularity of multi-head attention, some frameworks now use transformers as their backbone to analyze video segments by indexing frames. Scene graph generation is one of the most common graph-relationship generation methods and works well between objects and humans. A scene graph captures the relationships between multiple objects in 3D space, where the video background can be simulated as a 3D environment. One model investigated here is combined with a transformer to generate the scene graph from input video data, exploring the relationships among humans, the environment (places), and objects. A recently proposed work on scene graphs describes the relationships between objects in the 3D environment [130]: it builds 3DSSG on top of 3RScan, aiming to extract the 3D geometry and depth information of the 3D space. Compared with a 2D scene graph, the 3D scene graph is more densely connected and informative. As mentioned in the scene-graph generation work [130], it allows video analysis of moving objects and human behaviors in a stereoscopic environment.
_Online Action Detection (OAD)._ Online action detection is another popular topic, recognizing human behaviors from the spatiotemporal perspective in the context of the environment. Only a few works address online action detection, as opposed to online learning, which is a widely discussed methodology in machine learning. Recent works widely combine transformers with online recognition or timely prediction tasks using RNNs, e.g., [131, 132, 133]. Online detection predicts behaviors over time series, especially in video streams with upcoming frames, through a temporal LSTM encoder-decoder structure. The works [134, 135] introduced the concept with real-time behavior prediction from single-frame detection on content such as TV series. Online action detection focuses on prediction tasks that build on backbone feature-extraction frameworks.
## VI Deep Neural Network and Comparison
Many deep learning backbones and deep neural networks [136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154] have been applied to video human action recognition; this section compares representative frameworks and their structures.
The research work [155] discussed how unsupervised learning can outperform supervised methods by applying contrastive learning to visual representation data. Most vision data can be converted into embeddings and compared through contrastive learning. Meanwhile, the research works [156, 157, 158, 159] discussed how deep learning performs well in skeleton-based behavior recognition through common frameworks built on GCNs and CNNs. The double-stream combination of score results has demonstrated its advantages in multiple behavior recognition tasks, with dual joint streams in [159] and combinations of multiple models for deep feature extraction in [156]. Multi-modality human behavior recognition has also proven its value in increasing identification accuracy: the works [157, 158] investigated the wide usage of modality data and its improved recognition accuracy across multiple behavior categories. In recent research, because multi-modality datasets are large-scale and vary in data type, deep models may require more parameters and more neural network layers, which has drawn increased attention to large-scale model training. The fusion methods also vary, relying on feature extraction and feature representation. The work [157] discussed how deep encoder-decoders and multiple deep neural networks can work well together on multi-modality data, performing better than single-modality data.
Meanwhile, the time-dependency relationship in continuous behavior analysis also plays a critical role in current research, as human motion is composed of a series of atomic actions within a single behavior. For such time-related tasks, temporal attention models accurately address time-based feature extraction [160, 161]. The next step is to optimize the correlation map and loss functions so that model training on video data converges in fewer iterations. The research [162] also notes that a behavior such as "touch head" needs to be represented through both physical and non-physical connection relationships; it analyzes the connections between local and non-local blocks, where behaviors are considered across multiple individual blocks. Most deep learning work on action representation and abnormality recognition before 2020 is summarized in [163], which outlines most machine learning methodologies, some deep learning-based methods, and their application scenarios with video data. However, that work did not note how multi-modality data and multiple branches/streams provide higher accuracy than a single modality for HAR (human action recognition).
### _Deep Belief Network and Boltzmann Machine (DBM)_
With the wide usage of feature engineering and representation learning, multi-modality data can be mapped into low-dimensional spaces, as mentioned in [164], to extract representative features from the fused data. This is also considered a way to keep accuracy high while tackling the obstacle of limited dataset size. After exploring the existing solutions based on DBNs and RBMs, some current works apply these networks to video tasks. The Deep Belief Network (DBN) architecture comprises stacked Restricted Boltzmann Machines (RBMs) for analyzing three types of data: single-modality, multi-modality, and cross-modality, as mentioned in most research works. Figure 5 shows the structure of the DBN and the stacked RBMs as used in action recognition with deep neural networks. The top of Figure 5 shows that the DBN is a structured deep neural network composed of stacked RBMs, where the input neurons of each layer are the output neurons of the previous layer. The final RBM layer is followed by a multilayer perceptron (MLP) that outputs the behavior detection probability over the classification results. The bottom part of Figure 5 shows the neuron structure of each RBM layer, where the network output is calculated by passing the input through a hidden layer with weights and biases defined on each hidden unit and hidden layer between the input and output layers.
The research work [165] compared the performance of the Deep Boltzmann Machine and the Deep Belief Network, noting that the Deep Boltzmann Machine cannot work on its own when DBN models are applied to large-scale datasets. The paper clearly illustrates the concepts of Deep Belief Networks for activity recognition, defining stacked RBMs to compose an unsupervised DBN structure. The input images are also pre-processed with a Gaussian transformation before being fed into the networks. The DBN structure with stacked RBMs for human action recognition can be described by the following energy function, built on an \(m\times n\) weight matrix \(w_{ij}\) between visible and hidden units
\[W=\begin{pmatrix}w_{11}&\cdots&w_{1n}\\ \vdots&\ddots&\vdots\\ w_{m1}&\cdots&w_{mn}\end{pmatrix} \tag{2}\]
where \(m\) is the number of visible units, \(n\) is the number of hidden units, \(i\) and \(j\) are the indices of visible and hidden units, and \(b_{v_{i}}\) and \(b_{h_{j}}\) represent the offsets of visible and hidden neurons respectively. The energy function is expressed as Eq. (3),
\[E(v,h;\Theta)=-\sum_{i=1}^{m}b_{v_{i}}v_{i}-\sum_{j=1}^{n}b_{h_{j}}h_{j}-\sum_{i=1}^{m}\sum_{j=1}^{n}v_{i}h_{j}w_{ij} \tag{3}\]
and the values of the hidden and visible neurons are calculated according to Eq. (4) and Eq. (5),
\[P(h_{j}=1|v,\theta)=\sigma(b_{h_{j}}+\sum_{i=1}^{m}v_{i}w_{ij}) \tag{4}\]
\[P(v_{i}=1|h,\theta)=\sigma(b_{v_{i}}+\sum_{j=1}^{n}w_{ij}h_{j}) \tag{5}\]
where \(w_{ij}\) represents the weight of the connection between hidden neuron \(j\) and visible neuron \(i\). After the stack of RBMs, the DBN ends with a multilayer perceptron used to output the behavior classification result. The MLP works as the foundation of multi-category classification, mapping numerous inputs to a single output; the output of the MLP is compared with a fixed threshold to determine whether the input belongs to a specific category, as represented in Eq. (6) and Eq. (7).
\[z=\sum_{k=1}^{m}w_{k}x_{k}+b \tag{6}\]
\[\sigma(z)=\frac{1}{1+e^{(-z)}} \tag{7}\]
As mentioned above, the typical DBN structure stacks multiple Restricted Boltzmann Machines followed by MLP layers; in particular, the proposed Convolutional Deep Belief Network accomplishes human action recognition tasks this way. The research work shows that a DBN can achieve strong classification accuracy compared with an SVM. However, the limitation of this type of DBN is that it only recognizes single-modality activity rather than multi-modality activity, and DBNs have few variants targeting various types of human actions, even though their accuracy is higher than other traditional methods.
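To make the equations concrete, a minimal RBM sketch in PyTorch, following the conditionals of Eqs. (4)-(5) and trained with one step of contrastive divergence (the CD algorithm discussed below), could look as follows; the class name and learning rate are illustrative assumptions.

```python
import torch

class RBM:
    """Minimal Bernoulli RBM following Eqs. (3)-(5); a sketch, not tuned code."""
    def __init__(self, m_visible, n_hidden):
        self.W = torch.randn(m_visible, n_hidden) * 0.01   # weight matrix w_ij
        self.b_v = torch.zeros(m_visible)                  # visible offsets
        self.b_h = torch.zeros(n_hidden)                   # hidden offsets

    def p_h(self, v):                      # Eq. (4): P(h_j = 1 | v)
        return torch.sigmoid(self.b_h + v @ self.W)

    def p_v(self, h):                      # Eq. (5): P(v_i = 1 | h)
        return torch.sigmoid(self.b_v + h @ self.W.t())

    def cd1(self, v0, lr=0.01):
        """One step of contrastive divergence (CD-1), the usual RBM trainer."""
        ph0 = self.p_h(v0)
        h0 = torch.bernoulli(ph0)          # Gibbs sample of the hidden layer
        v1 = torch.bernoulli(self.p_v(h0)) # reconstruction of the visible layer
        ph1 = self.p_h(v1)
        # Update rule: difference of data and reconstruction statistics.
        self.W += lr * (v0.t() @ ph0 - v1.t() @ ph1) / v0.shape[0]
        self.b_v += lr * (v0 - v1).mean(0)
        self.b_h += lr * (ph0 - ph1).mean(0)
```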
The analysis of multi-modal data or composite activities requires substantial data fusion, and the data formats vary. Due to the limitation of DBNs on small datasets, the survey on multi-modal learning describes applying the DBN to multiple types of input data, including images and text, where word tokenizers must be considered during processing [164, 166]. With stacked RBMs, a DBN can process not only video data but also audio data by converting the audio information into tokens, with the modality data fused in late, early, or hybrid fashion. The hybrid model [165] is also utilized to extract information from multi-dimensional video data, reaching the highest prediction accuracy among the compared schemes and deep neural networks.
Fig. 5: Deep belief network for action recognition.
DBNs have been used as unsupervised classification methods for recognizing the temporal and spatial information of videos, tackling the issue that most videos are unlabeled or only partially labeled. Active learning yields lower action-prediction accuracy than other deep neural networks when applied to unlabeled or partially labeled data; DBM and DBN address this problem within the deep neural network methodology [167]. The DBN architecture has shown high accuracy for action recognition when Gibbs sampling is used as an image pre-processing step. The application scenarios of DBNs also include sports recognition: the proposed time-space deep belief network [168] recognizes human sports behaviors with higher accuracy than a CNN. Furthermore, contrastive divergence (CD) is the algorithm used to optimize the weight matrices of a DBN.
### _Graph-Like Methodologies_
Graph theory can also be applied to recognition tasks, since some video data can be modeled as graphs; relevant methods include graph convolutional networks, scene graphs, and graph neural networks. We analyze some standard graph-like designs for deep learning classification and regression tasks.
#### VI-B1 Graph Convolutional Network
The research works [169, 170] build on two graphs: one represents the body as a bone-relationship matrix, where the relationships are expressed as a joint matrix. A temporal and spatial GCN is then proposed for recognizing skeleton-based behaviors. RGB and RGB-D data are utilized in this research to draw the skeleton-based human pose, and spatial and temporal information is recognized by applying graph-like networks; GCN and its variants have been shown to achieve excellent identification performance [171], especially in skeleton-based analysis. Graph-based convolutional networks are also used for temporal and spatial human skeleton analysis, with geometric information extracted from the original frames of the video. The extrinsic and intrinsic connections of physical body parts serve as the relationship analysis of human behaviors, and dense blocks are connected along temporal sequences in these works. Similar to other graph-based methodologies, the input features are encoded with a parameter matrix \(W\), the weight parameters of each neural layer, and the outputs are the results of these parameterized operations.
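A minimal sketch of one such graph-convolution layer over a skeleton graph follows (PyTorch). The symmetric normalization shown is a common choice, though individual papers differ; the class name is our own.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer, H' = act(A_hat @ H @ W), where A_hat is
    the normalized skeleton adjacency (joints as nodes, bones as edges)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        # adj: (J, J) adjacency with self-loops; normalize D^{-1/2} A D^{-1/2}.
        deg = adj.sum(-1)
        a_hat = adj * (deg.rsqrt().unsqueeze(-1) * deg.rsqrt().unsqueeze(-2))
        return torch.relu(a_hat @ self.W(h))   # h: (..., J, in_dim)
```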
#### VI-B2 Graph Neural Network
MST-GNN is a graph neural network framework proposed to analyze the human skeleton graph, combining the temporal and spatial elements of body information on the NTU RGB+D dataset [172]. The scheme combines computational graph units for temporal graph analysis, and data fusion is used for both spatial and temporal feature extraction through the MST-GCU module, which serves as part of the feature extractor.
#### Vi-B3 Graph Attention Networks
The Graph Attention Network (GAT) uses aggregation and concatenation in its message-passing algorithm. Unlike purely graph-based or purely attention-based networks, a graph attention network can perform both edge and node prediction over graph structures. For video-based human behavior recognition, GATs can be applied to skeleton-based and spatio-temporal event recognition, covering both behavior-level and pixel-level tasks. Both temporal and spatial elements can be extracted as features, with each layer acting as a feature generator that learns patterns from its inputs. Among graph neural networks such as graph interaction networks and graph convolutional networks, the GAT can be seen as a precursor of graph-based transformers and, in vision, of the visual transformer; a minimal attention computation is sketched below.
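The following is a minimal single-head sketch of the GAT attention step over a dense adjacency matrix; the shapes and the LeakyReLU slope follow common GAT conventions and are assumptions, not details from the cited works.

```python
import numpy as np

def gat_layer(A, X, W, a):
    """A: (N, N) adjacency; X: (N, F_in); W: (F_in, F_out); a: (2*F_out,)."""
    H = X @ W                                    # project node features
    N = H.shape[0]
    e = np.full((N, N), -1e9)                    # effectively -inf for non-neighbors
    for i in range(N):
        for j in range(N):
            if A[i, j] > 0 or i == j:            # neighbors plus self-loop
                z = np.concatenate([H[i], H[j]]) @ a
                e[i, j] = z if z > 0 else 0.2 * z    # LeakyReLU(0.2)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # softmax over neighbors
    return alpha @ H                             # attention-weighted aggregation
```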
## VII Temporal Models on Behaviour Recognition Tasks
### _RNNs: Gated Recurrent Units(GRUs) and LSTM_
GRUs are used for predicting time-sequential movements, with multiple variants including GRU-SVM and GRU-D. Gaussian and Kalman filters are the two most common filters used to smooth frames before videos are fed into networks, so the input videos fall into two categories: original videos and videos processed by a Gaussian filter. Few works use a Gated Recurrent Unit on its own for recognizing human actions; GRUs are rarely used individually and are usually combined with other types of models. The work [173] uses an encoder-decoder structure for temporal feature representation and proposes a temporal convolutional network specifically to address the long-range time-dependency problem rather than relying on image features alone.
Some works combine GRUs with CNN methodologies, demonstrating the efficiency of hybrid modeling. Existing methods include a proposed LSTM-and-GRU framework that reaches recognition accuracy above 90%. From this perspective, recognition tasks can exploit the GRU structure with its update gate and reset gate [174]. Furthermore, gated recurrent units can be applied directly to sensor data, which is usually unstructured or semi-structured depending on the application scenario, such as health, environment, or climate data that is strongly affected by external factors.

Fig. 6: 3D-ConvNet schemes.
In general, RNNs suit a wide range of time-sensitive tasks, such as frame-level behavior prediction over video or behavior sequences, or abnormal-behavior recognition conditioned on previously detected normal behavior. From this perspective, RNNs work well on context-based tasks. LSTMs and GRUs perform well not only on traditional image recognition tasks but also within attention-based models, where they serve as sub-modules. GRUs are widely used in signal processing, WiFi communications, and wearable-sensor applications; the cell computation is sketched below.
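The sketch below makes the GRU update gate \(z_t\) and reset gate \(r_t\) explicit; weight names and shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """x_t: (D_in,) input at step t; h_prev: (D_h,) previous hidden state."""
    z = sigmoid(x_t @ Wz + h_prev @ Uz)              # update gate: how much to renew
    r = sigmoid(x_t @ Wr + h_prev @ Ur)              # reset gate: how much past to keep
    h_tilde = np.tanh(x_t @ Wh + (r * h_prev) @ Uh)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde          # interpolated new hidden state
```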
### _Vision Transformers and Adaptive Modules_
As is widely known, the flexibility of the attention mechanism has made transformers popular in many NLP tasks for information extraction. The Vision Transformer (ViT) is the main transformer module designed for visual information. Given the specific information required for video understanding, transformer structures have proven effective across a broad range of behavior recognition settings, such as skeleton-based tasks [175] and multi-modality tasks. The work [176] builds two blocks, local and global temporal, for time-dependency feature extraction: an adaptive kernel learns features from video clips frame by frame, followed by activation functions that determine the final output scores, with attention weights computed through the local and global attention modules. In [176], the choice of backbone determines the recognition accuracy obtained from frame-dependency relationships.
The vision transformer has proven efficient for multiple types of video-based tasks, especially spatio-temporal processing, and ViT can also be used for deep representation learning in cross-modal tasks [177]. In transformers, the attention mechanism replaces the convolution operation to accomplish feature extraction in behavior recognition. Several works following [176] propose an adaptive block for video-based behavior recognition, a temporal feature-extraction block. The adaptive block typically works with an aggregation operator that pools the temporal features of multiple frames, as shown in Figure 7. All of the work illustrated here captures consistent behaviors across a series of video frames, extracts features from single frames, and then compares them across changes of modality or temporal location, so the aggregation function can operate over multiple dimensions of the input, such as the pixel level (spatial), frame level (temporal), or object (detection) level; a minimal sketch follows.
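As a minimal sketch of this aggregation idea, per-frame features can be pooled into one clip-level feature with learned attention weights; the scoring vector and shapes are illustrative assumptions.

```python
import numpy as np

def temporal_aggregate(frame_feats, w):
    """frame_feats: (T, D) per-frame features; w: (D,) learnable scoring vector."""
    scores = frame_feats @ w                         # (T,) one scalar per frame
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                      # softmax attention over frames
    return alpha @ frame_feats                       # (D,) clip-level feature
```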
## VIII 3D Convolutional Neural Networks and Hybrid Architectures
### _3D CNN Methodologies_
Many derived 3D convolutional neural networks achieve outstanding spatial and temporal feature extraction through double-stream or multi-stream architectures. Below we collect various convolutional network designs, with both hybrid and single-stream architectures, applied to behavior classification tasks.
3D CNNs with adaptive temporal feature resolution specifically target the temporal dimension by applying bounding-box object detection (ROI, regions of interest) and convolving over pixel-wise video data to extract temporal features [178]; similarity is then used to map the temporal feature map into a low-dimensional space. In our reading, the ROI methodology does not by itself improve recognition accuracy but reduces the detected area, which may lower the computation cost of behavior recognition; this remains to be verified in future work. Summarizing the frameworks of the past few years, hybrid models are more popular than single architectures for human action recognition; examples include C3D embedded with RNNs for movement recognition on time-sequential tasks. Most of these frameworks combine multiple components to improve classification accuracy. The basic spatio-temporal convolution underlying these models is sketched below.
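Below is a minimal PyTorch sketch of a 3D convolution over a video clip; the clip shape, kernel, and channel sizes are illustrative, not those of any specific cited network.

```python
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 16, 112, 112)    # (batch, RGB, frames, height, width)
conv3d = nn.Conv3d(in_channels=3, out_channels=64,
                   kernel_size=(3, 7, 7),           # 3 frames x 7 x 7 pixels
                   stride=(1, 2, 2), padding=(1, 3, 3))
feat = conv3d(clip)                        # (1, 64, 16, 56, 56): temporal dim kept
```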
In [179], a 3D CNN classifies two types of human behaviors, violent and non-violent, over a series of frames in surveillance videos. The 3D CNN serves as the feature-extraction stage, and an attention module is applied to the extracted features from both spatial and temporal perspectives. Targeting terrorism-related events, the method separates ordinary and violent behaviors according to the temporal sequence of frames, using the spatial and temporal ordering of the 3D convolutional network, whose structure is shown in Figure 6. The method is combined with a Generative Adversarial Network (GAN) to separate abnormal and typical behaviors under defined constraints.
Fig. 7: Temporal adaptive block of feature aggregation functions.
### _3D-CNN and Hybrid Behavior Recognition Tasks_
#### Viii-B1 Multiple-stream Hybrid Architecture in 3D-CNN
Several double-stream designs are worth discussing. From the related works we collected, the main categories are: (a) optical-flow stream plus RGB stream; (b) spatial and temporal streams; and (c) combinations of LSTM (time-dependency) streams, CNN streams, optical-flow streams, and soft-attention streams. Multi-modality recognition in particular requires multiple data formats, such as combinations of audio, text, and video, and fusing multiple modalities helps improve identification accuracy since the context of human behavior is critical under specific conditions. The works [160, 161, 162, 180] describe how modality data are fused for human behavior recognition: the stream structure can separate out time dependency, i.e., the change of individual body parts from \(T\) to \(T+\tau\). The double stream of [160] includes two branches, as shown in Figure 8. In terms of fusion strategy, methods can be separated into late, early, and hybrid fusion, especially for temporal and modality data in double-stream or hybrid-stream architectures. The first stream is a human-skeleton branch with convolutional neural networks followed by an LSTM for time-dependency analysis; the second stream is composed of multiple branches covering views from different angles, each providing a different perspective and features. Finally, the outputs of the components are combined to compute the final prediction; early and late fusion are contrasted in the sketch below.
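The sketch below contrasts early and late fusion of two stream outputs (e.g., an RGB branch and a skeleton branch) in minimal PyTorch form; the feature sizes, class count, and equal score weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

rgb_feat = torch.randn(1, 512)         # output of the RGB branch
skel_feat = torch.randn(1, 256)        # output of the skeleton branch

# Early fusion: concatenate features, then classify once.
early_head = nn.Linear(512 + 256, 60)
early_logits = early_head(torch.cat([rgb_feat, skel_feat], dim=1))

# Late fusion: classify each stream separately, then average the scores.
rgb_head, skel_head = nn.Linear(512, 60), nn.Linear(256, 60)
late_logits = 0.5 * (rgb_head(rgb_feat) + skel_head(skel_feat))
```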
Besides 3D convolutional neural networks, several hybrid architectures have been applied to behavior recognition, as surveyed in [181]. That work discusses how double-stream architectures are used for spatial and temporal behavior recognition, as well as behavior prediction with multi-modality slow-fast models using multi-stream structures. Hybrid architectures also appear in generative adversarial networks (GANs) for group activity recognition and in attention-based LSTM models [160, 182]. A GAN is constructed from a generator and a discriminator: the pipeline first extracts features and then produces recognition scores through both components, and GANs combined with graph-based networks can recognize group activity via transfer learning or generative modeling. Recognition tasks are thus often accomplished by combining LSTMs with related models across multiple streams and levels, generally following the steps of detecting bounding boxes, masking objects via image segmentation, and performing the final classification [183]. The works above use a correlation matrix to measure classification performance. In summary, hybrid architectures extract different features through individual branches and then produce the classification score through fully connected layers.
Multiple streams can also correspond to multi-channel processing of images, such as RGB, optical flow, and skeleton data. The human-robot interaction work [184] detects behaviors with deep neural networks, using ResNet-101 and R3D-18 as backbones to capture both spatial and temporal elements from RGB and skeleton behavior patterns; its cross-subject and cross-view evaluations show improved recognition performance. The work is limited, however, in that it uses only 2D skeletons as input sequences with standard methods to produce classification probabilities. Similarly, a double-stream attention-based LSTM is proposed in [185], with both spatial and temporal attention awareness for behavior recognition based on features extracted from optical-flow images.
#### Viii-B2 Attention-Aware Hybrid Architecture in 3D-CNN
Attention-based hybrid end-to-end deep learning methodologies have also become standard in recent years. Attention-based and attention-aware methods are well established in other tasks, and much research has verified that they are effective for human action recognition. Our review therefore covers works such as [186, 187], which augment video data with four types of video streams plus modality data, using a 3D ResNet-50 backbone for feature extraction; the prediction outputs are fused and passed through activation functions to simplify the results. They use a motion-guided attention model combining two motion networks and reach better recognition results than competing methods. Since modality data such as video and audio can be used for model training, emotions and expressions, gestures, and gaits can also be extracted with deep attention-based networks. The work [188] builds dual-stream models covering human action and visual attention, trained on the UCF101 dataset; the authors construct their own backbone network for feature extraction and apply visual attention to behavior types, specifically facial expressions and gestures, using an LSTM network for long-term temporal-dependency analysis. They consider spatial and temporal attention mechanisms with learnable parameters: the idea is to output feature tensors through attention networks while swapping in various backbones, which achieved higher recognition performance on gesture and emotion recognition based on human body parts.

Fig. 8: Double stream structure in spatial and temporal recognition.
Finally, we find that attention-based methodologies, especially soft attention, are effective for driver behavior recognition and correction. In [189], a soft-attention behavior recognition method is proposed to separate background and foreground in video data; the attended results are fed into a pre-trained VGG backbone to extract human features from multiple views (front and side). The attention mechanism is also used to retain the generated driver-behavior segments from the original frames by blurring the backgrounds. Hybrid models, with or without pre-trained backbones, require multiple streams working together to produce reliable results, unlike single-stream architectures.
## IX Remaining Obstacles in Action Recognition
### _Obstacles Solving in Spatial and Temporal_
One open problem in current behavior identification is distinguishing confusable poses, such as climbing a pole versus doing pull-ups [190]. The same problem appears in 3D skeleton pose estimation; in skeleton-based tasks, as one modality, optical flow also struggles to separate distinct behaviors among highly similar samples. The problem was first identified in facial expression recognition and later in behavior recognition, and fusing space and time is the usual remedy: since a consistent behavior is characterized by the location changes of body parts, long-range temporal analysis relies heavily on the extracted temporal elements, whose treatment we discussed in the previous section. We also find that few works exploit the background or context as features; most consider only human-motion pixels rather than background pixels, which makes optical-flow methods a good fit for temporal tasks.
End-to-end deep learning [190, 12] is currently the most popular framework for behavior recognition, but few works consider other architectures or software structures for spatio-temporal tasks, which might improve training efficiency and reduce computation cost and time. The vision transformer has changed this situation from both the spatial and the temporal perspective, and encoder-decoder structures can likewise accomplish feature extraction together with the downstream classification task.
With the popularity of video-based human behavior recognition, transformer architectures are also widely applied to these tasks. Rather than using deep convolutional networks as the feature-extraction backbone, transformer-based methods identify temporal information, such as frame-by-frame extraction and time-sensitive cues, through their attention mechanism; examples include the Swin transformer and the vision transformer, especially for video-based tasks. The purpose of applying transformers to vision is to address modality and temporal issues, as in MM-ViT [191]. To this end, the Vision Transformer (ViT), the Video Vision Transformer (ViViT) [192], and the Video Swin transformer are proposed to analyze 3D/4D information rather than 2D inputs alone, mainly by converting visual information into embeddings or tokens; however, applications of Swin transformers to behavior recognition remain limited. The usual solution is to stack multiple Swin transformer or ViT blocks for feature extraction along both the temporal and the modality dimension.
### _Prediction of Futures and Usages_
The future of deep neural networks will focus on multi-modality, covering images, videos, and sentences. In our investigation, deep neural networks are used not only for human behavior recognition but also in many other disciplines, such as speech recognition; application scenarios are shifting from video recognition alone toward multiple types of modality recognition. Deep networks involve many layers for feature extraction, and pre-trained resources such as ImageNet or ConceptNet support concept generation on top of the extracted features. With the wide deployment of IoT devices and edge computing, deep neural networks will be applied in a broader range of situations, optimizing computation resources and improving feature extraction for higher identification accuracy, potentially even with automated model training. Many such perspectives, in both application and theory, remain to be explored.
## X Conclusion And Further Work
Firstly, our work surveyed recent deep learning methodologies for human action recognition from both spatial and temporal perspectives. Over the past several years, data types including RGB, RGB-D, optical flow, grayscale, and skeleton have all been exploited for video content analysis. Beyond spatial information alone, video provides context for human action recognition, particularly for abnormality detection and analysis where actions depend on the environment; temporal information supplies the behavioral context that single frames lack. We then summarized existing work from both mathematical and technical viewpoints for video behavior recognition, covering the multiple architectures and data modalities used in deep neural networks, both individual and hybrid. Finally, our review provided an overview of existing networks and datasets and of the future of this area.
## Acknowledgments
Ideas and reconstruction of the article, article writing, and drafting were contributed by Prof. Y. Yang and Prof. Y. Li. The research work was financially and physically supported by the Lab of the School of Information Science and Engineering, Shandong University (Qingdao), China. |
2302.08102 | Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech
Recognition | Visual Speech Recognition (VSR) aims to infer speech into text depending on
lip movements alone. As it focuses on visual information to model the speech,
its performance is inherently sensitive to personal lip appearances and
movements, and this makes the VSR models show degraded performance when they
are applied to unseen speakers. In this paper, to remedy the performance
degradation of the VSR model on unseen speakers, we propose prompt tuning
methods of Deep Neural Networks (DNNs) for speaker-adaptive VSR. Specifically,
motivated by recent advances in Natural Language Processing (NLP), we finetune
prompts on adaptation data of target speakers instead of modifying the
pre-trained model parameters. Different from the previous prompt tuning methods
mainly limited to Transformer variant architecture, we explore different types
of prompts, the addition, the padding, and the concatenation form prompts that
can be applied to the VSR model which is composed of CNN and Transformer in
general. With the proposed prompt tuning, we show that the performance of the
pre-trained VSR model on unseen speakers can be largely improved by using a
small amount of adaptation data (e.g., less than 5 minutes), even if the
pre-trained model is already developed with large speaker variations. Moreover,
by analyzing the performance and parameters of different types of prompts, we
investigate when the prompt tuning is preferred over the finetuning methods.
The effectiveness of the proposed method is evaluated on both word- and
sentence-level VSR databases, LRW-ID and GRID. | Minsu Kim, Hyung-Il Kim, Yong Man Ro | 2023-02-16T06:01:31Z | http://arxiv.org/abs/2302.08102v1 | # Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition
###### Abstract
Visual Speech Recognition (VSR) aims to infer speech into text depending on lip movements alone. As it focuses on visual information to model the speech, its performance is inherently sensitive to personal lip appearances and movements, and this makes the VSR models show degraded performance when they are applied to unseen speakers. In this paper, to remedy the performance degradation of the VSR model on unseen speakers, we propose prompt tuning methods of Deep Neural Networks (DNNs) for speaker-adaptive VSR. Specifically, motivated by recent advances in Natural Language Processing (NLP), we finetune prompts on adaptation data of target speakers instead of modifying the pre-trained model parameters. Different from the previous prompt tuning methods mainly limited to Transformer variant architecture, we explore different types of prompts, the addition, the padding, and the concatenation form prompts that can be applied to the VSR model which is composed of CNN and Transformer in general. With the proposed prompt tuning, we show that the performance of the pre-trained VSR model on unseen speakers can be largely improved by using a small amount of adaptation data (_e.g._, less than 5 minutes), even if the pre-trained model is already developed with large speaker variations. Moreover, by analyzing the performance and parameters of different types of prompts, we investigate when the prompt tuning is preferred over the finetuning methods. The effectiveness of the proposed method is evaluated on both word- and sentence-level VSR databases, LRW-ID and GRID.
Prompt tuning, visual speech recognition, lip reading, speaker adaptation, learnable padding, CNN prompting.
## 1 Introduction
Visual Speech Recognition (VSR) technology [1, 2, 3] is developed to recognize speech as text solely from visual information (_e.g._, lip movements) in an input talking-face video. It can be regarded as a counterpart of Audio-based automatic Speech Recognition (ASR), which uses speech sound as input, and is also called lip reading. Even though the technology has drawn wide attention [4, 5, 6, 7, 8, 9, 10, 11] with attractive applications [12, 13], such as recognizing speech without sound input, and is thus robust to auditory noise and usable in environments that must stay quiet, VSR technology is still rarely employed in real-world applications. This is because the VSR system is inherently sensitive to personal lip appearances and movements [14], and it shows degraded performance when applied to unseen speakers even if it was developed on a large-scale database with large speaker variations [15, 16, 17, 18].
In order to mitigate this problem, a speaker-adaptive VSR system can be developed in analogy to speaker adaptation technologies developed for ASR systems [19, 20, 21, 22, 23]. Speaker adaptation narrows the gap between training and testing data distributions by fitting a trained model to unseen test speakers to improve performance at test time. The method attempts to optimize speech recognition performance by transforming the pre-trained model to operate well on one particular speaker, or by modifying the encoded features of the target speaker to match the pre-trained model, using a small amount of adaptation data. The most intuitive way is finetuning the pre-trained model on the adaptation data of the target speaker. However, it is not feasible to store and handle a separate model per user, since the total number of parameters grows as the size of one model multiplied by the number of speakers. In addition, the method tunes a large number of parameters and therefore requires a relatively large amount of adaptation data to achieve optimal performance for the target speaker. Therefore, an efficient way to adapt the pre-trained VSR model to diverse speakers is necessary, using a small number of parameters and little adaptation data.
Recently, input transformation [24, 25, 26, 27, 28, 29] and prompting [30, 31, 32, 33, 34, 35, 36, 37, 38] have shown that input-level modifications can adapt pre-trained models to different tasks or data distributions without modifying the learned weight parameters. By introducing learnable external parameters at the input of Deep Neural Networks (DNNs), a pre-trained model can perform tasks different from training [24, 39] or be adapted to a shifted data distribution [40]. Inspired by these advances, we propose a novel speaker-adaptive VSR framework based on prompt tuning. Specifically, we propose three different types of prompts, addition form, padding form, and concatenation form, which can be jointly utilized for VSR models. Different from previous prompt tuning methods, which are mainly developed for Transformer-variant architectures [32, 33, 34, 35, 39], the proposed method can also be employed for CNNs, from the input level to the intermediate layer level.
Moreover, distinct from previous speaker adaptation methods that modify the extracted feature by introducing additional layers [41, 42, 43, 44], the proposed prompt tuning introduces no additional adaptation network and requires no finetuning of the pre-trained model. This simplifies the adaptation steps, so direct adaptation from a pre-trained model is possible, whereas previous works need to train an adaptation network after attaching it to the pre-trained model. Finally, the prompt has far fewer parameters than the pre-trained model, so it can be stored and easily handled per user (_i.e._, per speaker).
Specifically, we propose three different prompts: i) addition form, ii) padding form, and iii) concatenation form. The addition form prompt is an input-level prompt for the CNN with the same shape as an input video frame; it is added to all input video frames consistently to transform the encoded visual feature so that it operates well for the target speaker. The padding form prompt is an intermediate feature-level prompt for the CNN; it replaces the padding of the pre-trained CNN, which usually takes zero, constant, or reflect values. As the padding of a CNN is also convolved with the convolution kernels, the padding prompt can adapt the encoded visual feature at each intermediate CNN layer to the target speaker. Finally, the concatenation form prompt is similar to prompting in NLP [32, 33, 34]; it is concatenated to the input of the Transformer along the temporal dimension. We examine the effect of combinations of the three prompts and show that just tuning the prompts improves the performance of the pre-trained VSR model on unseen speakers. Moreover, by analyzing the performance and the number of parameters required per speaker, we show that the proposed method is preferred over finetuning when only a small amount of adaptation data is available. We extensively validate the effectiveness of the proposed method on both word- and sentence-level VSR databases, LRW [45] and GRID [46]. In particular, since there is no speaker-annotated VSR dataset obtained in the wild, we annotate speaker information for the LRW [45] dataset and utilize it to confirm the effectiveness of the proposed method in a real-world setting.
The major contributions of this paper are as follows.
* To the best of our knowledge, this is the first work to explore the effectiveness of prompt tuning in VSR. We show that just finetuning prompts can largely improve the performance of VSR model for unseen speakers.
* We propose and analyze different types of prompts, the addition prompt, the padding prompt, and the concatenation prompt, that can be jointly utilized for general VSR models which are composed of both CNN and Transformer.
* We evaluate the effectiveness of the proposed method with comprehensive experiments on different adaptation data sizes and different prompt types, and we show that the proposed method can even outperform the finetuning method by just introducing about 0.5% additional parameters compared to the full model.
## 2 Related Work
### _Visual Speech Recognition_
Visual Speech Recognition (VSR), also known as lip reading, is the task of predicting speech as text by watching a silent talking-face video [47, 48, 49]. Due to insufficient information and the homophene problem, it is regarded as challenging. With the great development of deep learning [50, 51], the performance of VSR has improved significantly [11, 16, 52].
In word-level VSR, [53] proposed a model architecture combining a 3D CNN and a 2D CNN for the visual feature encoder and a Bi-LSTM as the temporal encoder. Following studies [54, 55] proposed to utilize dynamic information with two-stream CNNs [56, 57, 58] that model speech from RGB frames and optical flow. [59] proposed to model the temporal dynamics with temporal convolutions through the Multi-Scale Temporal Convolutional Network (MS-TCN), which has multi-scale temporal receptive fields. [60] proposed a cross-modal memory network to complement the insufficient information of the visual-only model. In sentence-level VSR, [14] first proposed an end-to-end VSR framework using the Connectionist Temporal Classification (CTC) training objective [61] and evaluated it on the GRID [46] dataset. [62] proposed a dataset, LRS2, constructed in the wild, together with a Sequence-to-Sequence (Seq2Seq) architecture [63] whose decoder can model language. [6] significantly improved VSR performance by utilizing a Transformer [64] for the temporal encoder (_i.e._, back-end) and the decoder. Recently, [52] improved the temporal encoder with Conformer [65], which has shown improved performance in speech recognition with its local convolution and global self-attention mechanisms. [66] focused on improving the visual front-end architecture by introducing attention so that the model can extract the most salient regions from the talking-face video.
Some studies have improved VSR performance by focusing on training mechanisms. [67, 68, 69, 70, 10] proposed to use knowledge distillation [71] so the student VSR model can learn from a teacher ASR model or a more powerful VSR model. [8, 9] proposed methods that learn visual-to-audio mapping functions and bring in audio representations through the learned mapping using cross-modal memory networks. [7, 11, 72] proposed to pre-train the backbone models in a self-supervised way that can exploit large-scale audio-visual databases, and demonstrated its effectiveness with powerful VSR performance.
Even with this progress, speaker-adaptive VSR has not been well studied. Since VSR relies only on visual information, especially lip movements, it is inherently sensitive to personal lip appearances and movements, so a pre-trained VSR model shows degraded performance when applied to unseen speakers that do not appear in the training dataset [14, 18]. In this paper, we mitigate the speaker-dependency problem of VSR by developing a speaker-adaptive VSR method. Specifically, we introduce prompt tuning methods to VSR, so we can adapt a pre-trained VSR model to the target speaker without modifying the pre-trained model parameters, tuning only the proposed prompts.
### _Speaker Adaptation_
Speaker adaptation [43] has mainly been developed for ASR. [73] proposed to finetune different parts of the network to adapt the model to target speakers. However, as finetuning methods modify a large number of parameters, they can suffer from overfitting when the adaptation data is insufficient. To mitigate the overfitting problem, [74] proposed a regularization method for speaker adaptation. Some works [75, 76] proposed augmenting the pre-trained speech recognition model with additional speaker-dependent layers. [77] proposed to add a speaker-dependent vector, adapted to the test speaker, to every pre-trained hidden layer. [41, 42, 44] proposed to use speaker codes, which are additional speaker-dependent inputs: they first train an adaptation network attached to the pre-trained model on the training set, and then optimize the speaker codes for each target speaker on the adaptation set. These methods therefore require speaker annotations for both training and adaptation data. Recently, meta-learning [22] and speech-synthesis [23] based speaker adaptation methods have been explored.
A few works have handled speaker adaptation for VSR. [78] proposed to bring traditional ASR speaker adaptation methods [20, 79] into VSR. [80] proposed to adopt the concept of the i-vector [81] in VSR. [18] proposed to use user-dependent padding for each speaker. In this paper, we propose a novel speaker-adaptive VSR method based on prompt tuning. Different from the previous methods, the proposed method needs no additional network and has simpler adaptation steps. Moreover, it requires speaker information for the target speaker only.
### _Input Transformation_
Recently, input transformation methods have attracted considerable attention [25, 26, 27, 28] with their potential to reprogram a pre-trained model to perform tasks different from those it was trained on. By surrounding the input image with noise, [24] showed that a model originally trained to classify ImageNet [82] classes can perform hand-written digit classification. Recently, [29] showed that input transformation can provide a defense against adversarial perturbations. In this paper, we utilize the input transformation concept to develop a speaker-adaptive VSR model: we add learnable parameters to the input video frames to transform the encoded visual feature so that it represents the unseen target speaker well.
### _Prompt Tuning_
Prompting was first introduced by [30], prepending a text instruction to the input to make a pre-trained language model understand a given task. The prepended text, the prompt, can change the performance of a pre-trained language model [83], and previous works [84, 85, 86] tried to find better prompt formulations. Recent works [31, 32, 33, 34, 38, 39] proposed learning the prompt by gradient descent, which is named prompt tuning. Prompt tuning was developed mainly for Transformer-based language models; it uses far fewer parameters than the full model but can achieve strong performance. Beyond language models, prompts have begun to be utilized in visual applications [35, 87]; however, visual prompts have been explored only for the Vision Transformer (ViT) architecture [88] and the input of CNNs [87].
In this paper, we utilize prompt tuning to adapt a pre-trained VSR model to the target unseen speaker. Different from previous works, we propose three different types of prompts, addition form, padding form, and concatenation form, which can be jointly utilized for DNNs composed of CNN and Transformer-variant architectures. In particular, we introduce intermediate feature-level prompts for the CNN through the padding form prompt and show that its effectiveness grows with the size of the CNN.
Fig. 1: Illustration of the proposed prompt tuning for speaker-adaptive VSR. (a) The general architecture of VSR models. (b) Three different types of prompts that can be applied to VSR models: i) addition form, ii) padding form, and iii) concatenation form. They can be jointly utilized to adapt the pre-trained VSR model on the unseen target speaker. We only update the prompts while the pre-trained VSR model is kept frozen.
## 3 Prompt Tuning for Speaker-adaptive Visual Speech Recognition
As shown in Fig. 1(a), VSR models usually consist of a visual front-end \(\mathcal{F}\) which has CNN-based architecture, a back-end \(\mathcal{B}\) which has RNN- or Transformer-based architecture, and predictor \(\mathcal{P}\) that predicts the transcription. We denote all the learnable weight parameters of the VSR model including the visual front-end, the back-end, and the predictor as \(\theta\). With a large-scale training dataset \(\mathcal{T}=\{(\mathbf{X}^{t},\mathbf{Y}^{t})\}=\{(x_{i}^{t},y_{i}^{t})\}_{i=1 }^{N_{t}}\) composed of \(N_{t}\) pairs of input video \(x_{i}^{t}\) and ground-truth text \(y_{i}^{t}\), we can develop a VSR model having optimized parameter \(\theta^{*}\), as follows,
\[\theta^{*}=\operatorname*{argmin}_{\theta}\mathcal{L}(\mathbf{Y}^{t},\hat{ \mathbf{Y}}), \tag{1}\]
\[\hat{\mathbf{Y}}=(\mathcal{P}\circ\mathcal{B}\circ\mathcal{F})_{\theta}( \mathbf{X}^{t}), \tag{2}\]
where \(\mathcal{L}\) is the objective function of VSR such as Connectionist Temporal Classification (CTC) [61] and Cross Entropy, and \(\circ\) denotes function composition.
Our objective is to maximize the performance of the pre-trained VSR model, \(\theta^{*}\), on unseen speakers that did not appear in the training dataset \(\mathcal{T}\). To this end, we utilize a small number of adaptation data \(\mathcal{A}_{s}=\{(\mathbf{X}^{a_{s}},\mathbf{Y}^{a_{s}})\}=\{(x_{i}^{a_{s}}, y_{i}^{a_{s}})\}_{i=1}^{N_{a_{s}}}\) to adapt the pre-trained VSR model on the target speaker \(s\). Here, we assume the amount of adaptation data is much smaller (_i.e._, 1\(\sim\)5 minutes) than the data used during training, \(N_{a_{s}}\ll N_{t}\), which is the feasible data amount to obtain for each speaker in the real-world scenario. Specifically, we introduce prompts, \(p\), and only optimize them to adapt the pre-trained model on unseen speakers instead of updating the learned weight parameters of the model. The proposed prompts have three different forms, addition, padding, and concatenation, which are illustrated in Fig. 1(b). In the following subsections, we will describe details of how each prompt can be incorporated for developing speaker-adaptive VSR systems.
### _Add Prompt: Addition to the input of CNN_
By adding perturbations to the input of a CNN, we can reprogram a pre-trained model to perform tasks different from those it was originally trained for. Motivated by these prior works [24, 25, 26, 27, 28, 29], we adapt a pre-trained VSR model to perform well on unseen speakers by adding a prompt to the input video. Therefore, only the prompt added to the input video is optimized, using a small amount of adaptation data from the target speaker. To this end, the prompt has the same size as an input video frame and can be written as \(p\in\mathbb{R}^{H\times W\times C}\), where the input video with \(T\) frames is \(x_{i}^{a_{s}}\in\mathbb{R}^{T\times H\times W\times C}\). The prompt \(p\) is added to all \(T\) frames consistently and optimized to transform the input video of the target speaker so that it operates well with the pre-trained VSR model. The optimization of the addition form prompt for target speaker \(s\) can be written as follows,
\[p_{s}^{*}=\operatorname*{argmin}_{p_{s}}\mathcal{L}(\mathbf{Y}^{a_{s}},\hat{ \mathbf{Y}}), \tag{3}\]
\[\hat{\mathbf{Y}}=(\mathcal{P}\circ\mathcal{B}\circ\mathcal{F})_{\theta^{*}}( \mathbf{X}^{a_{s}}+p_{s}), \tag{4}\]
where the input of the pre-trained VSR model with parameters \(\theta^{*}\) is now the video with the prompt \(p_{s}\) added, and the optimization is performed over the prompt while the model parameters \(\theta^{*}\) are kept frozen. Please note that, different from adversarial examples [89], the addition form prompt has no constraints on its value, so it can take values outside the valid image range, similar to [87]. The addition form prompt is illustrated in Fig. 2(a). Since the learned weight parameters are maintained, we naturally obtain regularization effects [74, 90], and there is a low risk of overfitting the model; a minimal tuning loop is sketched below.
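A minimal PyTorch sketch of the addition-prompt optimization in Eqs. (3)-(4) follows; `vsr_model`, `criterion`, and `adaptation_loader` are placeholders for the frozen pre-trained network, its training objective, and the adaptation data, and the channel-last frame layout is an assumption for illustration.

```python
import torch

# Frozen pre-trained model; only the prompt receives gradients.
for p in vsr_model.parameters():
    p.requires_grad = False

prompt = torch.zeros(1, 1, 64, 128, 3, requires_grad=True)  # one frame-shaped tensor
optimizer = torch.optim.AdamW([prompt], lr=0.01)

for video, target in adaptation_loader:        # video: (B, T, 64, 128, 3)
    logits = vsr_model(video + prompt)         # prompt broadcast over batch and time
    loss = criterion(logits, target)           # CTC or cross-entropy objective
    optimizer.zero_grad()
    loss.backward()                            # gradients reach only `prompt`
    optimizer.step()
```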
### _Replace Prompt: Padding of CNN_
Fig. 2: Detailed illustrations of different types of prompt methods. (a) Addition form prompt is added to input video frames. (b) Padding form prompt replaces the original padding region in the CNN. (c) Concatenation form prompt is concatenated to the input of the Transformer-based module in the temporal dimension. Only the prompts (_i.e._, green in the figure) are tuned during adaptation, while the weight parameters of the pre-trained model are kept frozen.

Since the addition form prompt is applied at the input level, it can have insufficient representation power to transform the entire pre-trained model, especially if the model is large. The same problem appears in prompting for NLP, where previous work [34] inserts prompts into all layers of the Transformer to enhance their representation power. Taking inspiration from this [34], we make the prompt affect the intermediate layers of the CNN by utilizing the padding region of each convolution layer. Padding is usually employed in a CNN to control the size of the output feature map, with zero, constant, or reflect values filling the padding region. As the padding is also convolved with the learned convolution kernels, we replace these padding regions with prompts that affect the encoded visual feature and adapt the pre-trained VSR model to the unseen target speaker. Specifically, the prompts have the same size as the pre-defined padding of each convolution layer. For example, the prompt for target speaker \(s\) at the \(l\)-th convolution layer can be written as \(p_{s}^{l}\in\mathbb{R}^{S^{l}\times C^{l}}\), where \(S^{l}=H^{l}(L^{l}+R^{l})+W^{l}(U^{l}+B^{l})+(L^{l}+R^{l})(U^{l}+B^{l})\) is the size of the padding region, \(L^{l}\), \(R^{l}\), \(B^{l}\), \(U^{l}\) are the left, right, bottom, and top padding sizes, and \(H^{l}\), \(W^{l}\), \(C^{l}\) are the height, width, and channel size of the \(l\)-th feature map. Then, prompt tuning for the padding form can be written as follows,
\[\mathbf{P}_{s}^{*}=\operatorname*{argmin}_{\mathbf{P}_{s}}\mathcal{L}( \mathbf{Y}^{a_{s}},\hat{\mathbf{Y}}), \tag{5}\]
\[\hat{\mathbf{Y}}=(\mathcal{P}\circ\mathcal{B}\circ\mathcal{F})_{\theta^{*}}( \mathbf{X}^{a_{s}},\mathcal{R}(\mathbf{P}_{s})), \tag{6}\]
where \(\mathbf{P}_{s}=\{p_{s}^{l}\}_{1}^{N_{l}}\) is the set of prompts for the \(N_{l}\) convolution layers and \(\mathcal{R}(\cdot)\) denotes replacing the padding of the visual front-end \(\mathcal{F}\) with the prompts. Applying the padding form prompt is illustrated in Fig. 2(b). Even though the padding is applied to the outer area of the feature map, it can affect the inner area, and even the entire feature map when the CNN layers are deep enough, through the enlarged receptive fields; one possible realization is sketched below.
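The sketch below shows one way to realize the padding prompt for a single frozen 3\(\times\)3 convolution with padding 1: zero padding is applied and its border is then overwritten with learnable values before a padding-free convolution with the frozen kernel. This wrapper construction is an illustrative assumption, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPadConv2d(nn.Module):
    """Wraps a frozen conv layer; learnable values replace its zero padding."""

    def __init__(self, conv, h, w):            # (h, w): incoming feature-map size
        super().__init__()
        self.conv = conv                        # pre-trained 3x3 conv with padding 1
        c = conv.in_channels
        self.prompt = nn.Parameter(torch.zeros(1, c, h + 2, w + 2))
        mask = torch.ones(1, 1, h + 2, w + 2)
        mask[:, :, 1:-1, 1:-1] = 0              # 1 on the border, 0 inside
        self.register_buffer("mask", mask)

    def forward(self, x):                       # x: (B, C, h, w)
        x = F.pad(x, (1, 1, 1, 1))              # zero pad, then overwrite the border
        x = x + self.prompt * self.mask
        return F.conv2d(x, self.conv.weight, self.conv.bias,
                        stride=self.conv.stride, padding=0)
```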
### _Concatenate Prompt: Concatenation to Transformer input_
Recently, Transformer-based networks [64, 65] have proven more effective at modeling temporal data than RNN-based networks [91, 92, 93]. Recently developed state-of-the-art VSR models [94, 11, 52, 6] therefore also utilize Transformer-based back-end modules, which makes it possible to apply the prompting methods developed in NLP [33, 39], which are mainly designed for Transformer architectures. Specifically, we concatenate the prompt to the input of the back-end \(\mathcal{B}\), the feature encoded by the visual front-end \(\mathcal{F}\), along the temporal dimension. The prompt has the same dimension as the encoded feature and can be denoted \(p_{s}\in\mathbb{R}^{N_{p}\times D}\), where \(N_{p}\) is the length of the prompt and \(D\) is the dimension of the encoded feature. Through the self-attention layers in the back-end, the prompt can then affect the features encoded by the back-end. The optimization of the concatenation form prompt can be written as follows,
\[p_{s}^{*}=\operatorname*{argmin}_{p_{s}}\mathcal{L}(\mathbf{Y}^{a_{s}},\hat{ \mathbf{Y}}), \tag{7}\]
\[\hat{\mathbf{Y}}=(\mathcal{P}\circ\mathcal{B})_{\theta^{*}}(\mathcal{F}( \mathbf{X}^{a_{s}})_{\theta^{*}}\oplus p_{s}), \tag{8}\]
where \(\oplus\) represents concatenation in the temporal dimension. The concatenation form prompt is illustrated in Fig. 2(c).
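A minimal sketch of Eqs. (7)-(8) follows; `visual_frontend`, `backend`, and `predictor` are placeholders for the frozen pre-trained modules, and the hidden size of 256 is an illustrative choice.

```python
import torch

N_p, D = 5, 256                                   # prompt length and feature size
cat_prompt = torch.nn.Parameter(torch.zeros(1, N_p, D))

def forward_with_prompt(video):
    feats = visual_frontend(video)                # (B, T, D), frozen front-end
    B = feats.size(0)
    feats = torch.cat([cat_prompt.expand(B, -1, -1), feats], dim=1)  # (B, N_p+T, D)
    return predictor(backend(feats))              # frozen back-end and predictor
```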
The three types of prompts, addition form, padding form, and concatenation form, can be jointly utilized for a pre-trained VSR model to adapt the model to an unseen target speaker. We evaluate the effectiveness of each prompt method and the combination of them through extensive experiments in the following sections.
## 4 Experimental Setup
For the experiments, we utilize both word- and sentence-level audio-visual datasets, LRW [45] and GRID [46]. We
evaluate the proposed method by adapting a pre-trained model on unseen speakers using different amounts of adaptation data. The following subsections describe the data settings and implementation details for each dataset.

TABLE I: Data information of the 20 test speakers of LRW-ID.
### _Dataset_
#### 4.1.1 Grid
GRID [46] is a sentence-level audio-visual corpus following a fixed grammar. It consists of videos uttered by 33 speakers; each speaker has about 1,000 videos, and each video is 3 seconds long. We follow the unseen-speaker split of [14], so speakers 1, 2, 20, and 22 are used for testing while the remainder are used for training. The data of the test speakers are split in half to build the adaptation and test sets. Since each video is 3 seconds long, every 20 videos from the adaptation set compose 1 minute of adaptation data. For pre-processing and data augmentation, the lip ROI is cropped and resized to 64\(\times\)128 similar to [14], and random horizontal flipping, time masking, and random spatial-region erasing are employed, similar to [9]. For the performance metric, the Word Error Rate (WER, %), which measures the prediction error against the ground truth, is utilized; lower WER values indicate better VSR performance.
#### 4.1.2 LRW-ID
LRW [45] is a word-level audio-visual corpus captured from television programs; it therefore has large pose and illumination variations, close to a real-world setting. It consists of 500 word classes, each with 1,000 training videos. Since the dataset does not contain speaker information, we annotate speakers using face recognition [95, 96] technology with a pre-trained model [97]. The annotated speakers number 17,580, which is very large compared to the GRID dataset. To build the speaker-adaptation setting, we hold out 20 speakers for testing while the remainder are used for training, and we name the modified split LRW-ID. The data information of the 20 test speakers of LRW-ID is shown in Table I, where \(\mathsf{S\#}\) denotes the speaker index and the number in brackets the speaker ID. Compared to GRID, each speaker has a different amount of adaptation data, and the adaptation set may not cover all word classes in the test set, which is closer to a real-world scenario. Since each video is 1.16 seconds long, 52, 155, and 259 videos compose 1, 3, and 5 minutes of adaptation data. The lip ROI is cropped, resized to 112\(\times\)112, and converted to grayscale. The same data augmentation is applied as for GRID. For the performance metric, word accuracy (ACC, %) is employed, so larger ACC values indicate better VSR performance.
### _Implementation details_
The basic architecture of the VSR models is as illustrated in Fig. 1(a). For the GRID dataset, we modify the architecture of [14]. The visual front-end is composed of three 3D convolutions and two 2D convolutions, the back-end is a 4-layer Transformer [64] with a hidden dimension of 256, and the predictor is a linear layer (Table II). For training, we use the word-level CTC [61] loss function. The addition prompt for GRID has size 64\(\times\)128\(\times\)3, the same as an input frame, and is added to all input frames consistently. To insert the padding prompt, all paddings in the 5 convolution layers are changed from zero padding to padding prompts. The detailed sizes of the convolution layers and prompts are shown in Table II, where the padding prompt size is denoted [left/right padding size, top/bottom padding size] \(\times\) channel size. The concatenation prompt is concatenated with the encoded visual feature along the time dimension before the back-end module; we use a prompt length of 5 (_i.e._, \(N_{p}=5\)).
For the LRW-ID dataset, we use the ResNet-18 architecture [98], whose first convolution layer is replaced by a 3D convolution following [5, 8]. For the back-end module,
**Input size:** 75 \(\times\) 64 \(\times\) 128 \(\times\) 3 (T \(\times\) H \(\times\) W \(\times\) C)

| **Layer** | **Filter size / number / stride** | **Padding Prompt** | **Output dimensions** |
| --- | --- | --- | --- |
| Conv 3D | 3 \(\times\) 5 \(\times\) 5 / 32 / [1, 2, 2] | [2, 2] \(\times\) 3 | 75\(\times\)32\(\times\)64\(\times\)32 |
| Maxpool | 2 \(\times\) 2 / - / [2, 2] | - | 75\(\times\)16\(\times\)32\(\times\)32 |
| Conv 3D | 3 \(\times\) 5 \(\times\) 5 / 64 / [1, 1] | [2, 2] \(\times\) 32 | 75\(\times\)8\(\times\)16\(\times\)64 |
| Maxpool | 2 \(\times\) 2 / - / [2, 2] | - | 75\(\times\)4\(\times\)8\(\times\)64 |
| Conv 3D | 3 \(\times\) 3 / 96 / [1, 1] | [1, 1] \(\times\) 64 | 75\(\times\)4\(\times\)8\(\times\)96 |
| Maxpool | 2 \(\times\) 2 / - / [2, 2] | - | 75\(\times\)2\(\times\)4\(\times\)96 |
| Conv 2D | 3 \(\times\) 3 / 32 / [2, 2] | [1, 1] \(\times\) 96 | 75\(\times\)2\(\times\)4\(\times\)32 |
| Conv 2D | 3 \(\times\) 3 / 64 / [2, 2] | [1, 1] \(\times\) 32 | 75\(\times\)1\(\times\)2\(\times\)64 |
| Flatten | - | - | 75\(\times\)128 |
| Linear | 128 \(\times\) 256 | - | 75\(\times\)256 |
| Transformer | 256 / 4 layers | - | 75\(\times\)256 |
| Linear | 256 \(\times\) Num\_class | - | 75\(\times\)Num\_class |

TABLE II: Network architecture for GRID dataset.
TABLE III: Network architecture for LRW-ID dataset.
we employ a 6-layer Transformer with a hidden size of 512, and a linear layer is used for the predictor (Table III). For the loss function, Cross Entropy over the 500 word classes is applied. For LRW-ID, the addition prompt has size 112\(\times\)112\(\times\)1, the padding prompt is inserted into all 17 convolution layers of the visual front-end as described in Table III, and the concatenation prompt, of length 5, is inserted before the back-end.
For evaluating the proposed prompt tuning method in speaker adaptation, we pre-train the VSR models on training sets whose subjects do not overlap with the testing sets. For pre-training on GRID, we use a batch size of 112 and a maximum learning rate of 0.008 with 5,000 warmup [64] steps; the training data consists of about 29,000 videos. For LRW-ID, we use a batch size of 400 and a maximum learning rate of 0.004 with 10,000 warmup steps; the training data consists of 480,378 videos. With the pre-trained model \(\theta^{*}\), we optimize only the prompts to adapt the model to unseen speakers while the model parameters are kept frozen. During adaptation, we use a learning rate of 0.01 for the addition and padding prompts and 0.1 for the concatenation prompt, with batch sizes of 112 and 55 for GRID and LRW-ID, respectively. For optimization, we use AdamW [99, 100] and TITAN RTX GPUs for both pre-training and adaptation, as sketched below. The number of parameters of each prompt method for adapting one target speaker is shown in Table IV; the percentage is the number of parameters relative to the full model. The prompts have very few parameters compared to the full model.
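The adaptation optimizer can be set up with per-prompt learning-rate groups as in the minimal sketch below; `vsr_model`, `add_prompt`, `pad_prompts`, and `cat_prompt` are placeholder names for the frozen model and the three prompt tensors.

```python
import torch

for p in vsr_model.parameters():
    p.requires_grad = False                  # the pre-trained model stays frozen

optimizer = torch.optim.AdamW([
    {"params": [add_prompt], "lr": 0.01},    # addition-form prompt
    {"params": pad_prompts,  "lr": 0.01},    # list of padding-form prompts
    {"params": [cat_prompt], "lr": 0.1},     # concatenation-form prompt
])
```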
### _Baselines for comparisons_
In order to validate the effectiveness of the proposed prompt tuning methods in speaker-adaptive VSR, we set up comparison methods covering previous speaker-adaptive methods and finetuning methods.
_Baseline_ is the pre-trained VSR model without any adaptation to the unseen speakers, so it gives the lower-bound performance. _Speaker-invariant_ [101] and _Speaker code_ [42] are speaker-invariant and speaker-adaptive methods developed for ASR models; we apply them directly to the VSR model to compare the effectiveness of the proposed method with previous speaker-invariant and speaker-adaptive methods. Specifically, for the speaker-invariant model, we attach a speaker identity classifier after the visual front-end. During training, the classifier is guided to predict the subject identity from the encoded visual feature while the visual front-end is guided to deceive the classifier; through this adversarial training [102], the visual front-end eventually becomes speaker-invariant by not encoding speaker identity into the visual feature. For the speaker-adaptive model, _Speaker code_, we additionally train the Adaptation Network and speaker code of [42] on the training dataset \(\mathcal{T}\) by attaching them to the pre-trained VSR model; after training, adaptation is performed by training only the speaker code on the adaptation dataset \(\mathcal{A}_{s}\) of target speaker \(s\). We use speaker-code dimensions of 128, 64, and 32 for the three MLP layers of the Adaptation Network for GRID, and 256, 128, and 64 for LRW-ID.
Moreover, to compare with finetuning methods [73], we set up three finetuning variants that differ in which parts of the pre-trained model are trainable. Firstly, _FineTune-C_ tunes only the last linear layer (_i.e._, the predictor) on the adaptation data, so it has the smallest number of parameters among the finetuning methods. _FineTune-B_ finetunes both the back-end and the predictor. Finally, _FineTune-F_ finetunes the whole pre-trained VSR model on the adaptation dataset, hence it requires the largest number of parameters. For finetuning the pre-trained model, a learning rate of 1e-5 is utilized.
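For comparison, a minimal sketch of how the three finetuning regimes could be expressed (which submodules are unfrozen in each) is given below; the module names `frontend`, `backend`, and `predictor` are assumed stand-ins, not the authors' implementation.

```python
import torch.nn as nn

class VSRModel(nn.Module):
    def __init__(self, num_class=500):
        super().__init__()
        self.frontend = nn.Conv3d(1, 32, kernel_size=3, padding=1)  # visual front-end (CNN)
        self.backend = nn.Linear(32, 256)     # temporal back-end (a Transformer in the paper)
        self.predictor = nn.Linear(256, num_class)                  # last linear layer

def set_finetune_mode(model: VSRModel, mode: str) -> None:
    """FineTune-C: predictor only; FineTune-B: back-end + predictor;
    FineTune-F: the full model."""
    for p in model.parameters():
        p.requires_grad_(False)
    trainable = {"C": [model.predictor],
                 "B": [model.backend, model.predictor],
                 "F": [model]}[mode]
    for module in trainable:
        for p in module.parameters():
            p.requires_grad_(True)

model = VSRModel()
set_finetune_mode(model, "B")  # e.g., FineTune-B, trained with lr 1e-5 as reported
```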
## 5 Experimental Results
We evaluate the effectiveness of the proposed prompt tuning in speaker-adaptive VSR using different amounts of adaptation data. Firstly, we explore the effectiveness of the proposed method in a situation where only a small amount of adaptation data is available. To this end, we use 1, 3, and 5 minutes of adaptation data for each unseen speaker to adapt the pre-trained model to the target speaker. Then, we
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Method** & **GRID** & **LRW-ID** \\ \hline Add & 24.58K (0.687\%) & 12.54K (0.036\%) \\ Pad & 15.54K (0.434\%) & 153.19K (0.443\%) \\ Cat & 1.28K (0.036\%) & 2.56K (0.007\%) \\ Add + Pad & 40.11K (1.121\%) & 165.73K (0.480\%) \\ Add + Cat & 25.86K (0.722\%) & 15.10K (0.044\%) \\ Pad + Cat & 16.82K (0.470\%) & 155.75K (0.451\%) \\ Add + Pad + Cat & 41.39K (1.156\%) & 168.29K (0.487\%) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Number of parameters of each prompt for one speaker
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Method** & **Adapt. min** & **S1** & **S2** & **S20** & **S22** & **Mean** \\ \hline
Baseline [14] & - & 16.40 & 9.42 & 11.23 & 11.57 & 12.04 \\ \hline
Add & 1 & 8.08 & 3.74 & 5.83 & 4.33 & 5.48 \\
 & 3 & 6.36 & 2.54 & 5.23 & 3.17 & 4.31 \\
 & 5 & 5.99 & 2.47 & 5.27 & 3.03 & 4.17 \\ \hline
Pad & 1 & 10.61 & 3.84 & 6.83 & 4.53 & 6.41 \\
 & 3 & 8.72 & 2.57 & 6.03 & 3.87 & 5.28 \\
 & 5 & 7.81 & 2.61 & 5.87 & 3.77 & 4.99 \\ \hline
Cat & 1 & 11.79 & 4.14 & 7.63 & 5.37 & 7.21 \\
 & 3 & 9.43 & 3.94 & 6.47 & 3.90 & 5.92 \\
 & 5 & 8.25 & 3.37 & 6.27 & 3.50 & 5.32 \\ \hline
Add + Pad & 1 & 8.65 & 4.01 & 6.43 & 4.93 & 5.97 \\
 & 3 & 7.10 & 2.57 & 5.60 & 3.37 & 4.65 \\
 & 5 & 5.79 & 2.37 & **5.13** & 2.97 & 4.04 \\ \hline
Add + Cat & 1 & 7.81 & 3.24 & 5.80 & 4.20 & 5.25 \\
 & 3 & 6.16 & 2.34 & 5.63 & 3.27 & 4.33 \\
 & 5 & 5.12 & 2.31 & 5.63 & 2.97 & 4.00 \\ \hline
Pad + Cat & 1 & 11.45 & 3.77 & 6.97 & 4.27 & 6.60 \\
 & 3 & 8.52 & 2.74 & 6.20 & 3.80 & 5.31 \\
 & 5 & 6.30 & 2.47 & 5.53 & 3.27 & 4.37 \\ \hline
Add + Pad + Cat & 1 & 7.91 & 3.81 & 6.07 & 4.43 & 5.53 \\
 & 3 & 6.43 & **2.14** & 5.63 & 3.07 & 4.31 \\
 & 5 & **5.08** & 2.24 & **5.13** & **2.80** & **3.80** \\ \hline \hline \end{tabular}
\end{table} TABLE V: Adaptation results (WER) on GRID using 1, 3, and 5 minutes of adaptation data
examine speaker adaptation performance using different ratios of adaptation data, to validate when prompt tuning has benefits over finetuning methods.
### _Speaker adaptation results with small data_
In this experiment, we evaluate the effectiveness of the different types of prompts (the addition, the padding, the concatenation, and their combinations) using small amounts of adaptation data of 1, 3, and 5 minutes in length. A total of 7 methods are therefore evaluated: addition prompt only (Add), padding prompt only (Pad), concatenation prompt only (Cat), and their combinations (Add Pad), (Add Cat), (Pad Cat), and (Add Pad Cat). The speaker adaptation results obtained with the proposed prompt tuning for 4 unseen test speakers (_i.e._, S1, S2, S20, and S22) on GRID are shown in Table V. Compared to the performance of Baseline, which is obtained by applying the pre-trained VSR model to the unseen target speakers directly, all prompt methods largely improve the performance on the target unseen speaker. The results indicate that the VSR model is inherently sensitive to personal lip appearances, and a pre-trained VSR model can show degraded performance when applied to unseen speakers directly. In contrast, by applying the proposed speaker adaptation method, we can improve VSR performance with a small amount of adaptation data, achieving performance almost similar to that obtained in seen-speaker settings [14, 62]. Please note that just using \(1\) minute of adaptation data yields a 56% relative improvement in mean performance with the addition and concatenation prompts (Add Cat), compared to Baseline.
Comparing the different prompt methods, the addition prompt achieves the best performance among the single prompt methods on the GRID dataset. This is related to the network architecture used for GRID, shown in Table II: it has a shallow visual front-end, so the input-level prompt can strongly affect the visual feature, while the padding prompt has less effect on the inner feature maps due to the shallow layers and small receptive fields. Moreover, by additionally utilizing the concatenation prompt together with the prompts for the CNN (_i.e._, the addition and padding prompts), the VSR performances on the target unseen speakers improve overall. Analyzing the effect of each prompt, the addition and padding prompts mainly improve the encoded visual feature, so that the target speaker's personal lip appearance can be adaptively modeled, while the concatenation prompt improves the temporal encoding ability of the back-end module, so that the personal lip movements can be accounted for. We find that the combination of the addition and padding prompts, (Add Pad), does not further improve the performance over the addition-only prompt (Add). A similar tendency can be seen in that (Add Cat) and (Add Pad Cat) have similar performances.
The speaker adaptation results for 20 unseen test speakers on LRW-ID are shown in Table VII. Even though the pre-trained VSR model is trained with large speaker variation (17,560 speakers), we can still improve VSR performance by adapting the pre-trained model to the target unseen speaker. In particular, when the model is less competent for a test speaker, the effect of adaptation is larger. For example, the pre-trained VSR model shows 75.95% ACC on speaker 11 (S11), which is low compared to the mean ACC of 87.54% over all speakers. In this case, by using the proposed prompt tuning (Pad Cat) with 3 minutes of adaptation data, we can improve the performance to 84.45% ACC, a large improvement of about 8.5% ACC. On the other hand, we find that when the pre-trained model is already competent, the effect of adaptation is small. This can be seen from the results of speaker 9 (S9), where the performance gain is smaller than for the other speakers. As the purpose of speaker adaptation is to transform a pre-trained model that cannot capture the lip appearance and movements of the target unseen speaker, this tendency is natural, and the effect of adaptation can be small for lip appearances and movements that are already well captured.
The mean word accuracies for the 20 speakers according to the different types of prompts are shown in Table VI. The mean word accuracy of the Baseline model is 87.54%, and by utilizing just 1 minute of adaptation data with the (Pad Cat, P+C) prompts, the performance is improved to 88.53%. With more adaptation data of 3 and 5 minutes in length, we achieve the best word accuracies of 89.45% and 89.99%, respectively. Comparing the different prompt methods, the padding prompt achieves the best performance among the single prompt methods on the LRW-ID dataset. This differs from the GRID dataset, where the addition prompt performs best. The difference arises because the visual front-end of LRW-ID is much deeper and has many more parameters, so the addition prompt cannot strongly affect the final encoded visual feature. On the other hand, the padding prompts can strongly affect the visual feature at all intermediate CNN layers, owing to the large receptive field of the deep CNN. Moreover, we observe results similar to GRID: the prompt (Add Pad, A+P) does not improve the performance over the padding-only prompt (Pad), and the prompt (Add Pad Cat, A+P+C) shows performance similar to (Pad Cat, P+C). The results indicate that the addition and padding prompts play a similar role, improving the visual feature representations, so their combination does not improve the performance further. However, combining the prompts for the CNN with the concatenation prompt, which improves the temporal feature representations, can boost the performance.
From the experimental results on both GRID and LRW-ID, we highlight that, regardless of the type of prompt, we can improve the performance of pre-trained VSR models on unseen target speakers by finetuning only the prompts with a small amount of adaptation data (_i.e._, less than 5 minutes long). In addition, when the pre-trained CNN has a deep architecture, the padding prompt is more effective than the addition prompt, while the addition prompt achieves better performance for a shallow CNN. Finally, the combination of prompts for the CNN and for the Transformer can further improve the performance.
### _Comparison with previous methods_
In this experiment, we compare the effectiveness of the proposed prompt tuning methods with previous speaker-invariant and -adaptive methods, including different finetuning methods. In particular, we also compare the total number of parameters required for inference for all target speakers. For example, finetuning the whole pre-trained model (_i.e._, FineTune-F) requires the parameters of the pre-trained model multiplied by the number of
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Adapt. min** & **Add** & **Pad** & **Cat** & **A+P** & **A+C** & **P+C** & **A+P+C** & **Baseline** \\ \hline
1 & 87.75 & 88.42 & 88.28 & 88.19 & 88.06 & 88.53 & 88.55 & 87.54 \\
3 & 87.89 & 89.32 & 88.92 & 88.88 & 88.69 & **89.45** & 89.39 & 87.54 \\
5 & 88.03 & 89.62 & 89.33 & 89.14 & 89.02 & **89.99** & 89.75 & 87.54 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Mean adaptation results (ACC) of 20 speakers on LRW-ID using 1, 3, and 5 minutes of adaptation data
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
**Method** & **Adapt. min** & **S1** & **S2** & **S3** & **S4** & **S5** & **S6** & **S7** & **S8** & **S9** & **S10** \\
 & & (\#4243) & (\#5125) & (\#6003) & (\#7184) & (\#9335) & (\#9368) & (\#9438) & (\#9653) & (\#10209) & (\#10293) \\ \hline
Baseline & - & 75.75 & 84.39 & 84.80 & 90.40 & 82.70 & 84.74 & 91.95 & 82.18 & 90.60 & 81.46 \\ \hline
Add & 1 & 76.81 & 84.39 & 84.80 & 90.40 & 82.70 & 84.74 & 91.95 & 82.18 & 90.60 & 82.41 \\
 & 3 & 77.52 & 84.93 & 84.80 & 90.65 & 82.70 & 84.74 & 91.95 & 82.18 & 90.60 & 83.09 \\
 & 5 & 78.94 & 85.33 & 85.14 & 90.49 & 82.99 & 84.74 & 92.06 & 82.99 & 90.60 & 83.63 \\ \hline
Pad & 1 & 79.47 & 85.60 & 86.06 & 90.40 & 85.33 & 84.74 & 92.04 & 82.83 & 90.60 & 84.84 \\
 & 3 & 82.30 & 86.03 & 86.48 & 90.83 & 85.52 & 86.21 & 93.31 & 84.59 & 90.60 & 84.57 \\
 & 5 & 82.30 & 86.68 & **87.74** & 90.98 & 86.59 & 86.32 & 93.18 & 84.43 & 90.60 & 86.33 \\ \hline
Cat & 1 & 78.94 & 84.39 & 84.80 & 90.49 & 85.47 & 84.74 & 91.95 & 82.34 & 90.60 & 82.41 \\
 & 3 & 82.30 & 84.93 & 86.06 & 90.58 & 85.33 & 85.05 & 92.58 & 83.31 & **90.87** & 84.44 \\
 & 5 & 82.48 & 86.14 & 86.57 & 90.74 & 86.25 & 85.58 & 93.02 & **85.39** & **90.87** & 85.12 \\ \hline
Add + Pad & 1 & 79.82 & 84.79 & 84.80 & 90.40 & 84.79 & 84.74 & 91.95 & 82.83 & 90.60 & 83.36 \\
 & 3 & 81.59 & 86.54 & 86.06 & 90.86 & 84.60 & 85.68 & 92.65 & 83.95 & 90.60 & 84.71 \\
 & 5 & 82.12 & 86.95 & 87.07 & 90.89 & 85.47 & 86.63 & 92.67 & 83.95 & 90.60 & 86.20 \\ \hline
Add + Cat & 1 & 79.29 & 84.52 & 84.80 & 90.40 & 84.11 & 84.74 & 91.95 & 82.99 & 90.60 & 81.87 \\
 & 3 & 81.95 & 85.33 & 86.40 & 90.40 & 84.16 & 85.37 & 92.68 & 82.99 & 90.60 & 83.90 \\
 & 5 & 82.66 & 85.20 & 86.57 & 90.55 & 84.94 & 86.53 & 93.04 & 84.59 & 90.60 & 84.84 \\ \hline
Pad + Cat & 1 & 80.53 & 84.79 & 85.39 & 90.40 & 86.05 & 84.74 & 92.07 & 83.15 & **90.87** & 84.98 \\
 & 3 & 82.66 & 86.41 & 86.90 & 90.68 & 86.40 & 86.00 & 93.19 & 84.91 & **90.87** & 85.25 \\
 & 5 & 84.07 & 86.95 & 87.49 & **91.17** & **87.12** & 86.84 & **93.56** & **85.39** & 90.74 & **87.28** \\ \hline
Add + Pad + Cat & 1 & 81.06 & 85.73 & 85.31 & 90.40 & 85.81 & 84.74 & 92.00 & 83.95 & **90.87** & 84.03 \\
 & 3 & 82.66 & 86.14 & 87.41 & 90.52 & 86.40 & 86.21 & 92.83 & 85.71 & 90.60 & 85.25 \\
 & 5 & **84.78** & **87.21** & 87.32 & 91.10 & **87.12** & **87.68** & 93.26 & **85.39** & 90.60 & 86.33 \\ \hline \hline
**Method** & **Adapt. min** & **S11** & **S12** & **S13** & **S14** & **S15** & **S16** & **S17** & **S18** & **S19** & **S20** \\
 & & (\#10587) & (\#10141) & (\#11777) & (\#11875) & (\#11910) & (\#13287) & (\#13786) & (\#15545) & (\#15769) & (\#17378) \\ \hline
Baseline & - & 75.95 & 88.03 & 88.30 & 89.71 & 75.21 & 75.78 & 82.59 & 89.77 & 90.60 & 89.91 \\ \hline
Add & 1 & 77.76 & 88.29 & 88.88 & 90.21 & 75.21 & 76.94 & 82.59 & 89.82 & 90.60 & 89.52 \\
 & 3 & 78.12 & 88.07 & 89.11 & 92.00 & 75.21 & 76.77 & 82.59 & 89.87 & 90.60 & 89.52 \\
 & 5 & 78.84 & 88.25 & 88.99 & 91.14 & 75.21 & 77.43 & 82.59 & 89.87 & 90.60 & 89.96 \\ \hline
Pad & 1 & 81.56 & 88.03 & 89.11 & 92.00 & 76.89 & 80.56 & 83.92 & 89.77 & 90.60 & 89.52 \\
 & 3 & 85.35 & 89.75 & 90.48 & 92.50 & 78.57 & 79.74 & 84.89 & 89.77 & 90.81 & 90.18 \\
 & 5 & 83.73 & **90.58** & 90.60 & 92.64 & 77
target speakers. The comparison results on the GRID dataset are shown in Table VIII. We only report the best two prompt types, (Add Cat, A+C) and (Add Pad Cat, A+P+C), and the smallest prompt type, (Cat, C), in the table. Surprisingly, in the adaptation setting with under 5 minutes of adaptation data, the proposed methods even outperform the finetuning methods on LRW-ID. This result is in line with previous studies [35, 39] showing that as the pre-trained model size grows, the effectiveness of the prompt increases and can even surpass finetuning. Please note that the model used for LRW-ID is about 10 times larger than that for GRID. Moreover, among the methods that require less than 15% additional parameters for 20 target speakers (Speaker-invariant, Speaker code, FineTune-C, and the proposed methods), the proposed methods achieve the best performances. It is notable that using the concatenation prompt, which adds just 0.15% to the model parameters, outperforms the previous speaker-adaptive method, Speaker code.
### _Speaker adaptation results according to the data size_
In this experiment, we explore the performance of the proposed method without limiting the adaptation data size, and analyze when the proposed prompt tuning is preferred over the finetuning methods. To this end, we utilize different amounts of adaptation data, proportional to the entire adaptation set: 10%, 30%, 50%, 70%, and 100%. Fig. 3 shows the adaptation results of the proposed prompt tuning and the three finetuning methods on GRID. Overall, FineTune-B (FT-B) and FineTune-F (FT-F) outperform the other methods. In particular, the performance gap between FT-B (and FT-F) and the proposed prompt tuning
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Method** & **1min** & **3min** & **5min** & **Total Params** \\ \hline Baseline [14] & 12.04 & 12.04 & 12.04 & 3.58M (+ 0\%) \\ Speaker-invariant [101] & 11.28 & 11.28 & 11.28 & 3.58M (+ 0\%) \\ Speaker code [42] & 5.56 & 4.83 & 4.68 & 3.66M (+ 2.21\%) \\ FineTune-C & 10.01 & 8.81 & 8.46 & 3.62M (+ 1.14\%) \\ FineTune-B & 5.57 & 4.28 & 3.95 & 13.10M (+ 265.9\%) \\ FineTune-F & **5.07** & **4.04** & **3.60** & 14.32M (+ 300\%) \\ \hline
**Proposed Method (A+C)** & 5.25 & 4.33 & 4.00 & 3.68M (+ 2.89\%) \\
**Proposed Method (A+P+C)** & 5.53 & 4.31 & 3.80 & 3.75M (+ 4.63\%) \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: Performance (WER) comparisons with previous methods on GRID
Fig. 4: ACC comparisons between proposed prompt tuning and different finetuning methods according to adaptation data ratio on LRW-ID.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Method** & **1min** & **3min** & **5min** & **Total Params** \\ \hline Baseline [14] & 87.54 & 87.54 & 87.54 & 34.56M (+ 0\%) \\ Speaker-invariant [101] & 88.22 & 88.22 & 88.22 & 34.56M (+ 0\%) \\ Speaker code [42] & 88.09 & 88.76 & 89.08 & 35.99M (+ 2.97\%) \\ FineTune-C & 87.71 & 87.79 & 87.88 & 39.43M (+ 14.1\%) \\ FineTune-B & 88.39 & 88.96 & 89.52 & 398.83M (+ 1054\%) \\ FineTune-F & 88.41 & 89.15 & 89.91 & 691.21M (+ 1900\%) \\ \hline
**Proposed Method (C)** & 88.28 & 88.92 & 89.33 & 34.61M (+ 0.15\%) \\
**Proposed Method (P+C)** & 88.53 & **89.45** & **89.99** & 37.68M (+ 9.01\%) \\
**Proposed Method (A+P+C)** & **88.55** & 89.39 & 89.75 & 37.93M (+ 9.74\%) \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Performance (ACC) comparisons with previous methods on LRW-ID
Fig. 3: WER comparisons between proposed prompt tuning and different finetuning methods according to adaptation data ratio on GRID.
grows when more than 50% of the adaptation data is utilized. The performance of the proposed prompt tuning (Add Pad Cat, A+P+C) with 100% of the adaptation data is 3.13% WER, while the performance of full finetuning, FineTune-F, is 2.34% WER. Therefore, when the pre-trained model is small, the better method can be selected by considering the trade-off between model parameters and performance: the finetuning methods perform better, while the proposed prompt tuning methods have far fewer parameters (_i.e._, 14.32M vs. 3.75M).
Fig. 4 shows the adaptation results on the LRW-ID dataset. Compared to the finetuning methods, the proposed prompt tuning methods, (Add Pad Cat, A+P+C), (Pad Cat, P+C), (Pad, P), and (Cat, C), show better results when the adaptation data is small, namely below 30%. However, the finetuning methods FineTune-B and FineTune-F outperform the prompt tuning methods when the adaptation data size becomes larger, and the performance gaps grow with the data size. Therefore, when the pre-trained model is large and only a small amount of adaptation data is available, prompt tuning is preferred over finetuning in terms of both performance and parameter size. When large-scale adaptation data is available for the target speaker, the trade-off between performance and additional parameters should be considered to select the appropriate speaker-adaptive method. Please note that the (A+P+C) prompt has 0.487% of the parameters of the full model, for each speaker.
### _Ablation study_
There are two hyperparameters that can be controlled: the number of CNN layers into which the padding prompt is inserted, and the length of the concatenation prompt. To confirm their effects, we evaluate the adaptation performance while varying the number of layers with padding prompts (_i.e._, \(N_{l}\in\{5,11,17\}\)) and the length of the concatenation prompt (_i.e._, \(N_{p}\in\{1,3,5\}\)). For the padding prompt ablation, we utilize only the padding prompt, without combining it with other prompts, to focus on the effect of the number of padding layers. Similarly, we utilize only the concatenation prompt when checking the effect of its length. The ablation results for different numbers of padding prompt layers are shown in Table X. When all padding layers are replaced with padding prompts, we achieve the best performance for all three adaptation data settings (_i.e._, 1, 3, and 5 minutes). Moreover, the performance gains from using more layers become larger when more adaptation data is utilized. Therefore, we could further reduce the parameters of the padding prompt by inserting it into only part of the CNN layers if the available adaptation data is less than 3 minutes. Based on these results, we insert the padding prompts into all CNN layers in the other experiments, as this gives the best results. The ablation results for different concatenation prompt lengths are shown in Table XI. For the concatenation prompt, we find that the length of the prompt does not affect the performance strongly. We set the length of the concatenation prompt to 5 in the other experiments, as it shows better performance overall.
## 6 Conclusion
In this paper, we proposed prompt tuning methods for developing speaker-adaptive VSR models. With different types of prompts, in the addition, padding, and concatenation forms, we can adapt a pre-trained model composed of both CNN and Transformer-variant architectures to a target unseen speaker. The proposed prompts have far fewer parameters than the full model, so they are more practical for storing and handling per-speaker data for many users. Through comprehensive experimental evaluations, we showed that the padding prompt can improve the visual representations of a deep CNN, while the addition prompt is effective for a shallow CNN. Moreover, by combining the prompts for the CNN with the concatenation prompt, we can further increase VSR performance on unseen speakers. Finally, we compared the proposed method with several finetuning methods in terms of both performance and number of parameters, and showed that the proposed prompt tuning achieves comparable VSR performance with far fewer additional parameters.
## Acknowledgments
This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00124, Development of Artificial Intelligence Technology for Self-Improving Competency-Aware Learning Capabilities).
|
2301.06331 | Hybrid quantum-classical convolutional neural networks to improve
molecular protein binding affinity predictions | One of the main challenges in drug discovery is to find molecules that bind
specifically and strongly to their target protein while having minimal binding
to other proteins. By predicting binding affinity, it is possible to identify
the most promising candidates from a large pool of potential compounds,
reducing the number of compounds that need to be tested experimentally.
Recently, deep learning methods have shown superior performance than
traditional computational methods for making accurate predictions on large
datasets. However, the complexity and time-consuming nature of these methods
have limited their usage and development. Quantum machine learning is an
emerging technology that has the potential to improve many classical machine
learning algorithms. In this work we present a hybrid quantum-classical
convolutional neural network, which is able to reduce by 20% the complexity of
the classical network while maintaining optimal performance in the predictions.
Additionally, it results in a significant time savings of up to 40% in the
training process, which means a meaningful speed up of the drug discovery
process. | L. Domingo, M. Djukic, C. Johnson, F. Borondo | 2023-01-16T09:53:26Z | http://arxiv.org/abs/2301.06331v2 | Hybrid quantum-classical convolutional neural networks to improve molecular protein binding affinity predictions
###### Abstract
One of the main challenges in drug discovery is to find molecules that bind specifically and strongly to their target protein while having minimal binding to other proteins. By predicting binding affinity, it is possible to identify the most promising candidates from a large pool of potential compounds, reducing the number of compounds that need to be tested experimentally. Recently, deep learning methods have shown superior performance than traditional computational methods for making accurate predictions on large datasets. However, the complexity and time-consuming nature of these methods have limited their usage and development. Quantum machine learning is an emerging technology that has the potential to improve many classical machine learning algorithms. In this work we present a hybrid quantum-classical convolutional neural network, which is able to reduce by 20% the complexity of the classical network while maintaining optimal performance in the predictions. Additionally, it results in a significant time savings of up to 40% in the training process, which means a meaningful speed up of the drug discovery process.
## I Introduction
The ability to predict the binding affinity between a potential drug and its target protein is crucial for the success of drug discovery at the early stages. Experimentally determining the binding affinity for a large number of small molecules and their targets is time-consuming and expensive. As a result, computational methods that can predict binding affinity for multiple molecules with high accuracy have become widely used. Machine learning techniques, specifically deep learning methods, have recently gained attention for their ability to improve upon traditional physics-based methods. Unlike traditional machine learning, deep learning can learn directly from the atomic structure of the protein-ligand pair without relying on pre-determined, fixed-length features.
A commonly used deep learning approach for binding affinity prediction is the three-dimensional convolutional neural network (3D CNN) [1, 2, 3, 4, 5, 6]. These networks represent atoms and their properties in a 3D space and take into account the local 3D structure of molecules and the relationships between atoms. The 3D representations used as input to the 3D CNN are high-dimensional, since millions of values are required to describe a single data sample. Because of this high dimensionality, a complex deep learning model is required to uncover all the hidden patterns that can help predict the target value. Training such a model means finding the parameter values that minimize a loss function. More complex models have more training parameters, requiring longer execution times, which limits the exploration of different architectures or hyperparameters. The training process can be heavily accelerated using powerful GPUs. However, GPUs are costly computational resources and may not always be available.
The complexity of a machine learning model also affects its generalization capacity. According to Hoeffding's theorem [7], models with high complexity require a large amount of data to reduce the variance of their predictions, as stated by Hoeffding's inequality
\[E_{out}\leq E_{in}+\mathcal{O}\left(\sqrt{\frac{K}{N_{\text{samples}}}}\right), \tag{1}\]
where \(E_{out}\) is the error in the test set, \(E_{in}\) is the error in the training set, \(K\) is a notion of complexity and \(N_{\text{samples}}\) is the number of data samples. The number of samples should be at least comparable to the complexity of the model to guarantee low errors in the predictions of new data. In some cases, when the test data is similar enough to the training data, a smaller training set can still allow for good performance. Nonetheless, increasing the complexity of a model always increases its chances of producing overfitting, and thus it is convenient to find simpler machine learning models.
Quantum machine learning (QML) methods have the potential to solve numerical problems exponentially faster than classical methods. Although fault-tolerant quantum computers are not yet available, a new generation of quantum algorithms, belonging to the so-called noisy intermediate-scale quantum (NISQ) era, is devoted to providing quantum advantage with the quantum computers available today. Because of the exponential scaling of the Hilbert space dimension, quantum computers can process large amounts of data with few qubits. For this reason, combining quantum algorithms with machine learning makes it possible to reduce the complexity of classical machine learning methods while maintaining their accuracy. In this project, we propose a hybrid quantum-classical 3D CNN, which replaces the first convolutional layer with a quantum circuit, effectively reducing the number of training parameters of the model. The results of our work show that, as long as the quantum
circuit is properly designed, the hybrid CNN maintains the performance of the classical CNN. Moreover, the hybrid CNN has 20% fewer training parameters, and the training times are reduced by 20-40%, depending on the hardware where the networks are trained. All the quantum circuits used in this work have been executed using quantum simulation due to the current limitations of quantum hardware. However, we provide performance benchmarks considering different noise models and error probabilities. Our results show that, with error probabilities lower than \(p=0.01\) and circuits with 300 gates, a common error mitigation algorithm, namely _data regression error mitigation_ [8], can accurately mitigate the errors produced by the quantum hardware. For this reason, we believe that hybrid quantum-classical machine learning methods have the potential to speed up the training process of classical machine learning methods and reduce the computational resources needed to train them.
The organization of this paper is as follows. In Sect. II we present the protein-ligand data used for this study, the processing techniques used to prepare the data for the CNN models, and the details of the classic and hybrid CNN approaches. The results and comparison of the classical and hybrid CNNs are presented in Sect. III. Finally, Sect. IV ends the paper by summarizing the main conclusions of the present work, and presenting an outlook for future work.
## II Methods
This section provides an overview of the classical and quantum machine learning algorithms examined in this study. It starts by discussing the PDBBind dataset and the methods used to preprocess the data for the neural network models. The architecture of the classical 3D CNN is then described. All the processing algorithms and the architecture of the classical 3D CNN are kept identical to those in Ref. [1] to support a reproducible and comparable pipeline. Finally, the design of the hybrid quantum-classical CNN is outlined.
### Data
The data used for this study is sourced from the PDBBind database [9]. It contains a collection of protein-ligand biomolecular complexes, manually collected from their associated publications. For each protein-ligand complex, the data files contain information about the 3D morphology, the types of bonds between the constituent atoms, and the binding affinity between the protein and the ligand. All binding affinities are obtained experimentally by measuring the equilibrium dissociation constant of the protein-ligand complex (\(k_{d}\)) and the inhibition constant (\(k_{I}\)). The binding affinity is then defined as \(-\log\left(\frac{k_{d}}{k_{I}}\right)\). Because of its completeness and size, the PDBBind dataset has recently become a common benchmark for binding affinity prediction with both physics-based and machine learning methods [10; 11; 12]. The PDBBind dataset is already split into two non-overlapping sets, the general set and the refined set. The refined set is compiled to contain higher-quality complexes based on several filters regarding the binding data (e.g., complexes with only an \(IC_{50}\) measurement), the crystal structures (e.g., low crystal resolution or missing fragments in the complex), and the nature of the complexes (e.g., ligand-protein covalent binding). A subset of the refined set, called the _core set_, is separated out to provide a small, high-quality set for testing purposes.
The 2020 version of the PDBBind dataset is used for this study. The general set (excluding the refined set) contains 14127 complexes, while the refined set contains 5316 complexes. The core set is significantly smaller, with only 290 data samples.
### Data processing
In order to train the classical and hybrid CNNs, the raw PDBBind data has to be transformed into an appropriate input format for the convolutional layers. Before reshaping the data, a common processing protocol was applied, following the same process as Refs. [1; 13]. Hydrogens were added to all protein-ligand complexes according to each atom's valence. The partial charges are computed based on Amber/GAFF atom types, using Chimera with the default settings. This protocol converts the pdb files to Mol2 files. A 3D spatial representation was then used to represent the features of the data. This method uses 3D volume grids to capture the atomic relationships in a voxelized space. That is, each data sample has size \((C,N,N,N)\), where \(N\) is the size of each dimension in space, and \(C\) is the number of features extracted from the protein-ligand pair. For this project, we set \(N=48\), so that each side of the volume has a size of 48 Å, with a voxel size of 1 Å. This size allows covering the whole pocket region without making the input sizes for the CNN models too large. Having set the dimension of the space, 19 features were extracted from each protein-ligand pair (\(C=19\)). The selected features are the following:
* **Atom type:** One-hot encoding of the elements B, C, N, O, P, S, Se, halogen, or metal.
* **Atom hybridization:** Gives information about the number of \(\sigma\) and \(\pi\) bonds connecting a particular atom to a neighboring atom. Takes values 1, 2 and 3 for sp1, sp2 and sp3 hybridizations, respectively.
* **Number of heavy atom bonds:** Heavy atoms are all atoms except for hydrogen.
* **Number of bonds with other heteroatoms:** Heteroatoms are those atoms different from hydrogen or carbon.
* **Structural properties:** one-hot encoding of hydrophobic, aromatic, acceptor, donor, and ring properties.
* **Partial charge:** Distribution of charge of an atom as a result of its chemical environment.
* **Molecule type:** Indicates whether it is a protein atom or a ligand atom (-1 for protein, 1 for ligand).
The feature extraction process was done with the OpenBabel tool (version 3.1.1.1). The Van der Waals radius was used to determine the size of each atom in the voxelized space; in this way, an atom can occupy one or more voxels depending on its Van der Waals radius. For atom collisions, the features are added element-wise. The resulting 3D feature representations are sparse 3D matrices. Sparse data samples can make the training of neural networks harder, since the input samples are too similar to each other, and to an all-zero sample, so the network can have difficulties distinguishing useful information from noise. For this reason, a Gaussian blur with \(\sigma=1\) is applied to the voxelized features, populating the neighbouring voxels and thus reducing the number of zero-valued voxels. Fig. 1 shows a representation of the initial protein-ligand pair and the two main processing steps. Note that these 3D volume representations are very high-dimensional, since more than 2 million real numbers are needed to represent a single data sample. For this reason, large amounts of data samples and complex neural network models are needed to make accurate predictions without overfitting the training data.
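As an illustration of these two processing steps, the sketch below voxelizes per-atom feature vectors into a (19, 48, 48, 48) grid, adding features element-wise on collisions, and applies the Gaussian blur channel-wise. It ignores the Van der Waals radii and the OpenBabel feature extraction; the inputs are randomly generated stand-ins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

C, N = 19, 48  # 19 feature channels, 48 Angstrom cube with 1 Angstrom voxels

def voxelize(coords: np.ndarray, features: np.ndarray) -> np.ndarray:
    """coords: (n_atoms, 3) positions in Angstroms relative to the box corner;
    features: (n_atoms, C). Colliding atoms have their features added element-wise."""
    grid = np.zeros((C, N, N, N), dtype=np.float32)
    for (x, y, z), f in zip(np.clip(coords.astype(int), 0, N - 1), features):
        grid[:, x, y, z] += f
    return grid

def blur(grid: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    # Blur only the three spatial axes; leave the channel axis untouched.
    return gaussian_filter(grid, sigma=(0, sigma, sigma, sigma))

rng = np.random.default_rng(0)
sample = blur(voxelize(rng.uniform(0, N, (100, 3)), rng.normal(size=(100, C))))
print(sample.shape)  # (19, 48, 48, 48)
```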
The data processing is done independently for each of the datasets considered in this study. Apart from the general, refined, and core sets, we further partitioned the general and refined sets into training and validation sets. This split is done so as to maintain the probability distribution of the binding affinities in both the training and validation sets. To this end, the binding affinities were separated into quintiles; then, for each quintile, we randomly selected 10% of the data for the validation set and kept the rest for the training set. In this way, we obtained training and validation sets for both the general and refined sets.
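A minimal sketch of this quintile-stratified 90/10 split, assuming `affinities` is the array of binding affinities of the set being partitioned:

```python
import numpy as np

def stratified_split(affinities: np.ndarray, val_frac: float = 0.1, seed: int = 0):
    """Split indices into train/validation while preserving the affinity
    distribution: 10% of each quintile goes to the validation set."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(affinities, [0.2, 0.4, 0.6, 0.8])
    quintile = np.digitize(affinities, edges)  # quintile index 0..4 per sample
    train_idx, val_idx = [], []
    for q in range(5):
        idx = rng.permutation(np.where(quintile == q)[0])
        n_val = int(round(val_frac * len(idx)))
        val_idx.extend(idx[:n_val])
        train_idx.extend(idx[n_val:])
    return np.array(train_idx), np.array(val_idx)

# e.g., the refined set has 5316 complexes; affinities here are synthetic.
train_idx, val_idx = stratified_split(np.random.default_rng(1).normal(6, 2, 5316))
print(len(train_idx), len(val_idx))  # roughly 4784 and 532
```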
### Classical CNN
CNNs have proven very successful in deep learning applications. This type of network is specialized in processing high-dimensional data in the form of spatial arrays, such as time series (in 1D), images (in 2D), or volumes (in 3D). The name stems from the fact that, instead of general matrix multiplication, it employs a mathematical convolution in at least one of its layers. The output of the convolution is another array that makes explicit information that was present in the initial array only in a very subtle way. In this way, each filter of a convolutional neural network is responsible for detecting one feature of the network input. The kernel matrices are free parameters that must be learned to perform the optimal feature extraction. The convolution operation is followed by a nonlinear activation function, which adds non-linearity to the system. Following the convolutional layers, a pooling layer is added in order to progressively reduce the spatial size of the array. After a series of convolutional and pooling layers, a flattening layer and some feed-forward layers are used to combine the extracted features and predict the final output.
A representation of the layers of a 3D CNN is shown in Fig. 2 (top). 3D CNNs have been used for multiple applications, such as volumetric image segmentation [14], medical imaging classification [15], and human action recognition [16]. A diagram of the 3D classical CNN used in this work is shown in Fig. 2 (bottom). The architecture is the same as the one proposed in Ref. [1], again for comparison purposes. The network contains five 3D convolutional layers, with 64, 64, 64, 128, and 256 filters, respectively. The kernel size is 7 for all layers except the last one, which has kernel size 5. The CNN contains two residual connections, as proposed in ResNet [17], which allow passing gradients to later layers without a nonlinear activation function. Batch normalization is used after each convolutional layer, and the Rectified Linear Unit (ReLU) is used as the activation function. The network contains two pooling layers and two fully-connected layers with 10 and 1 neurons, respectively.
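A rough PyTorch sketch of this architecture is given below. The exact strides, pooling positions, and residual projections of Ref. [1] are not fully specified here, so their placement is an assumption; the sketch only illustrates the overall layer structure.

```python
import torch
import torch.nn as nn

def block(cin, cout, k):
    # Conv3d + batch norm + ReLU, spatial size preserved via padding.
    return nn.Sequential(nn.Conv3d(cin, cout, k, padding=k // 2),
                         nn.BatchNorm3d(cout), nn.ReLU())

class CNN3D(nn.Module):
    def __init__(self, in_ch=19):
        super().__init__()
        self.c1 = block(in_ch, 64, 7)
        self.c2 = block(64, 64, 7)
        self.c3 = block(64, 64, 7)
        self.pool1 = nn.MaxPool3d(2)
        self.c4 = block(64, 128, 7)
        self.c5 = block(128, 256, 5)
        self.pool2 = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(256, 10), nn.ReLU(),
                                nn.Linear(10, 1))

    def forward(self, x):
        h1 = self.c1(x)
        h = self.c3(self.c2(h1)) + h1   # one residual connection (assumed placement);
        h = self.pool1(h)               # the second residual is omitted for brevity
        h = self.c5(self.c4(h))
        return self.fc(self.pool2(h))

print(CNN3D()(torch.zeros(1, 19, 48, 48, 48)).shape)  # torch.Size([1, 1])
```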
### Hybrid quantum-classical CNN
In this work, we propose a hybrid (quantum-classical 3D) CNN, which is designed to reduce the complexity of the classical 3D CNN, while maintaining its prediction performance. The hybrid CNN replaces the first convolutional layer with a quantum convolutional layer [18; 19; 20]. That is, each classical convolutional filter is replaced by a quantum circuit, which acts as a quantum filter. These quantum circuits should have significantly fewer training parameters than the classical convolutional layer, in order to reduce the overall complexity of the network. Each quantum circuit is divided into two blocks: the _data encoding_, which maps the input data into a quantum circuit, and the _quantum transformation_, where quantum operations are applied to retrieve information from the encoded data. The final architecture of the hybrid CNN is depicted in Fig. 3. The processed protein-ligand data is fed to both a classical and a quantum convolutional layer. The outputs are aggregated by using a residual connection and then fed to the subsequent classical convolutional and pooling layers. The rest of the network is the same as its classical version. With this architecture, the first convolutional layer has been replaced by its quantum counterpart, while leaving the rest of the network unchanged.
#### ii.4.1 Data encoding
The quantum convolutional layer aims to extract local features from the input data, just as a classical convolutional layer would do. For this reason, we split the input data into \((n\times n\times n),n\in\mathbb{N}\), blocks and process each block individually. Given a block \(B\), the data encoding process converts \(B\) to a quantum state \(|B\rangle\). Because of the high dimensionality of our data, we need a data encoding method that minimizes the number of qubits of the resulting quantum circuit. A suitable encoding should scale logarithmically with the dimension of the blocks. A popular data encoding mechanism that fulfills this property is amplitude encoding [21], which requires \(\left\lceil\log_{2}(n^{3})\right\rceil\) qubits to encode a block. However, the amplitude encoding scheme normalizes each block independently to produce a normalized quantum state. Therefore, the different blocks of the data would have different normalization constants and would not be comparable with each other. For this reason, we chose the Flexible Representation of Quantum Images [22] (FRQI) method, which normalizes the _whole_ image before the encoding, avoiding this problem, and uses only \(\left\lceil\log_{2}(n^{3})\right\rceil+1\) qubits. FRQI was proposed to provide a normalized quantum state which encodes both the value (colour) of a pixel and its position in an image. Given an image with pixels \(\theta=(\theta_{0},\theta_{1},\cdots,\theta_{2^{2n}-1})\), normalized such that \(\theta_{i}\in[0,2\pi),\forall i\), the encoded state is given by Eq. 2.
\[\left|I(\theta)\right\rangle=\frac{1}{2^{n}}\sum_{i=0}^{2^{2n}-1}(\cos\theta_ {i}\left|0\right\rangle+\sin\theta_{i}\left|1\right\rangle)\otimes\left|i\right\rangle \tag{2}\]
where \(\left|i\right\rangle,i=0,1,\cdots,2^{2n}-1\) are the computational basis states. For each \(\theta_{i}\), the FRQI state is composed of two parts: \(\cos\theta_{i}\left|0\right\rangle+\sin\theta_{i}\left|1\right\rangle\) encodes the colour of the pixel, and \(\left|i\right\rangle\) encodes the position of the pixel in the image. As a simple example, a \((2\times 2)\) image and its representation are displayed in Eq. 3.
\[\begin{array}{|c|c|}\hline\theta_{0},(00)&\theta_{1},(01)\\ \hline\theta_{2},(10)&\theta_{3},(11)\\ \hline\end{array}\qquad |I\rangle=\frac{1}{2}\big[(\cos\theta_{0}\left|0\right\rangle+\sin\theta_{0}\left|1\right\rangle)\otimes\left|00\right\rangle+(\cos\theta_{1}\left|0\right\rangle+\sin\theta_{1}\left|1\right\rangle)\otimes\left|01\right\rangle+(\cos\theta_{2}\left|0\right\rangle+\sin\theta_{2}\left|1\right\rangle)\otimes\left|10\right\rangle+(\cos\theta_{3}\left|0\right\rangle+\sin\theta_{3}\left|1\right\rangle)\otimes\left|11\right\rangle\big] \tag{3}\]
Figure 1: Example of a data sample and further processing from the PDBBind dataset. The protein-ligand pair corresponds to the 1br6 sample from the refined set. The first step of the processing is the feature extraction, where 19 features are voxelized in a 3D space. Then, a Gaussian blur is applied to produce a dense representation of the data.
Figure 2: (TOP) Schematic representation of the components of a 3D convolutional neural network. (BOTTOM) Architecture of the 3D CNN used for this study, proposed in Ref. [1].
The state in Eq. 2 is normalized, since
\[\left\|\,|I(\theta)\rangle\,\right\|=\frac{1}{2^{n}}\sqrt{\sum_{i=0}^{2^{2n}-1}\left(\cos^{2}\theta_{i}+\sin^{2}\theta_{i}\right)}=1. \tag{4}\]
The number of qubits needed to construct the FRQI state increases logarithmically with the number of pixels (angles) of the image, since the dimension of the computational basis increases exponentially with the number of qubits of the Hilbert space. In Ref. [22], it is proven that the FRQI state can be implemented with simple quantum gates (Hadamard gates, CNOTs and \(R_{y}\) rotations). The number of quantum gates is polynomial with \(2^{2n}\), the number of pixels of the image. Even though the FRQI was designed for 2D colour images, the generalization to 3D blocks is straightforward. Let \(B\) be a \((n\times n\times n)\) block, with normalized values \((\theta_{0},\theta_{1},\cdots,\theta_{n^{3}-1}),\theta_{i}\in[0,2\pi),\forall i\). The FRQI state would then be given by Eq. 5.
\[|B\rangle=\frac{1}{\sqrt{n^{3}}}\sum_{i=0}^{n^{3}-1}(\cos\theta_{i}\,|0\rangle+\sin\theta_{i}\,|1\rangle)\otimes|i\rangle \tag{5}\]
Notice that the only difference between Eq. 2 and Eq. 5 is the number of angles in the quantum state. When \(n^{3}\) is a power of 2 (i.e., \(n^{3}=2^{l},l\in\mathbb{N}\)), the state in Eq. 5 has non-zero components on all the states of the computational basis. Therefore, choosing \(n^{3}\) as a power of 2 makes the fullest use of the Hilbert space. For this reason, we set \(n=4\) in our experiments. Fig. 4 shows an example of the scaling of the number of qubits and the number of gates with the block size \(n\). The number of qubits needed for the FRQI encoding is \(\lceil\log_{2}(n^{3})\rceil+1\), so it scales logarithmically with the dimension of the block.
On the other hand, we have calculated the number of gates needed to implement the FRQI on a real quantum device. The number of gates depends on the values of the block \(\theta_{i}\). If there are some angles with the same value, the quantum circuit can be compressed to reduce the number of gates. In Ref. [22], the authors show how the FRQI quantum circuits can be simplified by minimizing boolean expressions. As an example of how the number of gates can scale with the block size, we have considered blocks from our data with the highest mean absolute sum, to ensure that we chose blocks with highly different angles. Fig. 4 shows that the number of gates of a general FRQI encoding scales linearly with the dimension of the block, \(n^{3}\).
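As a concrete illustration, the following sketch builds the generalized FRQI state of Eq. 5 directly as a statevector in NumPy, using one colour qubit plus \(\lceil\log_{2}(n^{3})\rceil\) position qubits. The gate-level construction of Ref. [22] is omitted, and the random block values are illustrative stand-ins.

```python
import numpy as np

def frqi_state(block: np.ndarray) -> np.ndarray:
    """Statevector of Eq. 5 for a block whose values are already normalized
    to [0, 2*pi). Basis ordering: |colour> (tensor) |position>."""
    theta = block.flatten()
    d = theta.size                           # n^3 positions, a power of 2 for n = 4
    state = np.zeros(2 * d)
    state[:d] = np.cos(theta) / np.sqrt(d)   # colour qubit in |0>
    state[d:] = np.sin(theta) / np.sqrt(d)   # colour qubit in |1>
    return state

block = np.random.default_rng(0).uniform(0, 2 * np.pi, (4, 4, 4))
psi = frqi_state(block)
# 2 * 4^3 = 128 amplitudes, i.e. ceil(log2(4^3)) + 1 = 7 qubits; unit norm.
print(psi.size, np.isclose(np.linalg.norm(psi), 1.0))  # 128 True
```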
#### iii.2.2 Quantum transformation
After the data has been encoded in a quantum circuit, a set of quantum gates is applied to perform the quantum transformation, followed by a set of measurements that convert the data back to a classical representation. In a quantum convolutional layer, the quantum transformation is usually a parameterized quantum circuit (PQC), where the optimal parameters of the circuit need to be learned.
Figure 3: Schematic representation of the hybrid quantum-classical (3D) CNN. The original data is processed by both a classical convolutional layer and a quantum convolutional layer. The outputs of both layers are then aggregated. The result is then fed to a set of convolutional and pooling layers, following the same architecture as the classical 3D CNN.
However, because of the challenging dimensionality of our data, many quantum circuits are needed to process a single data sample. By splitting the sample into \((4\times 4\times 4)\) blocks, 32832 quantum circuits are required to span the whole sample. The current hardware has limitations not only on the number of qubits and quantum gates, but also on the number of quantum circuits that can be executed. For this reason, it is not possible to train the whole neural network model with the current hardware. Another option would be to run the PQC in quantum simulation. Even though the quantum layer would still have fewer training parameters than the classical convolutional layer, the training of quantum neural networks cannot be accelerated on GPUs the way classical convolutional layers can. For this reason, even though the hybrid model with PQCs has lower complexity, in our experiments its training time was longer than that of the classical CNN.
The use of quantum reservoirs (QR) is an emerging approach in quantum machine learning (QML), which has provided promising results in multiple tasks [23, 24, 25]. It exploits the quantumness of a physical system to extract useful properties of the data, which are then used to feed a machine learning model. In gate-based quantum computation, a QR is a _random_ quantum circuit applied to an initial state, which encodes the input data, followed by measurements of local operators. These measurements are the features extracted by the model, which are then fed to a classical machine learning algorithm to predict the desired output. The main advantage of using QRs is the low complexity of the model and, thus, its easy training strategy. Instead of using a PQC and finding its optimal parameters, QRs use carefully selected quantum systems with no training parameters to transform the input data. QRs have been used for temporal tasks (quantum reservoir computing [24, 26]) and also to predict the excited-state properties of molecules [27, 28]. The design of the random quantum circuit is crucial in determining the performance of the QML model. Complex quantum circuits are the ones that better exploit the quantum properties of the system and thus provide useful features for learning the target. In a recent work [28], it was shown that the majorization principle [29] is a good indicator of both the complexity [30] and the performance [28] of a QR. That is, the QRs with higher complexity according to the majorization principle are the ones that give better results in QML tasks. In particular, seven families of quantum circuits, with different complexities, were used as QRs. For a given family, a quantum circuit is built by adding a fixed number of random quantum gates from that family. The G3={CNOT, H, T} family, where CNOT is the controlled-NOT gate, H stands for Hadamard, and T is the \(\pi/8\) phase gate, provided the best results when training the QML algorithm. Moreover, the performance of the QR increased with the number of gates of the circuit until it reached its optimal value, and then remained constant even as the number of gates increased further. In this work, the quantum transformation consists of a quantum circuit randomly generated with gates from the G3 family. The qubits are then measured in the computational basis, providing the output of the quantum convolutional layer. The hybrid CNN is trained with QRs with 20, 50, 100, 200, 300, 400, 500, and 600 quantum gates. In this way, we can evaluate how the depth of the QR influences the performance of the model. Fig. 5 shows an example of the output of the quantum convolutional layer.
We see that with a low number of gates, the quantum layer extracts simpler quantum features than with a higher number of gates. Another widely used QR is the Ising model [23, 24, 27, 31]. In this case, the quantum circuit performs the time evolution of a quantum state under the random transverse-field Ising Hamiltonian
\[H_{\text{Ising}}=\sum_{i,j=0}^{N-1}J_{ij}Z_{i}Z_{j}+\sum_{i=0}^{N-1}h_{i}X_{i}, \tag{6}\]
where \(X_{i}\) and \(Z_{j}\) are Pauli operators acting on the \(i\)-th and \(j\)-th qubits. The coefficients \(J_{ij}\) and \(h_{i}\) are chosen according to Ref. [26], which provides a state-of-the-art method for selecting the optimal parameters of the Ising model for quantum reservoir computing. In this case, the \(J_{ij}\) are sampled from the uniform distribution \(U(-J_{s}/2,J_{s}/2)\) and \(h_{i}=h\) is constant. The optimal parameters in Ref. [26] fulfill \(h/J_{s}=0.1\). The system is evolved until time \(T=10\). We will compare the performance of the hybrid CNNs trained with QRs generated from the G3 family with that of the models with QRs generated from the Ising model. Since current quantum computers have limited availability and high access queue times, which limit the number of iterative runs we can do for training, the hybrid CNNs are run using quantum simulation on classical hardware. The code has been optimized using Qiskit and PyTorch, and adapted so that it can be trained on GPUs, just like the classical CNN.
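The following sketch illustrates one such quantum filter: a fixed random circuit sampled from the G3 family is applied to an encoded block, and the computational-basis probabilities are read out as the layer output. It uses Qiskit statevector simulation; the seeding and the uniform stand-in input state are illustrative assumptions, not the exact circuits of this work.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def random_g3_circuit(n_qubits: int, n_gates: int, seed: int = 0) -> QuantumCircuit:
    """Random reservoir circuit with gates drawn uniformly from {CNOT, H, T}."""
    rng = np.random.default_rng(seed)
    qc = QuantumCircuit(n_qubits)
    for _ in range(n_gates):
        gate = rng.choice(["cx", "h", "t"])
        if gate == "cx":
            control, target = rng.choice(n_qubits, size=2, replace=False)
            qc.cx(int(control), int(target))
        else:
            getattr(qc, gate)(int(rng.integers(n_qubits)))
    return qc

n_qubits = 7                                           # FRQI encoding of a (4 x 4 x 4) block
reservoir = random_g3_circuit(n_qubits, n_gates=300)   # fixed: no trainable parameters
encoded = Statevector(np.ones(2**n_qubits) / np.sqrt(2**n_qubits))  # stand-in for |B>
features = encoded.evolve(reservoir).probabilities()   # classical output of the filter
print(features.shape)  # (128,)
```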
Figure 4: Example of scaling of the Flexible Representation of Quantum Images (FRQI). The number of qubits scales logarithmically with the dimension of the block \(n^{3}\). The number of gates of the quantum circuit scales linearly with the dimension of the block \(n^{3}\).
### Error mitigation
One of the biggest challenges of the current quantum devices is the presence of noise. They perform noisy quantum operations with limited coherence time, which affects the performance of quantum algorithms. Even though the quantum circuits used for this study are run using quantum simulation, we have also evaluated the performance of the noisy quantum circuits using three different noise models for a small set of samples. The first noise model is the _amplitude damping channel_, which reproduces the effect of energy dissipation, that is, the loss of energy of a quantum state to its environment. The second noise model is described by the _phase damping channel_, which models the loss of quantum information without loss of energy. The last error model is described by the _depolarizing channel_. In this case, a Pauli error \(X\), \(Y\) or \(Z\) occurs with the same probability \(p\). For more information about the error models see Ref. [32]. We perform noisy simulations with error probabilities \(p=0.03,0.01,0.008,0.005,0.003,0.001\). Error mitigation methods aim to reduce the noise of the outputs after the quantum algorithm has been executed. In this work, the _data regression error mitigation_ (DRER) algorithm is used to mitigate the noise of the quantum circuits. The DRER algorithm trains a machine learning model to correct the errors of noisy quantum circuits. To obtain the training set, random quantum circuits with 300 gates sampled from the G3 family are executed with both noisy and noiseless simulation. Thus, the training set consists of pairs \((X_{i},y_{i})\) where \(X_{i}\) contains the counts of the noisy distribution and \(y_{i}\) contains the counts of the noiseless distribution. In this case, the machine learning model we used is ridge regression, a regularized linear model
Figure 5: Example of an output of the quantum convolutional layer, together with its input. The quantum convolutional layer is composed by a FRQI encoding layer followed by a quantum transformation generated with a random quantum circuit from the G3 family. Examples of the output are given for circuits with 20, 100 and 1000 quantum gates.
which minimizes the mean squared error:
\[\text{MSE}_{R}=\frac{1}{N_{s}}\sum_{i=0}^{N_{s}-1}\left\|W\cdot X_{i}-y_{i}\right\|^{2}+\alpha\,\|W\|^{2}, \tag{7}\]
where \(N_{s}\) is the number of samples in the training set, \(W\) is the matrix of the linear model, \(\alpha\) is the regularization parameter, and \(\|\cdot\|\) is the \(L^{2}\) norm. The DRER is trained with 1000 samples. Then, the performance is tested on 500 noisy quantum circuits used in the quantum convolutional layer. In this case, the 3D volumetric space is divided into blocks of size \(n=8\), leading to quantum circuits of 9 qubits and 300 gates. The DRER algorithm is suitable for this task since, once the machine learning model is trained, it can be used to mitigate multiple quantum circuits while requiring very few classical computational resources. This makes it practical for use with large datasets.
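A minimal sketch of the DRER step with scikit-learn's ridge regression, using synthetic probability vectors as stand-ins for the noisy and noiseless circuit outputs:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
dim = 2**9                                     # probability vector of a 9-qubit circuit
y = rng.dirichlet(np.ones(dim), size=1000)     # noiseless distributions (stand-ins)
X = np.abs(y + rng.normal(0, 0.005, y.shape))  # noisy counterparts (stand-ins)

mitigator = Ridge(alpha=0.1).fit(X, y)         # regularized linear model of Eq. (7)

# Once trained, the same model mitigates any new noisy output cheaply.
noisy_test = np.abs(y[:5] + rng.normal(0, 0.005, (5, dim)))
mitigated = mitigator.predict(noisy_test)
print(mitigated.shape)  # (5, 512)
```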
## III Results and Discussion
In this section, we present the results of the classical CNN and the variations of the hybrid CNN. The performance of the models is evaluated against the core set of the 2020 PDBBind dataset. The training and validation steps are done with the refined set (separated into training and validation sets) for all the models. In order to reduce overfitting of the training data, we use an early stopping procedure, finishing the training when the performance on the validation set has converged. To evaluate the convergence of the training process, we evaluate five error metrics (a computation sketch is given after the list):
* **Root mean squared error (RMSE)**
* **Mean absolute error (MAE)**
* **Coefficient of determination R squared (R2)**: proportion of the variation of the dependent variable (binding affinity) that is predictable from the independent variable (prediction of the model).
* **Pearson correlation coefficient (Pearson)**: Linear correlation between two variables (binding affinity and prediction of the model). It ranges between \(-1\) and \(+1\).
* **Spearman coefficient**: Monotonic correlation coefficient. It ranges between \(-1\) and \(+1\). A Spearman correlation of \(+1\) or \(-1\) occurs when a variable is a perfect monotone function of the other.
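The sketch referenced above computes these five metrics with NumPy and SciPy on illustrative random data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_pred - y_true
    ss_res = np.sum(err**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    return {"RMSE": float(np.sqrt(np.mean(err**2))),
            "MAE": float(np.mean(np.abs(err))),
            "R2": float(1 - ss_res / ss_tot),
            "Pearson": float(pearsonr(y_true, y_pred)[0]),
            "Spearman": float(spearmanr(y_true, y_pred)[0])}

rng = np.random.default_rng(0)
y = rng.normal(6, 2, 290)                    # e.g., core-set binding affinities
print(metrics(y, y + rng.normal(0, 1, 290))) # predictions are a noisy stand-in
```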
Fig. 6 shows the evolution of the five error metrics in the validation set, for the classical CNN and the hybrid CNN with 300 quantum gates. We see that for both cases, all the error metrics have stabilized after 50 epochs. Further training the models with the same data could lead to overfitting of the training data, decreasing the generalization capacity. For this reason, we stopped the training of all the models at 50 epochs. Fig. 6 also shows that the Pearson and Spearman coefficients oscillate more than the other error metrics, even when the training has converged. For this reason, we believe that, in this case, the RMSE, MAE and R2 are better measurements of convergence of the models.
Once the models have been trained, we evaluate their performance on the test set. Fig. 7 shows the five error metrics, evaluated on the test set, for all the CNN models studied in this work. We compare the performance of the hybrid models with 20-600 quantum gates with the performance of the classical CNN. The results show that, in general, the performance of the hybrid CNN models increases with the number of quantum gates until it reaches roughly the same performance as the classical CNN, at 300 quantum gates. From that point, the models with 400, 500, and 600 gates oscillate around the classical performance and do not improve significantly with the number of quantum gates. Therefore, we conclude that the number of quantum gates affects the performance of the model for shallow quantum circuits, and the performance stabilizes once the quantum circuits reach a certain depth. The minimal number of gates needed to achieve classical performance in this case is around 300 quantum gates. Thus, for a suitable choice of quantum circuits, decreasing the complexity of the CNN does not decrease its predictive performance. Fig. 7 also shows the performance of the hybrid model constructed from the Ising model. We see that its performance is slightly worse than that of the optimal G3 hybrid model.
The main motivation for designing a hybrid CNN model was to reduce the complexity and thus the training time of the neural network. A measure of complexity that does not depend on the hardware where the model is trained is the number of training parameters. Table 1 shows the training parameters of the classical CNN and all the hybrid CNN models.
The classical CNN has around 10 million parameters, while the hybrid CNNs have around 8 million parameters, a 20% reduction in model complexity. The training times of the models depend on the hardware where the training is executed. Training the CNNs with only CPUs requires much longer execution times. Using GPUs highly accelerates the training process, reducing the training time from many days to hours. In our experiments, the models have been trained using only CPUs and with two types of GPUs. The details of the hardware used are shown in Table 2.
The improvement in training times with the hybrid model over the classical varies from 26% to 42%. Using more powerful GPUs reduces the difference in training times, but the hardware is also more costly. The difference in training times
\begin{table}
\begin{tabular}{c c c c} Hardware & Hybrid CNN & Classical CNN & Difference \\ \hline Azure CPU & 10.7 days & 18.3 days & 42\% \\ Azure GPU & 24h & 39h & 38\% \\ Purdue Anvil GPU & 16.3h & 22.1h & 26\% \\ \hline \hline & \multicolumn{3}{c}{Training parameters} \\ \hline All hardware & 8088499 & 10137129 & 20\% \\ \hline \end{tabular}
\end{table}
Table 1: Training times and training parameters
in all cases is limited by the difference in training parameters, which is a hardware-agnostic measure of complexity.
After analyzing the results from the models trained with the refined set, we repeated the experiments training the models with the general set. The general set has almost three times more data than the refined set, and thus the training takes more time and computational resources. We observed that the models required more epochs for the performance to converge in the validation set. The performance metrics evaluated in the test set are displayed in Fig. 8. We see that the performance results are equivalent to the ones from the models trained with the refined set. The performance of the hybrid G3 models increases with the number of quantum gates until it converges at around 300 gates. Then, the performance oscillates around the classical performance. The Ising model has suboptimal performance compared to the classical CNN or the hybrid G3 CNN with 300 gates. We conclude that training the models with the general set leads to results equivalent to training with the refined set, but it requires longer training times and more computational resources.
CNNs are widely used models to learn from data such as time series, images or volumetric representations. Their goal is to unravel hidden patterns from the input data and use them to predict the target. Thus, the complexity of a CNN model highly depends on the complexity of the data. Hybrid quantum-classical CNN models can help reduce the number of parameters of the neural network while maintaining its prediction capacity. One natural question is how this reduction of training parameters scales with the size of the data. Consider a sample of size \((C,N,N,N)\), where \(C\) is the number of features and \(N\) is the size of the volume side. The reduction of model complexity corresponds to the number of parameters of the first layer of the network. Therefore, the reduction of training parameters scales linearly with the number of features \(C\). The number of training parameters does not explicitly depend on \(N\), because each filter is applied locally to a portion of the data, as many times as needed to cover the whole sample. However, when the dimensionality of the data increases, usually more filters are needed for the CNN to converge. As the data complexity increases, more complex models are needed to learn useful information from it.
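A back-of-the-envelope sketch of this scaling argument is given below. It assumes, as in the fixed-circuit setting described above, that the quantum convolutional layer contributes no classical trainable weights, so the saving equals the parameter count of the replaced first 3D convolutional layer; the filter counts are illustrative.

```python
def first_layer_params(C, F, k=3):
    """Parameters of a classical 3D conv layer: F filters of size k^3
    acting on C input features (bias terms ignored for simplicity)."""
    return F * C * k ** 3

# The saving grows linearly with the number of features C and does not
# explicitly depend on the spatial size N of the volume.
for C in (8, 16, 32):
    print(C, first_layer_params(C, F=64))
```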
After analyzing the results of the noiseless quantum circuits, we perform noisy simulations for three different quantum channels and analyze the corresponding performance. An example of the output of a quantum circuit for different noise models and different error probabilities is shown in Fig. 9. We see that the three noise models reduce the probability amplitude of the circuits' outputs. The main difference between the behavior of the noise models is that the phase damping channel reduces the probability amplitude more slowly than the other two models. In all cases, when the error probability reaches \(p=0.03\), the quantum information is lost, since the amplitude peaks can no longer be distinguished. On the other hand,
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline & Azure CPU & Azure GPU & Purdue Anvil GPU \\ \hline CPU & Intel Xeon E5-2673 v3 2.4 GHz & Intel Xeon E5-2690 v3 & 2 x 3rd Gen AMD EPYC 7763 \\ Cores & 4 & 6 & 128 \\ GPU & - & 1 x NVIDIA Tesla K80 & 4 x NVIDIA A100 \\ Memory & 14GB & 56GB & 512GB \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hardware specification
Figure 6: Error metrics (RMSE, MAE, R2, Pearson and Spearman) evaluated on the validation set as a function of the training epochs. The left plot shows the training of the classical CNN and the right plot the hybrid CNN with 300 quantum gates. Both models converged after 50 epochs.
when the error rate is smaller than \(p=0.03\), the DRER algorithm can successfully mitigate the noisy outputs. An example of the performance of the DRER algorithm for \(p=0.01\) is shown in Fig. 10. Even though the noise of the quantum device significantly reduces the amplitudes of the distribution, the DRER algorithm can recover the original amplitudes with significant accuracy. For every noise model and error rate \(p\), we performed a hyperparameter optimization to obtain the best linear model to mitigate the quantum errors. The results are shown in Table 3. Apart from evaluating the mean squared error (MSE), we also evaluate the tendency accuracy, that is, the proportion of times the DRER algorithm modifies the output in the correct direction. Let \(y_{\text{noisy}}\), \(y_{\text{noiseless}}\), \(y_{\text{mitigated}}\) be the noisy, noiseless and mitigated counts, respectively. Then, the tendency accuracy measures the proportion of times \(|y_{\text{mitigated}}-y_{\text{noiseless}}|<|y_{\text{noisy}}-y_{\text{noiseless}}|\). Table 3 shows that when \(p\leq 0.01\) the MSE of the mitigated circuits is smaller than the MSE of the noisy circuits. On the other
Figure 7: Evaluation of the five error metrics (RMSE, MAE, R2, Pearson and Spearman) in the core set. Comparison of the hybrid CNN models constructed from the Ising model and the G3 family with 20, 50, 100, 200, 300, 400, 500 and 600 gates, together with the classical CNN. All the models have been trained with the refined set.
hand, for \(p=0.03\) the MSE of the mitigated circuits is similar to or even larger than the MSE of the noisy circuits, and the tendency accuracy is barely better than random guessing. This result agrees with Fig. 9, since the noisy simulations with \(p=0.03\) are essentially a constant value. Table 3 also shows that the tendency accuracy increases as the error probability \(p\) decreases. For the depolarizing quantum channel, the tendency accuracy reaches the value \(0.8\) when \(p=0.008\), and increases to \(0.89\) with \(p=0.001\). The amplitude damping noise seems to be the hardest to mitigate, since its tendency accuracy increases more slowly than that of the other noise models.
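The tendency accuracy defined above reduces to a one-line comparison over the test circuits, as in this minimal NumPy sketch:

```python
import numpy as np

def tendency_accuracy(y_noisy, y_noiseless, y_mitigated):
    """Fraction of counts that mitigation moves closer to the ideal value,
    i.e. |y_mitigated - y_noiseless| < |y_noisy - y_noiseless|."""
    closer = np.abs(y_mitigated - y_noiseless) < np.abs(y_noisy - y_noiseless)
    return closer.mean()
```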
All in all, these results show that as long as the error rates are smaller than \(p=0.01\), the DRER algorithm can successfully mitigate the errors introduced by the quantum device in the quantum convolutional layer with 300 gates.
## IV Conclusions
Understanding the binding affinity of a drug candidate can provide valuable insights into its potential efficacy and help to identify potential side effects. Additionally, predicting the binding affinity can help with the design of new molecules that bind more strongly to their target protein. This is especially important in the drug development process when dealing with new treatments that work on previously unexplored biological mechanisms. For this reason, designing efficient computational methods that can accurately predict the binding affinity between a molecule and a target protein is essential to speed up the drug discovery process. Deep learning methods such as 3D CNNs provide promising results in this aspect, since they learn directly from the atomic structure of the protein-ligand pair. However, one of the biggest challenges of deep learning methods is the high complexity of the networks, which require learning millions of training parameters. This fact makes the training process long and costly, limiting the exploration of different network architectures. Quantum machine learning is a field that seeks to leverage the advantages of quantum computing to improve machine learning algorithms.
Figure 10: Example of an output of the quantum circuit used in the quantum convolutional layer for three noise models and error rate \(p=0.01\), together with the output of the error mitigation algorithm. For easier visualization, only the first 20 outputs are displayed in the figure.
Figure 9: Example of an output of the quantum circuit used in the quantum convolutional layer for three noise models and different error rates. For easier visualization, only the first 20 outputs are displayed in the figure.
Because of the exponential scaling of the Hilbert space, quantum computers have the ability to handle large and high-dimensional datasets and speed up machine learning algorithms. In this work we present a hybrid quantum-classical 3D CNN, which reduces the complexity of the classical 3D CNN while maintaining optimal prediction performance. With proper design of the quantum circuits, the hybrid CNN reduces the number of training parameters by 20%, which implies a reduction of training times of 26%-42%, depending on the hardware where the algorithm is executed. Apart from testing the performance of the algorithm on classical hardware, our work also demonstrates the potential effectiveness of the method on noisy real hardware aided with a simple error mitigation technique. Our results show that if the error probability is smaller than 0.01, a commonly-used error mitigation technique can accurately recover the noiseless outputs of the quantum circuit. All in all, this work shows how quantum machine learning can reduce the complexity and long training times of classical neural networks by leveraging the advantages of quantum computing.
###### Acknowledgements.
The project that gave rise to these results received the support of a fellowship from la Caixa Foundation (ID 100010434). The fellowship code is LCF/BQ/DR20/11790028.
|
2302.02407 | HyPHEN: A Hybrid Packing Method and Optimizations for Homomorphic
Encryption-Based Neural Networks | Convolutional neural network (CNN) inference using fully homomorphic
encryption (FHE) is a promising private inference (PI) solution due to the
capability of FHE that enables offloading the whole computation process to the
server while protecting the privacy of sensitive user data. Prior FHE-based CNN
(HCNN) work has demonstrated the feasibility of constructing deep neural
network architectures such as ResNet using FHE. Despite these advancements,
HCNN still faces significant challenges in practicality due to the high
computational and memory overhead. To overcome these limitations, we present
HyPHEN, a deep HCNN construction that incorporates novel convolution algorithms
(RAConv and CAConv), data packing methods (2D gap packing and PRCR scheme), and
optimization techniques tailored to HCNN construction. Such enhancements enable
HyPHEN to substantially reduce the memory footprint and the number of expensive
homomorphic operations, such as ciphertext rotation and bootstrapping. As a
result, HyPHEN brings the latency of HCNN CIFAR-10 inference down to a
practical level at 1.4 seconds (ResNet-20) and demonstrates HCNN ImageNet
inference for the first time at 14.7 seconds (ResNet-18). | Donghwan Kim, Jaiyoung Park, Jongmin Kim, Sangpyo Kim, Jung Ho Ahn | 2023-02-05T15:36:51Z | http://arxiv.org/abs/2302.02407v2 | # HyPHEN: A Hybrid Packing Method and Optimizations for Homomorphic Encryption-Based Neural Networks
###### Abstract
Convolutional neural network (CNN) inference using fully homomorphic encryption (FHE) is a promising private inference (PI) solution due to the capability of FHE that enables offloading the whole computation process to the server while protecting the privacy of sensitive user data. However, prior FHE-based CNN (HCNN) implementations are far from being practical due to the high computational and memory overheads of FHE. To overcome this limitation, we present HyPHEN, a deep HCNN construction that features an efficient FHE convolution algorithm, data packing methods (hybrid packing and image slicing), and FHE-specific optimizations. Such enhancements enable HyPHEN to substantially reduce the memory footprint and the number of expensive homomorphic operations, such as ciphertext rotation and bootstrapping. As a result, HyPHEN brings the latency of HCNN CIFAR-10 inference down to a practical level at 1.40s (ResNet20) and demonstrates HCNN ImageNet inference for the first time at 16.87s (ResNet18).
Private Inference, Fully Homomorphic Encryption
## I Introduction
Private inference (PI) has recently gained the spotlight in the MLaaS domain as cloud companies must comply with privacy regulations such as the GDPR [1] and the HIPAA Act [2]. PI enables inference services at the cloud server while protecting the privacy of the client and the intellectual properties of the service provider. For instance, hospitals can provide a private medical diagnosis of diseases, and security companies can provide private surveillance systems without accessing clients' sensitive data [3, 4].
Fully homomorphic encryption (FHE) [5] is a cryptographic primitive that enables direct evaluation of a rich set of functions on encrypted data, making it especially suited for PI among other cryptographic candidates [6, 7]. An FHE-based PI solution uniquely features 1) full offloading of the computation process to the server, 2) succinct data communication requirements, and 3) non-disclosure of any information about the model except the inference result. Such benefits have driven researchers to investigate FHE-based PI of convolutional neural networks (HCNN) [8, 9, 10, 11, 12].
However, FHE incurs high computational and memory overheads, hindering the adoption of HCNN for real-world services. Several prior works have tried to mitigate the overheads [11, 13], but HCNN implementations still stay at a proof-of-concept level and target elementary problems such as MNIST and CIFAR-10. Gazelle [14] proposes a convolution algorithm for homomorphic encryption, which has been widely adopted by the following studies [11, 15, 16]. Gazelle and most prior PI CNN implementations have avoided the high overheads of FHE by restricting the CNN models to shallow ones [8], or by mixing in other cryptographic primitives, such as multi-party computation (MPC) [15, 17, 18]; however, MPC solutions require extra user intervention and incur the associated data communication overheads. The state-of-the-art FHE solution [11] modifies Gazelle's algorithm by densely mapping data into a ciphertext before entering the next convolution layer, and optimizes it for FHE circumstances. Despite such efforts, prior implementations take tens of minutes [11] to even hours [13] to perform a single HCNN inference of CIFAR-10 (ResNet20).
We propose HyPHEN, which mitigates the huge overhead of HCNN with an optimized convolution algorithm and packing methods. We observe that, in HCNN, rotation operations take up the majority of the runtime (see Appendix A). Most of the rotations are performed to maintain the data organization inside an encrypted tensor uniformly throughout HCNN inference. The _lazy-SISO_ algorithm and _hybrid packing (HP)_ method of HyPHEN allow switching between multiple data organizations to get rid of excessive rotation operations. Also, our _image slicing_ method tackles the data expansion problem of HCNN by enhancing data reuse with only 2.5% runtime computational overhead, saving hundreds of GBs of memory space. By these means, our GPU implementation of HyPHEN achieves 1.40s of execution time for encrypted CIFAR-10 inference with the ResNet20 model, reaching a real-time level. Also, HyPHEN demonstrates HCNN inference on ImageNet with the ResNet18 model for the first time, taking 16.87s. The key contributions of the paper are as follows:
* We devise an optimized algorithm for convolution, which alternates between two methods, _channel-aligned convolution (CAConv)_ and _replication-aligned convolution (RAConv)_, where such composition is enabled by our novel lazy-SISO algorithm.
* We propose a _hybrid packing (HP)_ method that can utilize the entire slots of a ciphertext with a marginal increase in the number of rotations.
* We show that the huge memory footprint of HCNN deteriorates its performance and propose an _image slicing_ method that can save hundreds of GBs of memory space with negligible overhead.
## II Background
### _Fully Homomorphic Encryption (FHE)_
FHE is a set of public key encryption schemes that enable computation on encrypted data. Among several popular FHE schemes, RNS-CKKS [19] has been broadly adopted in the PI domain as it supports fixed-point numbers and _slot batching_. A _plaintext_ in RNS-CKKS is an unencrypted degree-\((N{-}1)\) polynomial in a cyclotomic polynomial ring, \(R_{Q}\!=\!\mathbb{Z}_{Q}[X]/(X^{N}{+}1)\). A plaintext maps to a message which is a vector of \(N/2\) real (or complex) numbers, i.e., a single plaintext batches \(N/2\)_slots_, each of which can store a complex or real number. CKKS encrypts a plaintext into a _ciphertext_ in \(R_{Q}^{2}\). \(Q\) is a ring modulus represented by a set of prime moduli obtained by the Chinese Remainder Theorem (CRT) as \(Q=\prod_{i=0}^{l}q_{i}\;(0\leq l\leq L)\). \(L\) and \(l\) denote the initial and current _level_ of a ciphertext, which is an HE-specific resource that determines the number of multiplications applicable to a given ciphertext. We also denote the associated level of ring modulus using subscript as \(Q_{L}\) or \(Q_{l}\). We denote the plaintext and ciphertext of a message **a** as \(\langle\textbf{a}\rangle\) and **[a]**. HE operations of addition, multiplication, and rotation can be described as follows:
* HE.Eval([**a**],[**b**],\(f_{l}\)) = HE.Eval([**a**],\(\langle\textbf{b}\rangle\),\(f_{l}\)) = [\(f_{l}\)(**a**,**b**)]
* HE.Rotate([**a**],\(r\)) = [rot(**a**,\(r\))]
\(f_{l}\) denotes linear operations, either addition or (Hadamard) multiplication. rot(**a**,\(r\)) represents cyclically shifting vector **a** by \(r\) to the left. Unlike addition and rotation, multiplication in RNS-CKKS requires an additional rescale operation, which consumes a level by mapping \(ct\in R_{Q_{l}}^{2}\) to \(ct^{\prime}\in R_{Q_{l-1}}^{2}\). If a ciphertext has no level left after a series of multiplications, _bootstrapping_ [20] is needed to reconcile the levels and allow further operations. Bootstrapping, the most costly operation in HE, consists of multiple HE operations including rescale operations. After bootstrapping, the level of the resulting ciphertext becomes \(L^{\prime}\!=\!(L-L_{b})\) where \(L_{b}\) is the depth of rescale operations in the bootstrapping circuit. As it is beneficial to perform many operations before bootstrapping, \(L\) should be sufficiently larger than \(L_{b}\). However, large \(L\) decreases the _security level_, which should be high enough to tolerate cryptographic attacks. The security level is roughly proportional to \(N/L\). Considering the security requirement of HE, large \(L\) requires large \(N\) (\(\geq 2^{15}\)). Thus, prior works on FHE [21, 22, 23] target \(N=2^{15}\) to \(2^{17}\).
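To make the slot semantics concrete, the following cleartext mock treats a "ciphertext" as a plain length-\(N/2\) vector: Hadamard operations act slot-wise and Rotate is a cyclic left shift. It deliberately ignores everything cryptographic (noise, levels, rescaling) and is only meant to fix the notation used in the rest of the paper.

```python
import numpy as np

slots = 8                       # stands in for N/2 message slots
a = np.arange(slots, dtype=float)
b = np.ones(slots)

add = a + b                     # HE.Eval([a], [b], add)  -> [a + b]
mul = a * b                     # HE.Eval([a], <b>, mult) -> [a * b], costs a level
rot = np.roll(a, -3)            # HE.Rotate([a], 3)       -> [rot(a, 3)]
print(add, mul, rot, sep="\n")
```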
Table I shows the execution time of HE operations on a system specified in Section IV-A. We measured the execution time of each operation at the initial level (\(L^{\prime}\)) of a ciphertext, and thus the execution time may decrease for ciphertexts with lower levels. Bootstrapping consumes over two orders of magnitude longer runtime than other operations, but bootstrapping does not occur as frequently as others. Except for bootstrapping, Rotate and MulCt are the most time-consuming operations in HE, which is due to the expensive key-switching procedure.
### _Convolution on Homomorphic Encryption_
In this subsection, we describe the baseline HCNN algorithms. We represent input and output tensors with tuples \(\{w_{i},h_{i},c_{i}\}\) and \(\{w_{o},h_{o},c_{o}\}\), and convolution layers with \(\{f_{w},f_{h},c_{i},c_{o}\}\). We denote the stride of convolution as \(s\) and assume \(padding=1\) for simplicity.
**Single-input, single-output channel convolution (SISO):** Gazelle [14] proposes efficient convolution algorithms on HE, referred to as SISO. Figure 1 shows SISO for \(s=1,2\). Filter elements are separated into \(f_{w}f_{h}\) plaintexts. Each slot in the \(i\)-th plaintext stores \(k_{i}\) or 0 (punctured) depending on whether \(k_{i}\) is involved in the computation of the output pixel at the same slot. SISO operates as follows: 1) rotate an encrypted input image with different indices according to the plaintext filters, 2) perform multiplication, and 3) accumulate the multiplication results to obtain the output.
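The cleartext sketch below mirrors the three SISO steps for \(s=1\) with zero padding: one rotation and one punctured-plaintext multiplication per filter element, followed by accumulation. It uses plain NumPy arrays in place of ciphertexts (np.roll stands in for HE.Rotate) and is checked against a direct zero-padded convolution; it is an illustration of the algorithm, not an HE implementation.

```python
import numpy as np

def siso_cleartext(img, kernel):
    h, w = img.shape
    fh, fw = kernel.shape
    ct = img.reshape(-1).astype(float)     # row-major "ciphertext"
    out = np.zeros_like(ct)
    rows, cols = np.indices((h, w))
    for dy in range(-(fh // 2), fh // 2 + 1):
        for dx in range(-(fw // 2), fw // 2 + 1):
            # Punctured plaintext: filter value where the shifted source
            # pixel lies inside the image, 0 where zero padding applies.
            valid = ((0 <= rows + dy) & (rows + dy < h) &
                     (0 <= cols + dx) & (cols + dx < w))
            pt = np.where(valid, kernel[dy + fh // 2, dx + fw // 2], 0.0)
            # One rotation + one plaintext multiplication per element.
            out += np.roll(ct, -(dy * w + dx)) * pt.reshape(-1)
    return out.reshape(h, w)

img = np.arange(16.0).reshape(4, 4)
k = np.ones((3, 3)) / 9.0                  # 3x3 mean filter
pad = np.pad(img, 1)
ref = np.array([[(pad[i:i + 3, j:j + 3] * k).sum() for j in range(4)]
                for i in range(4)])
print(np.allclose(siso_cleartext(img, k), ref))   # True
```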
**Channel-aligned convolution (CAConv):** For multiple channels, convolution is performed in a SIMD manner. If the size of a channel (\(w_{i}h_{i}\) or \(w_{o}h_{o}\)) is smaller than the number of slots in a ciphertext, multiple channels can be placed in a ciphertext. For example, if \(slot=2^{15}\) and an input channel is sized \(w_{i}h_{i}=32\times 32\) as in the input image of the CIFAR-10 dataset, \(\frac{slot}{w_{i}h_{i}}=32\) channels can be placed in a single ciphertext in an aligned manner (i.e., _channel-aligned_). Suppose we perform convolution on a channel-aligned input ciphertext with \(c_{i}=\frac{slot}{w_{i}h_{i}}\) channels. First, SISO is performed on \(c_{i}\) input
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline Operation & AddPt & AddCt & MulPt & MulCt & Rescale & Rotate & Boot \\ Time (ms) & 0.572 & 0.202 & 0.506 & 17.3 & 3.90 & 15.5 & 2160 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Benchmark of HE operations averaged over 100 iterations on CPU (64 threads). The Pt and Ct postfixes represent ciphertext-plaintext and ciphertext-ciphertext operations, respectively.
Fig. 1: Single-input and single-output channel convolution (SISO). Ciphertexts and plaintexts are illustrated as 2D matrices, but are actually stored in a 1D manner with each matrix row concatenated.
channels in a SIMD manner (see Figure 2(a)), which produces \(c_{i}c_{o}\) convolution outputs \(MK^{(i,j)}\) (\(1\leq i\leq c_{i}\), \(1\leq j\leq c_{o}\)). To compute the result for the \(k\)-th output channel, \(\sum_{i=1}^{c_{i}}MK^{(i,k)}\) is accumulated by _RaS (Rotate and Sum)_. RaS is repeated for each of \(c_{o}\) output channels. Throughout this paper, we refer to this convolution that takes a channel-aligned ciphertext as the input as _channel-aligned convolution (CAConv)_.
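When the channel count is a power of two, the RaS accumulation itself needs only \(\log_{2}c_{i}\) rotations, as in this cleartext sketch (np.roll again stands in for HE.Rotate):

```python
import numpy as np

def rotate_and_sum(ct, ch_size, num_ch):
    """Accumulate num_ch channel-aligned partial sums into the first
    channel's slots using log2(num_ch) rotations."""
    assert num_ch & (num_ch - 1) == 0, "power-of-two channel count assumed"
    shift = ch_size * num_ch // 2
    while shift >= ch_size:
        ct = ct + np.roll(ct, -shift)
        shift //= 2
    return ct

ct = np.arange(16.0)            # 4 channels of 4 slots each
out = rotate_and_sum(ct, ch_size=4, num_ch=4)
print(out[:4])                  # ct[0:4] + ct[4:8] + ct[8:12] + ct[12:16]
```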
**Input repetition:** CAConv can be further optimized when the input tensor is smaller than the number of slots in a ciphertext. \(\frac{slot}{w_{i}h_{i}c_{i}}\) copies of the input tensor can be placed in a ciphertext, then \(\frac{slot}{w_{i}h_{i}c_{i}}\) output channels can be computed together with a single input ciphertext [11].
**Replication-aligned convolution (RAConv):** The output ciphertexts of CAConv hold multiple replications of a single output channel (i.e., _replication-aligned_). To perform another convolution, additional operations are required to realign the ciphertexts into the channel-aligned form, which we refer to as _IR (Image Realignment)_. [12] constructs a framework supporting a broad scope of convolution algorithms for ciphertexts with various alignments. We build on the framework and design _replication-aligned convolution (RAConv)_. In RAConv, each replication-aligned ciphertext is multiplied with plaintexts containing filter weights for different output channels, whose results can be accumulated to produce a single channel-aligned ciphertext (see Figure 2(b)). Therefore, CAConv and RAConv can be performed in an alternating manner to get rid of costly IR computation when performing multiple convolutions.
**Packing:** Strided convolution (\(s>1\)) using SISO creates a gap between valid values (see Figure 1(c)). A ciphertext with a gap underutilizes its slots, leading to throughput degradation. While Gazelle removes the gap by a client-aided re-encryption process, non-interactive PI should remove the gap through masking and rotation, which incur additional computation and level consumption. [11] propose a _multiplexed packing_ method that can be combined with CAConv (_MP-CAConv_) to mitigate the overheads. In the IR stage of MP-CAConv, the gap is filled with other channels (see Figure 3(b)), which we refer to as the _repacking process_. Therefore, IR incorporates both realigning and repacking processes in MP-CAConv.
There are many variants of the convolution algorithm for HE [11, 13, 15, 16, 24], but most of them resort to SISO. [12] devise a convolution algorithm based on tile tensoring rather than SISO, which can be an efficient alternative to SISO for specific HE parameter settings and image sizes. However, we find that tile tensoring incurs an excessive amount of bootstrapping for FHE circumstances in general, so we exclude it from the analysis and mainly focus on SISO convolution in this paper.
### _Activation Function on Homomorphic Encryption_
Non-linear activation functions, such as ReLU, cannot be used directly in HCNN. They must be replaced by polynomial functions, as HE only supports addition and multiplication operations. Direct replacement of non-linear functions with approximate polynomials requires the approximation error to be kept low over a wide range to retain the accuracy of a CNN model. [25] approximate ReLU while keeping the \(l_{1}\) norm of the approximation error lower than \(2^{-13}\) in the range of [-50, 50], using a composition of 15-, 15-, and 27-degree polynomials. The approximation-based approach has the benefit that it can be applied to pre-trained neural networks. However, evaluation of high-degree polynomials imposes a significant runtime overhead in HCNN inference.
Another approach is to train neural networks with low-degree polynomial activation functions as in [26, 27, 28, 29, 30]. While this approach requires retraining, the operational cost is much lower compared to high-degree polynomials. Recently, AESPA [31] has demonstrated that CNNs trained with low-degree polynomials can achieve accuracy equivalent to ReLU-based networks across various CNN networks and image datasets. AESPA replaces ReLU and batch normalization (BN) with the
Fig. 3: Comparison of gap packing methods to fill gap induced by downsampling layers. \(a_{i}^{(j)}\) denotes the \(i\)-th element in the \(j\)-th channel of a tensor \(a\).
Fig. 2: CAConv and RAConv. The single superscript denotes channel and the superscript pair denotes (input channel, output channel). We simplify the notation of \(M^{(a)}K^{(a,b)}\) as \(MK^{(a,b)}\).
composition of orthogonal basis polynomials and basis-wise BN. During inference, the composition turns into a second-degree polynomial, so AESPA can drastically reduce runtime cost for activation. Therefore, we adopt AESPA for our HCNN implementation.
### _Threat Model_
We adopt the same threat model as prior PI works on HE [11, 8]. A client sends encrypted data to an untrusted server. The server performs CNN inference using HE operations and returns inference results to the client. The client decrypts the resulting ciphertext to obtain the private result. The server only holds the client's public keys and cannot decrypt any intermediate ciphertexts in the inference process. The client learns no information about the processing at the server other than the result.
## III HyPHEN Construction
We introduce HyPHEN, our HCNN solution which focuses on reducing the memory footprint and the number of expensive homomorphic operations, including rotation and bootstrapping. We implement ResNet [32] models for CIFAR-10 and ImageNet inference tasks with HyPHEN to demonstrate its effectiveness.
### _RAConv and Lazy-SISO_
The main performance bottleneck of CAConv is the massive number of rotations required for RaS and IR, which take up most of the time in CAConv. For example, rotations for RaS and IR respectively account for 49% and 43% of the total rotations in an MP-CAConv implementation of ResNet20. Furthermore, IR consumes an additional level for masking to extract the values. RAConv does not require RaS and reduces the overheads for IR when alternately performed with CAConv, but it requires more rotations for SISO. While CAConv requires \((f_{w}f_{h}-1)\) rotations for SISO, RAConv requires \((f_{w}f_{h}-1)\) rotations for each of \(c_{i}\) input ciphertexts.
We devise the lazy-SISO algorithm, which reduces the number of rotations for RAConv. With lazy-SISO, rotations are delayed to be performed after multiplications with weight plaintexts. Then, the accumulation step in RAConv transforms into an RaS step (see Figure 2(c)), which only requires \((f_{w}f_{h}-1)\) rotations. Without lazy-SISO, simply alternating between CAConv and RAConv does not have much performance benefit due to additional SISO overheads. See Appendix A for more details of lazy-SISO.
### _Hybrid Packing_
SISO convolution suffers from low slot utilization in ciphertexts due to the gap created by strided convolutions, which leads to severe throughput degradation in HCNN. The prior state-of-the-art HCNN implementation (MP-CAConv) mitigates the underutilization of slots by packing input ciphertexts as densely as possible using multiplexed packing and input repetition (see Section II-B). However, such dense packing causes a lot of additional rotations to adjust the data organization because SISO still produces a void-packed ciphertext (see Figure 3(a)) even if the input ciphertext is densely packed. MP-CAConv has to manually restore the dense data organization by masking off invalid values and filling the gap with the values from different channels during the IR process. As a result, MP-CAConv ends up requiring an additional \(c_{o}\) rotations to maintain multiplexed packing and \(\log\frac{slot}{w_{o}h_{o}c_{o}}\) rotations to maintain input repetition.
To reduce the repacking overhead between convolutional layers, we propose a novel _hybrid packing (HP)_ method. HP fills the gap with duplicates of multiple channels (see Figure 3(d)). We design HP based on two key observations. First, applying convolution over a duplicate-packed ciphertext (Figure 3(c)) produces a multiplex-packed output ciphertext as in Figure 3(b). Second, converting a void-packed ciphertext into a duplicate-packed ciphertext requires fewer rotations than converting it into a multiplex-packed ciphertext. Duplicate packing only needs \(O(\log(gap_{size}))\) rotations while multiplexed packing requires \(O(gap_{size})\) rotations.
We represent a hybrid-packed ciphertext by a pair of \((m,d)\), where \(m\) is the number of multiplexed channels, and \(d\) is the number of duplicates. For example, Figure 3(d) shows \((m,d)\!=\!(2,2)\) HP. Data organization for HP can differ even for the same \((m,d)\) parameter, and depends on the convolution algorithm. The HP parameter \((m,d)\) changes when CAConv or RAConv is performed as shown in Figure 4. We denote \((m,d)\) of the input and output ciphertexts as (\(m_{\text{in}}\), \(d_{\text{in}}\)) and (\(m_{\text{out}}\), \(d_{\text{out}}\)). Input repetition is no longer required as HP with larger \(d_{\text{in}}\) can be used instead. During convolution, duplicates of the same element produce output elements in different channels (\(c^{(1)}\)-\(c^{(4)}\) in Figure 4(a), and \(c^{(1)}\) and \(c^{(17)}\) in Figure 4(b)). Therefore, after the IR process, we get output ciphertexts with the HP parameter set to \((m_{out},d_{out})\!=\!(d_{in},m_{in})\). So, after a series of CAConv and RAConv, the HP parameter returns to the initial \((m_{in},d_{in})\). The complete procedures of RAConv and CAConv with HP are described in more detail in Appendix A.
Compared to MP-CAConv, HP significantly reduces the rotations in RaS and IR. The rotation counts of MP-CAConv [11] and our HP convolutions are shown in Table II. For both HP convolution methods, the product of the numbers of input and output ciphertexts remains constant (\(ct_{in}\cdot ct_{out}=\frac{w_{i}h_{i}c_{i}c_{o}}{slot}\)). Compared to MP-CAConv, the number of rotations for RaS is reduced by about \(ct_{in}\) times for both HP-CAConv and HP-RAConv. Also, the number of rotations required for repacking decreases from \(c_{o}\) of MP-CAConv to mere \(\log m_{in}\) per output ciphertext in our HP convolutions. Although HP requires slightly more rotations for SISO (\(ct_{min}\) of HP convolutions is usually higher than \(ct_{in}\) of MP-CAConv), HP reduces the overall number of rotations required for convolution. We also explore various combinations for the \((m,d)\) pair to minimize the total number of rotations. The choice of \((m,d)\) decides the \(ct_{in}\) and \(ct_{out}\) values and creates a trade-off between SISO, RaS, and IR costs, and also affects the number of ciphertexts we have to perform bootstrapping with. We provide an in-depth performance analysis with regard to the choice of \((m,d)\) in Appendix A.
### _Image Slicing_
In HCNN inference, weight plaintexts take the majority of the memory footprint. In the ResNet models in our experiment, plaintexts occupy tens to hundreds of GBs of memory space. The plaintext packing method of SISO replicates each filter element \(w_{i}h_{i}\) times to form a weight plaintext (see Figure 1(b)), which requires a total of \(w_{i}h_{i}f_{w}f_{h}c_{i}c_{o}\) plaintext slots for a weight tensor. Such plaintext data expansion increases the cost of HCNN inference in terms of storage capacity and memory bandwidth. For instance, plaintexts for running ResNet18 on ImageNet take up 379GB of memory space (see Table III), which even exceeds the memory capacity of the latest GPUs, limited by the current HBM technology.
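A rough estimator of this expansion is sketched below. It assumes 8 bytes per slot as a simplification (real RNS-CKKS plaintexts carry multiple RNS limbs, so absolute numbers are only indicative), and the \(s_{i}\) argument anticipates the image-slicing factor introduced next.

```python
def weight_plaintext_bytes(wi, hi, fw, fh, ci, co, bytes_per_slot=8, s_i=1):
    """SISO weight-plaintext slots w_i*h_i*f_w*f_h*c_i*c_o, divided by the
    image-slicing reuse factor s_i (very rough memory model)."""
    return wi * hi * fw * fh * ci * co * bytes_per_slot / s_i

# Example: a 3x3 layer on 56x56 feature maps with 64 input/output channels.
gb = weight_plaintext_bytes(56, 56, 3, 3, 64, 64) / 2**30
print(f"{gb:.2f} GB without slicing, {gb / 8:.2f} GB with s_i = 8")
```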
We find a method to reduce the data expansion of plaintexts, by organizing the weights in a circular way such that they can be permuted at runtime when being multiplied with the input ciphertexts. This results in a packing method for ciphertexts shown in Figure 5, which we refer to as _image slicing_. In CAConv, the input image is divided into \(s_{i}\) (2 in the figure) slices, and different slices are chosen from each channel to be placed in a single ciphertext. This organization enables each plaintext to be reused by \(s_{i}-1\) permutations. After accumulation, the output ciphertext follows a simple replication-aligned organization. After the modified RAConv with image slicing shown in Figure 5(b), the organization returns to the image-sliced one.
Image slicing can be applied orthogonally along with HP, and reduces the memory footprint for plaintexts roughly by a factor of \(s_{i}\). The extra computation of plaintext permutation during image-sliced convolution is not significant to the overall performance because plaintext permutation is much less complex than other homomorphic operations; plaintext permutation only involves moving data elements. We use \(s_{i}=8\) for ImageNet implementations of HyPHEN, but do not apply image slicing to CIFAR-10. Image slicing requires an explicit zero padding inside a ciphertext, to which the small image size of CIFAR-10 (e.g., a 16\(\times\)16 space is required for an 8\(\times\)8 feature map) is not well suited. The zero padding overhead outweighs the benefit of image slicing, so it is not applied to CIFAR-10.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Method** & \(ct_{in}\) & \(ct_{out}\) & **SISO** & **RaS** & **IR** \\ \hline
**[11]** & \(\frac{w_{i}h_{i}c_{i}}{slot}\) & \(\frac{w_{o}h_{o}c_{o}}{slot}\) & \(ct_{in}(f_{w}f_{h}-1)\) & \(\frac{w_{i}h_{i}c_{i}}{slot}\log c_{i}\) & \(c_{o}+\log\frac{slot}{w_{o}h_{o}c_{o}}\) \\ \hline
**HP-CAConv** & \(\frac{w_{i}h_{i}c_{i}d_{in}}{slot}\) & \(\frac{c_{o}}{d_{in}}\) & \(ct_{min}(f_{w}f_{h}-1)\) & \(ct_{out}\log\frac{c_{i}}{ct_{in}}\) & \(ct_{out}\log m_{in}\) \\ \hline
**HP-RAConv** & \(\frac{c_{i}}{m_{in}}\) & \(\frac{w_{i}h_{i}c_{i}m_{in}}{slot}\) & \(ct_{min}(f_{w}f_{h}-1)\) & \(ct_{out}\log m_{in}\) & \(ct_{out}\log m_{in}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: The rotation complexity of the convolutions. We denote the numbers of input and output ciphertexts as \(ct_{in},ct_{out}\). Then, \(ct_{min}=\min(ct_{in},ct_{out})\) considering SISO and lazy-SISO.
Fig. 4: The procedure of CAConv and RAConv with hybrid packing
Fig. 5: Simplified convolution processes with image slicing. Ciphertexts are filled with image slices each represented with a channel number and a postfix: upper half of image as \(a\) and lower half as \(b\) (e.g. _3b_ represents the lower half of the 3\({}^{\text{rd}}\) channel). Ciphertexts are multiplied with plaintexts holding weight values, where \(i,j\) on each slice represents weight values for \(i\)-th input channel, and \(j\)-th output channel. Black arrow represents plaintext permutation.
### _ResNet Construction with HyPHEN_
HyPHEN combines RAConv, HP, and image slicing to build the entire CNN model. Figure 6 shows the basic block of ResNet implemented with HyPHEN. There are three considerations when deciding the order of operations. First, bootstrapping is cheaper when placed after RAConv, and not CAConv, because the number of ciphertexts is smaller for the former. Second, to match the level between the shortcut path and the main CAConv-RAConv path, bootstrapping should be placed either before residual connections diverge or after they converge. Lastly, it is beneficial to perform convolutional layers at the lowest level possible because the complexity of FHE operations such as rotation is proportional to the level \(l\) of the ciphertext.
All things put together, our ResNet basic block consumes a total of 6 levels. The level consumption of each layer is represented in the parenthesis of each block. CAConv and RAConv use HP and consume one level for each of SISO and IR. Activation uses AESPA and consumes one level. AESPA is a quadratic polynomial having different coefficients for each channel during training. During inference, we fuse the coefficients into nearby layers; the activation then becomes a simple square function \(x^{2}\). We set the ciphertext level after bootstrapping (\(L^{\prime}\) in Section II-A) to six and perform bootstrapping when the level becomes zero.
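The level bookkeeping can be checked with a toy tracker. The step costs below are those stated for the basic block (one level for each SISO, IR, and activation step), and a bootstrap resets the budget to \(L^{\prime}=6\); this is a schematic model, not a simulation of the actual library.

```python
LEVELS_AFTER_BOOT = 6
STEPS = [("CAConv-SISO", 1), ("CAConv-IR", 1), ("Activation", 1),
         ("RAConv-SISO", 1), ("RAConv-IR", 1), ("Activation", 1)]

level, boots = LEVELS_AFTER_BOOT, 0
for name, cost in STEPS * 2:            # two consecutive basic blocks
    if level < cost:                    # out of levels -> bootstrap first
        level, boots = LEVELS_AFTER_BOOT, boots + 1
    level -= cost
    print(f"{name:12s} -> level {level}")
print("bootstraps:", boots)             # one bootstrap per basic block
```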
## IV Evaluation
### _Experimental Setup_
We ran HCNN inference on CPU and GPU environments using the RNS-CKKS library HEaaN [33]. The CPU system is equipped with two AMD EPYC 7452 CPUs running at 2.35GHz (32 cores per socket) and 480GB DRAM. GPU experiments were conducted on the same system with an additional NVIDIA A100 GPU with 80GB HBM. Our HCNN inference experiments used the CIFAR-10 [34] and ImageNet [35] datasets. We evaluated ResNet20/32/44/18 trained with AESPA on PyTorch and applied the fusion technique to all the networks. Our RNS-CKKS parameters satisfy a 128-bit security level [36] with polynomial degree \(N\!=\!2^{16}\) and Hamming weight 192. The bootstrapping implementation consumes 17 levels.
### _Performance_
We first compare HyPHEN against Cheetah [18], one of the state-of-the-art HE-MPC hybrid PI protocols. As it is hard to make a fair comparison between HCNN and HE-MPC because the preferred system settings are different, we use the best result presented in each paper. Here, HyPHEN is 11.3\(\times\) faster than Cheetah on ResNet32 considering the end-to-end runtime and 5.5\(\times\) more efficient than Cheetah in terms of communication. Although the HE-MPC hybrid PI protocol is generally accepted as faster than HCNN, we show that, equipped with an efficient packing scheme and hardware acceleration, the performance of HCNN inference is comparable.
In the FHE domain, [11, 13] have previously shown ResNet on CIFAR-10; [11] reported a single-thread implementation of ResNet20 running in 2271s. HyPHEN shows a significant latency reduction on image classification: our work conducts the same task in 37.6 seconds in a 64-thread CPU environment, and further reduces it to within 1.4 seconds on GPU.
### _Sensitivity Study_
To analyze the effectiveness of each component of HyPHEN, we started from the HCNN implementation of [11] augmented with AESPA (Baseline), and gradually applied alternation between CAConv and RAConv with lazy-SISO (+ L-SISO), HP (+ HP), and image slicing (HyPHEN). We tested with CPU/GPU implementations of ResNet18 for ImageNet. The results are shown in Figure 7.
Although image slicing reduces the memory footprint of weight plaintexts by roughly \(s_{i}=8\) times, it does not lead to a reduction in latency for our CPU system. However, for the GPU system, image slicing enables the whole working set to fit in the 80GB GPU memory, so it also reduces the latency by removing the runtime overhead of copying data from the host to the GPU (Memory). Overall, image slicing has the benefits of a substantial memory footprint reduction and also a latency reduction for memory-constrained systems.
### _Execution Time Breakdown_
Table IV shows the runtimes of ResNet20/32/44 for the inference of a single CIFAR-10 image and ResNet18 for a single ImageNet image. Our ResNet20/32/44 implementations on GPU take merely a few seconds to complete. While the majority of inference time for ResNet20/32/44 is spent on bootstrapping, for ResNet18, 51.1% to 57.0% of inference time is spent on convolution as ResNet18 has four times more channels than ResNet20/32/44. Table IV also demonstrates that RAConv effectively reduces the overall runtime of the convolution layers, following the operation count analysis in Table VIII. We provide more detailed comparison with [11] in Appendix A.
### _Accuracy_
In Table V, we measured inference accuracies for CIFAR-10 running ResNet models with HyPHEN.1 Near-zero accuracy degradation (\(\leq\) 0.01%) is observed for ResNet20/32/44. HyPHEN is more robust to accuracy degradation than [11], which shows 0.09% to 0.21% accuracy degradation for ResNet20/32/44 on CIFAR-10. The difference in accuracy drop can be explained by whether the original network is executed as is (using AESPA) or an approximation has been made (using ReLU approximation).
Footnote 1: We are still experimenting on the ImageNet accuracy, and will report the accuracy of the ResNet18 model for ImageNet in the revised version.
## V Future Work
We have shown the effectiveness of HyPHEN on HCNN using ResNet as a representative family of CNN models. Exploring the application of HyPHEN to a broader scope, such as the various models in [37], is left as future work.
## VI Conclusion
In this paper, we proposed HyPHEN, an efficient private inference construction of FHE-based CNN (HCNN). Mixing two convolution methods with lazy-SISO and hybrid packing (HP) enables fast inference by significantly reducing the number of homomorphic rotations in convolution. Also, image slicing enables HyPHEN to reduce the memory footprint for high resolution image classification tasks, which is especially beneficial for memory-constrained devices. Our experiments with HyPHEN on CPU systems show 1.87\(\times\) lower latency compared to the prior state-of-the-art implementation. Using GPU acceleration, HyPHEN achieves 1.40s/2.17s/2.96s execution time for running ResNet20/32/44 for CIFAR-10, and we also showed HCNN inference of ResNet18 for ImageNet in 16.87s for the first time.
|
2303.11020 | DS-TDNN: Dual-stream Time-delay Neural Network with Global-aware Filter
for Speaker Verification | Conventional time-delay neural networks (TDNNs) struggle to handle long-range
context, their ability to represent speaker information is therefore limited in
long utterances. Existing solutions either depend on increasing model
complexity or try to balance between local features and global context to
address this issue. To effectively leverage the long-term dependencies of audio
signals and constrain model complexity, we introduce a novel module called
Global-aware Filter layer (GF layer) in this work, which employs a set of
learnable transform-domain filters between a 1D discrete Fourier transform and
its inverse transform to capture global context. Additionally, we develop a
dynamic filtering strategy and a sparse regularization method to enhance the
performance of the GF layer and prevent overfitting. Based on the GF layer, we
present a dual-stream TDNN architecture called DS-TDNN for automatic speaker
verification (ASV), which utilizes two unique branches to extract both local
and global features in parallel and employs an efficient strategy to fuse
different-scale information. Experiments on the Voxceleb and SITW databases
demonstrate that the DS-TDNN achieves a relative improvement of 10\% together
with a relative decline of 20\% in computational cost over the ECAPA-TDNN in
speaker verification task. This improvement will become more evident as the
utterance's duration grows. Furthermore, the DS-TDNN also beats popular deep
residual models and attention-based systems on utterances of arbitrary length. | Yangfu Li, Jiapan Gan, Xiaodan Lin | 2023-03-20T10:58:12Z | http://arxiv.org/abs/2303.11020v3 | # Efficient Dual-stream Time-delay Neural Network with Global-aware Filter for Speaker Verification
###### Abstract
Conventional time-delay neural networks (TDNNs) face a challenge in handling long-range context, which affects their ability to represent speaker information, particularly for long-duration utterances. Existing solutions either significantly increase model complexity or achieve a poor trade-off between local features and global context. In this paper, we introduce a novel module called Global-aware Filter layer (GF layer), which employs a set of learnable transform-domain filters between a 1D discrete Fourier transform and its inverse transform to efficiently capture global context. Additionally, we design a dynamic filtering strategy and a sparse regularization method to enhance the performance of the GF layer and prevent overfitting. Based on the GF layer, we present a dual-stream TDNN architecture called DS-TDNN for automatic speaker verification (ASV). It utilizes two unique branches to extract both local and global features in parallel and employs a straightforward yet efficient strategy to fuse different-scale information. Experiments on the Voxceleb and SITW databases demonstrate that the DS-TDNN achieves a relative 10% improvement together with a relative 20% decline in computational cost over the ECAPA-TDNN in speaker verification. This improvement will become more significant with the growth of utterance durations. Furthermore, the DS-TDNN also beats popular deep residual models and attention-based baseline systems for arbitrary-duration utterances.
Time-delay neural network, dual-stream network, text-independent speaker verification, global context.
## I Introduction
Automatic Speaker Verification (ASV) systems aim to determine whether a given utterance is from an enrolled speaker, and have been widely applied in user authentication, access control, multimedia forensics, and other areas [1, 2, 3]. Typically, an ASV system consists of two main components: a front-end that extracts low-dimensional discriminative speaker embeddings from variable-length utterances and a back-end that determines whether two embeddings are from the same speaker. With the development of deep neural networks (DNNs), the front-end has undergone a significant transformation from conventional probabilistic models [4, 5, 6] to DNN-based methods [7, 8, 9]. In particular, the x-vector [9] is a state-of-the-art architecture for speaker embedding. The x-vector is based on Time-Delay Neural Network (TDNN) [10] layers designed for end-to-end speech processing, where weight-sharing filters are employed to attend to the whole frequency axis and a time-delay strategy is applied to capture context between consecutive frames.
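In practice, a TDNN layer of this kind is commonly realized as a dilated 1D convolution over the frame axis, as in the PyTorch sketch below; the channel sizes and dilation are illustrative choices, not the exact x-vector recipe. Each output frame at time \(t\) sees the time-delayed context frames \(t-2\), \(t\), and \(t+2\).

```python
import torch
import torch.nn as nn

# Weights are shared across time; dilation implements the time delay.
tdnn_layer = nn.Conv1d(in_channels=80, out_channels=512,
                       kernel_size=3, dilation=2, padding=2)

x = torch.randn(4, 80, 200)        # (batch, filterbank dim, frames)
h = torch.relu(tdnn_layer(x))
print(h.shape)                     # torch.Size([4, 512, 200])
```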
Despite its great success, TDNN still has limitations. Due to the short-term stationarity of human speech, both local features and global context are essential for extracting robust speaker representations. However, a typical TDNN mainly focuses on local features but lacks global context fusion because of the small receptive field in each hidden layer. As a result, TDNN-based models are less resistant to abrupt noise and exhibit suboptimal performance in real-life applications, especially when facing long-duration utterances. A natural idea to strengthen the global context is to extend the depth of the TDNN. [11] introduces the residual connection [12] to construct an extremely deep TDNN that captures context information over a longer time span than the standard TDNN. In [13], the authors extend TDNN by inserting dense layers between each pair of its hidden layers. To prevent overfitting, [14] introduce dropouts [15] to TDNN and propose a carefully designed filter with varying temporal resolution for more powerful context representation. To further deepen the TDNN within an affordable parameter budget, [16] makes a trade-off between the width and the depth. However, a thin TDNN may yield poor speaker representations. To solve this problem, [17] proposes a split-transform-merge structure, which can help the deep TDNN with limited width learn more discriminative speaker representations. These improvements make remarkable progress in performance but also significantly increase complexity. Additionally, they do not consider the fusion of local and global features. To solve these problems, many techniques have been proposed, which are divided into two categories in this paper.
One technique is to enhance the filters with multi-scale information. [18] introduces the Res2Net [19] structure into TDNN, which inserts skip connections between each grouped filter to construct a different-scale receptive field and proposes the multi-layer feature aggregation to fuse local and global features, achieving state-of-the-art performance. To enhance the Res2Net, [20] proposes a context-aware filter. This method further divides each grouped filter applied in Res2Net into two different-scale filters: a normal receptive field filter for local features and a large receptive field filter for global context, which are combined using the dynamic integration strategy. Authors in [21] propose another type of dynamic filter, whose value is determined by an element-wise summation between the local features calculated using moving average and the global context calculated using global average pooling. Recently, [22] presents a variable-resolution filter, which employs a kernel selection mechanism to find the optimal receptive field. Although these enhanced filters can dynamically choose to focus on local features or global context as required, none |
2310.18882 | Differentiable Learning of Generalized Structured Matrices for Efficient
Deep Neural Networks | This paper investigates efficient deep neural networks (DNNs) to replace
dense unstructured weight matrices with structured ones that possess desired
properties. The challenge arises because the optimal weight matrix structure in
popular neural network models is obscure in most cases and may vary from layer
to layer even in the same network. Prior structured matrices proposed for
efficient DNNs were mostly hand-crafted without a generalized framework to
systematically learn them. To address this issue, we propose a generalized and
differentiable framework to learn efficient structures of weight matrices by
gradient descent. We first define a new class of structured matrices that
covers a wide range of structured matrices in the literature by adjusting the
structural parameters. Then, the frequency-domain differentiable
parameterization scheme based on the Gaussian-Dirichlet kernel is adopted to
learn the structural parameters by proximal gradient descent. On the image and
language tasks, our method learns efficient DNNs with structured matrices,
achieving lower complexity and/or higher performance than prior approaches that
employ low-rank, block-sparse, or block-low-rank matrices. | Changwoo Lee, Hun-Seok Kim | 2023-10-29T03:07:30Z | http://arxiv.org/abs/2310.18882v2 | # Differentiable Learning of Generalized Structured Matrices for Efficient Deep Neural Networks
###### Abstract
This paper investigates efficient deep neural networks (DNNs) to replace dense unstructured weight matrices with structured ones that possess desired properties. The challenge arises because the optimal weight matrix structure in popular neural network models is obscure in most cases and may vary from layer to layer even in the same network. Prior structured matrices proposed for efficient DNNs were mostly hand-crafted without a generalized framework to systematically learn them. To address this issue, we propose a generalized and differentiable framework to learn efficient structures of weight matrices by gradient descent. We first define a new class of structured matrices that covers a wide range of structured matrices in the literature by adjusting the structural parameters. Then, the frequency-domain differentiable parameterization scheme based on the Gaussian-Dirichlet kernel is adopted to learn the structural parameters by proximal gradient descent. Finally, we introduce an effective initialization method for the proposed scheme. Our method learns efficient DNNs with structured matrices, achieving lower complexity and/or higher performance than prior approaches that employ low-rank, block-sparse, or block-low-rank matrices.
## 1 Introduction
Deep Neural Networks (DNNs) for large language models (LLMs) (Vaswani et al., 2017; Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020) and vision tasks (Dosovitskiy et al., 2020; Touvron et al., 2021) have shown great success in various domains in recent years. The size of the DNNs, however, has increased on an extraordinary scale - up to 70 billion parameters in a single model (Zhao et al., 2023) - requiring an unprecedented amount of computing resources and energy to deploy DNN-based services. Fortunately, under certain conditions, the weight matrices of DNNs trained by stochastic gradient descent (SGD) naturally have well-defined preferred structures, such as low-rank matrices (Yaras et al., 2023; Huh et al., 2021; Gunasekar et al., 2017). However, identifying such structures to lower the effective complexity of weight matrices in recent models such as Transformers (Vaswani et al., 2017) remains a challenging problem. Also, it mostly relies on existing human-designed / hand-crafted structured matrices without a unified systematic approach. Hence prior works have focused on investigating new classes of structured matrices for DNNs (Li et al., 2015; Dao et al., 2022; Chen et al., 2022). Notably, each structured matrix is defined disjointly from other formats. For example, neither the block-sparse matrix format nor the low-rank matrix format of the same number of parameters is a subset of the other, and yet there is no unified representation that describes both well. Moreover, the structure description and the implementation complexity of the structured matrix are in non-differentiable discrete spaces.
In this paper, we investigate (locally) optimal structures of the weight matrices as well as a differentiable training method to learn them, attempting to answer the following two questions:
1. _Is there a universal format that represents a wide range of structured matrices?_
2. _Can the structure of such matrices be learned efficiently, if it exists?_
**Contributions.** Tackling the above two questions, we introduce a _generalized_ and _differentiable_ structured matrix format. The main contributions of this work can be summarized as follows.
1) We propose a **Generalized Block-low-rank** (GBLR) matrix format, which includes many important structures such as Low-Rank (LR), Block Sparse (BSP), and Block-low-rank (BLR) matrices under some practical conditions. The new structured matrix format consists of two types of parameters: one guiding the _structure_ of the matrix and the other specifying the _content_ or the values of matrix elements. We show that the LR, BSP and BLR formats are special cases of the GBLR matrix. We also show that the GBLR format is closed under interpolation between existing GBLR matrices in the structural parameter space, which we believe is strong evidence that the GBLR format is able to capture undiscovered structured matrix formats.
2) We introduce a _differentiable_ parameterization of the structural parameters - widths and locations of blocks - of the GBLR format. The structural parameters are defined in the _frequency_ domain, and are processed by the proposed **Gaussian-Dirichlet** (Gaudi) function followed by inverse Fast Fourier Transform (IFFT) to the format named _Gaudi-GBLR_. We show that the derivatives of the Gaudi function with respect to the structural parameters exist almost everywhere, even when the width is zero.
3) We propose a practical learning algorithm based on the proximal gradient descent to train compact neural networks with Gaudi-GBLR matrices. The proposed method is extensively evaluated on the Vision Transformers (ViT) (Dosovitskiy et al., 2020) and MLP-Mixer (Tolstikhin et al., 2021), outperforming prior approaches using hand-designed structured matrices.
## 2 Preliminaries
**Notation.** We use \(\odot\) to indicate the elementwise (Hadamard) product of two matrices. The imaginary unit is denoted by \(\imath=\sqrt{-1}\). The normalized \(\operatorname{sinc}\) function is defined by \(\operatorname{sinc}(x)=\frac{\sin\pi x}{\pi x}\) where \(\operatorname{sinc}(0):=1\). The (element-wise) floor function is denoted by \(\lfloor\cdot\rfloor\). The index of elements in a matrix or vector starts from zero (instead of one), following the convention of Cooley & Tukey (1965). Also, \(\mathcal{I}_{n}=\{0,1,\ldots,n\}\) denotes the index set from 0 to \(n\).
**Assumption.** For simplicity, we assume the weights are square matrices. Extension to rectangular matrix cases is discussed in Appendix A.3.
### Block-related Matrices.
**A block \(\mathbf{B}\in\mathbb{R}^{|R|\times|C|}\)** of a matrix \(\mathbf{W}\in\mathbb{R}^{n\times n}\) is a submatrix of \(\mathbf{W}\) with _consecutive cyclic_ row and column indices. For example, \(R\) can be \(\{n-2,n-1,0,1\}\) if \(|R|=4\). Also, we say two blocks \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\)**overlap** if they have shared elements of \(\mathbf{W}\).
Two well-known block-structured matrices are a _block sparse_ (BSP) matrix and a _block-low-rank_ (BLR) (Amestoy et al., 2015) matrix. Informally speaking, a matrix is block-sparse when non-zero elements are gathered in blocks and such blocks are sparse in the matrix. A block-low-rank matrix is composed of non-overlapping equally-partitioned low-rank blocks. Figure 1 illustrates the BSP and BLR matrices as well as our proposed generalized block-low-rank matrix, which we introduce in Section 3.1. We present the formal definitions of the block-sparse and block-low-rank matrices in Appendix A.1.
### Structured Matrix
We say a matrix \(\mathbf{W}\in\mathbb{R}^{n\times n}\) is **structured** if, for any \(\mathbf{x}\in\mathbb{R}^{n}\), the matrix-vector product (MVP) \(\mathbf{y}=\mathbf{W}\mathbf{x}\) requires significantly fewer multiplications than \(n^{2}\). For instance, LR, BSP,
Figure 1: Comparison of block-sparse, block-low-rank, and our proposed Generalized block-low-rank matrices.
and BLR matrices are structured because the number of multiplications for MVP is determined by their (low) rank or sparsity. Although a general _sparse_ matrix reduces the complexity of MVP to a sub-quadratic function of \(n\), it is excluded from our discussion of structured matrices because processing moderately sparse matrices (10\(\sim\)50% density) on conventional hardware such as GPUs does not reduce the actual run-time, due to the unstructured positions of the non-zero elements (Chen et al., 2022).
## 3 Proposed Method
We are interested in training a deep neural network (DNN) \(f\) under a _computational cost_ constraint:
\[\min_{f}\sum_{\mathbf{x},\mathbf{y}\sim\mathcal{D}}\mathcal{L}(f(\mathbf{x}),\mathbf{y})\quad \text{s.t.}\quad\mathrm{cost}(f)\leq B, \tag{1}\]
where the first term is the cross-entropy loss for a classification task at a data point \(\mathbf{x}\) and a label \(\mathbf{y}\), and the constraint \(\mathrm{cost}(f)\) is the number of multiplications to compute the neural network output \(f(\mathbf{x})\). Our method learns weight matrices of DNNs in a _generalized_ structured matrix format in a _differentiable_, end-to-end manner.
### Generalized Block-Low-Rank (GBLR) Matrix
We introduce a generalized structured matrix format that encompasses multiple popular structured matrices used in DNN training. The idea is that a block in a matrix can be expressed by a **sum of rank-1 blocks**, i.e., a rank-\(r\) block is a sum of \(r\) rank-1 blocks at the same position. In this manner, LR, BSP, and BLR matrices can be expressed under a unified framework (Theorem 1).
To be specific, our proposed structured matrix format is a generalized version of Block-Low-Rank (BLR) matrices. An \(n\)-by-\(n\)_Generalized Block-Low-Rank_ (GBLR) matrix \(\mathbf{W}\) is obtained by _overlapping_ multiple rank-1 blocks of _different_ sizes at _arbitrary_ locations, as depicted in Figure 2 (Left). The locations of the blocks as well as their element values are learned _simultaneously_ from training data without explicit size or location restrictions.
Suppose there are \(K\) blocks: \(\mathbf{B}_{1}\in\mathbb{R}^{w_{1}^{R}\times w_{1}^{C}},\mathbf{B}_{2}\in\mathbb{R}^{w _{2}^{R}\times w_{2}^{C}},\ldots,\mathbf{B}_{K}\in\mathbb{R}^{w_{K}^{R}\times w_{ K}^{C}}\). Each block \(\mathbf{B}_{k}\) has two parameter sets: 1) the _structural_ parameters that identify the _position_ of the block in the row and column index set \(\mathcal{I}_{n-1}=\{0,1,\ldots,n-1\}\), and 2) the _content_ parameters which specify the actual values of matrix elements.
**Configuration of structural parameters.** The position of a block is given by the indices of the rows and columns it occupies in the \(n\times n\) matrix. Hence, the placement of a rectangular block can
Figure 2: Left: An example of a GBLR matrix with 4 blocks. A block is generated from the _structural_ parameters \((w^{R},l^{R}),(w^{C},l^{C})\) and the _content_ parameters \((\mathbf{u},\mathbf{v})\), where \((w^{R},l^{R})\) and \((w^{C},l^{C})\) form binary masks \(\mathbf{m}_{(w^{R},l^{R})}\) and \(\mathbf{m}_{(w^{C},l^{C})}\), respectively. Note that overlapped regions can have a rank higher than one. Right: Efficient Matrix-Vector Product computations using cropped content parameters and structural parameters. The structural parameters locate the input and output indices/addresses to read and write.
be identified by four numbers: _width_ and _location_ in terms of the row or column indices. Accordingly, we use a _location_ parameter \(l\) and a _width_ parameter \(w\) as the structural parameters of a block. Figure 2 (Left) illustrates a block of size \(w^{R}\times w^{C}\) in an \(n\times n\) matrix at location \((l^{R},l^{C})\). The row (column) index set of a block is the sequence of numbers from \(l^{R}\) (\(l^{C}\)) to \(l^{R}+w^{R}\) (\(l^{C}+w^{C}\)) where the addition is a cyclic/modulo addition. For each block \(\mathbf{B}_{k}\) for \(k=1,\ldots,K\), we have four parameters: \((w^{R}_{k},l^{R}_{k})\) for the row and \((w^{C}_{k},l^{C}_{k})\) for the column. We use the notation \(\phi^{R}_{k}=(w^{R}_{k},l^{R}_{k})\) and \(\phi^{C}_{k}=(w^{C}_{k},l^{C}_{k})\) to represent the tuple of width and location for the row (\(\phi^{R}_{k}\)) and column (\(\phi^{C}_{k}\)).
Based on the structural parameter \(w^{C}_{k}\) and \(l^{C}_{k}\), one can construct an \(n\)-dimensional binary mask that has \(w^{C}_{k}\in\mathcal{I}_{n}\) consecutive ones starting from \(l^{C}_{k}\in\mathcal{I}_{n-1}\) in the cyclic order:
\[m_{\phi^{C}_{k}}[j]=m_{(w^{C}_{k},l^{C}_{k})}[j]=\begin{cases}1&\text{if }l^{C}_{k}\leq j+an<l^{C}_{k}+w^{C}_{k}\\ 0&\text{otherwise}\end{cases},\;j\in\mathcal{I}_{n-1},\;a\in\{0,1\}, \tag{2}\]
where \(a\in\{0,1\}\) is necessary to make the order cyclic. We call the mask in Eq. 2 the _boxcar_ mask since the non-zero elements are located consecutively. The boxcar mask is used to select \(w^{C}_{k}\) (cyclic) consecutive non-zero elements of an \(n\)-dimensional vector. The mask for the rows \(\mathbf{m}_{\phi^{R}_{k}}\) is obtained in the same way from \(w^{R}_{k}\) and \(l^{R}_{k}\).
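For concreteness, the cyclic boxcar mask of Eq. 2 is easy to realize in code. The following is a minimal illustrative sketch (the helper name is ours, not from the paper's released code):

```python
import torch

def boxcar_mask(n: int, width: int, loc: int) -> torch.Tensor:
    """Binary mask with `width` consecutive ones starting at `loc`,
    wrapping around cyclically as in Eq. 2."""
    j = torch.arange(n)
    # (j - loc) mod n < width  <=>  loc <= j + a*n < loc + width, a in {0, 1}
    return ((j - loc) % n < width).float()

# Example: n=8, width=3, location=6 wraps around the end of the vector.
print(boxcar_mask(8, 3, 6))  # tensor([1., 0., 0., 0., 0., 0., 1., 1.])
```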
**Configuration of content parameters.** To represent the values of a rank-1 block \(\mathbf{B}_{k}\), we use two \(n\)-dimensional vectors \(\mathbf{u}_{k}\) and \(\mathbf{v}_{k}\) as _content_ parameters along with the boxcar masks \(\mathbf{m}_{\phi^{R}_{k}}\), \(\mathbf{m}_{\phi^{C}_{k}}\). All these parameters \((\phi^{R}_{k},\phi^{C}_{k},\mathbf{u}_{k},\mathbf{v}_{k})\) are learned during the DNN training _simultaneously_. Since the boxcar masks \(\mathbf{m}_{\phi^{R}_{k}}\) and \(\mathbf{m}_{\phi^{C}_{k}}\) guide the location of the block in the \(n\times n\) matrix, \(\mathbf{u}_{k}\) and \(\mathbf{v}_{k}\) are element-wise multiplied with the boxcar masks:
\[\mathrm{ZeroPad}(\mathbf{B}_{k})=(\mathbf{m}_{\phi^{R}_{k}}\odot\mathbf{u}_{k})(\mathbf{m}_{ \phi^{C}_{k}}\odot\mathbf{v}_{k})^{T},\]
where the resulting \(n\times n\) matrix is a zero-padded block. Ideally, we expect the mask to expand / shrink and shift to find the right subset of the elements of a content parameter, while the content parameter updates the value of the elements selected by the mask.
Now we formally define the **Generalized Block-low-rank** (GBLR) format, which is the sum of \(K\) zero-padded blocks:
\[\mathbf{W}=\sum_{k=1}^{K}(\mathbf{m}_{\phi^{R}_{k}}\mathbf{m}^{T}_{\phi^{C}_{k}})\odot(\bm {u}_{k}\mathbf{v}^{T}_{k})=\sum_{k=1}^{K}\left(\mathbf{m}_{\phi^{R}_{k}}\odot\mathbf{u}_{k }\right)\left(\mathbf{m}_{\phi^{C}_{k}}\odot\mathbf{v}_{k}\right)^{T}. \tag{3}\]
A GBLR matrix is associated with an average width \(\bar{w}=\frac{1}{2K}\sum_{k=1}^{K}(w^{R}_{k}+w^{C}_{k})\).
**Definition 1**.: _Let \(\mathsf{GBLR}(n,K,s)\) be the set of matrices obtained by Eq. 3 for the average width less than or equal to \(s\), i.e., \(\bar{w}=\frac{1}{2K}\sum_{k=1}^{K}(w^{R}_{k}+w^{C}_{k})\leq s\). A matrix \(\mathbf{W}\) is an \((n,K,s)\)-GBLR if \(\mathbf{W}\in\mathsf{GBLR}(n,K,s)\)._
We use the notation \(\mathbf{\phi}(\mathbf{W}):=(\mathbf{w}(\mathbf{W}),\mathbf{l}(\mathbf{W}))\) to indicate the collection of the structural parameters of \(\mathbf{W}\), where \(\mathbf{w}(\mathbf{W})=\{w^{R}_{1},w^{C}_{1},w^{R}_{2},w^{C}_{2},\ldots,w^{R}_{K},w^{C }_{K}\}\) and \(\mathbf{l}(\mathbf{W})=\{l^{R}_{1},l^{C}_{1},l^{R}_{2},l^{C}_{2},\ldots,l^{R}_{K},l^{C }_{K}\}\). We simply use \(\mathbf{\phi}:=\mathbf{\phi}(\mathbf{W})\) if \(\mathbf{W}\) is clearly inferred in the context.
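To make Eq. 3 concrete, the following sketch assembles a (dense) GBLR matrix from its structural and content parameters; it is illustrative only, since an efficient implementation would never materialize the full \(n\times n\) matrix (see the Efficiency paragraph below):

```python
import torch

def boxcar(n, w, l):
    return ((torch.arange(n) - l) % n < w).float()

def gblr_matrix(n, widths_r, locs_r, widths_c, locs_c, U, V):
    """Eq. 3: W = sum_k (m_r * u_k)(m_c * v_k)^T over K rank-1 blocks.
    U, V are (K, n) content parameters; widths/locs are length-K lists."""
    W = torch.zeros(n, n)
    for k in range(len(widths_r)):
        mu = boxcar(n, widths_r[k], locs_r[k]) * U[k]
        mv = boxcar(n, widths_c[k], locs_c[k]) * V[k]
        W += torch.outer(mu, mv)
    return W

K, n = 4, 16
U, V = torch.randn(K, n), torch.randn(K, n)
# These widths/locations tile the diagonal, so the result happens to be a
# block-diagonal (hence block-sparse) instance of a (16, 4, 4)-GBLR matrix.
W = gblr_matrix(n, [4]*K, [0, 4, 8, 12], [4]*K, [0, 4, 8, 12], U, V)
```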
**Efficiency.** Once the structural parameters are fixed, it is unwise to store and use two \(n\)-dimensional content parameters for each block \(\mathbf{B}_{k}\) because only \(w^{R}_{k}\) elements of \(\mathbf{u}_{k}\) and \(w^{C}_{k}\) elements of \(\mathbf{v}_{k}\) are non-zero according to the boxcar masks in eq. 3. Hence, one can store and use the _cropped_ content parameters \(\mathbf{u}_{l^{R}:l^{R}+w^{R}}\) and \(\mathbf{v}_{l^{C}:l^{C}+w^{C}}\) for MVP between an input \(\mathbf{x}\in\mathbb{R}^{n}\) (which can be also cropped from \(l^{C}\) to \(l^{C}+w^{C}\)) and a block \(\mathbf{B}\), as described below and in Figure 2 (Right):
\[\mathrm{ZeroPad}(\mathbf{B})\mathbf{x} =(\mathbf{m}_{(w^{R},l^{R})}\odot\mathbf{u})(\mathbf{m}_{(w^{C},l^{C})}\odot \mathbf{v})^{T}\mathbf{x}\] \[=\mathrm{ZeroPad}\left(\mathbf{u}_{l^{R}:l^{R}+w^{R}}(\mathbf{v}^{T}_{l^ {C}:l^{C}+w^{C}}\mathbf{x}_{l^{C}:l^{C}+w^{C}})\right),\]
which requires only \(w^{R}+w^{C}\) multiplications. Hence, the number of multiplications (denoted by FLOPs) for multiplying \(\mathbf{W}\in\mathsf{GBLR}(n,K,s)\) with \(\mathbf{x}\in\mathbb{R}^{n}\) is bounded by \(2Ks\):
\[\text{FLOPs}=\sum_{k=1}^{K}(w^{R}_{k}+w^{C}_{k})=2K\bar{w}\leq 2Ks. \tag{4}\]
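A sketch of this cropped-parameter MVP follows; each block contributes exactly \(w^{R}+w^{C}\) multiplications, matching Eq. 4 (the block storage format and names are our assumptions):

```python
import torch

def gblr_mvp(x: torch.Tensor, blocks) -> torch.Tensor:
    """y = Wx for a GBLR matrix stored as cropped blocks. Each block is a
    tuple (l_r, u_crop, l_c, v_crop) and costs w_r + w_c multiplications."""
    n = x.shape[0]
    y = torch.zeros(n)
    for l_r, u_crop, l_c, v_crop in blocks:
        rows = (l_r + torch.arange(len(u_crop))) % n  # cyclic row indices
        cols = (l_c + torch.arange(len(v_crop))) % n  # cyclic column indices
        s = v_crop @ x[cols]    # w_c multiplications
        y[rows] += u_crop * s   # w_r multiplications
    return y

# One 3x4 block whose rows start at index 6 and columns at index 2:
y = gblr_mvp(torch.randn(8), [(6, torch.randn(3), 2, torch.randn(4))])
```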
**Expressiveness.** Low-rank (LR), block sparse (BSP), and block-low-rank (BLR) matrices are popular structured matrices in the DNN literature, and they are special cases of the GBLR matrix under mild conditions. Proofs and formal definitions of LR, BSP, and BLR matrices are in Appendix A.1.
**Theorem 1**.: _Let \(n,K,s\) be positive integers satisfying \(Ks\geq n\). Then any \(n\)-by-\(n\) rank-\(\frac{Ks}{n}\) matrices and \((n,\frac{K}{s},s)\)-block-sparse matrices are \((n,K,s)\)-GBLR. Also, any \((n,K,s,1)\)-block-low-rank matrices are \((n,K,s)\)-GBLR if \(K=(n/s)^{2}\)._
More importantly, a new structured matrix obtained by interpolating the structural parameters of two \((n,K,s)\)-GBLR matrices is still \((n,K,s)\)-GBLR, based on Theorem 2. Therefore, a new type of structured matrices can be derived from a set of GBLR matrices.
**Theorem 2** (Closed under structural interpolation).: _Given two \(n\times n\) matrices \(\mathbf{W},\mathbf{Z}\in\mathsf{GBLR}(n,K,s)\), and \(\alpha\in[0,1]\), consider the following combination between the structural parameters:_
\[\mathbf{w}^{\prime}=\lfloor\alpha\mathbf{w}(\mathbf{W})+(1-\alpha)\mathbf{w}(\mathbf{Z})\rfloor, \quad\mathbf{l}^{\prime}=\lfloor\alpha\mathbf{l}(\mathbf{W})+(1-\alpha)\mathbf{l}(\mathbf{Z})\rfloor.\]
_A matrix \(\mathbf{Y}\) generated by Eq. 3 with the structural parameter \((\mathbf{w}^{\prime},\mathbf{l}^{\prime})\) is a \((n,K,s)\)-GBLR matrix, \(\mathbf{Y}\in\mathsf{GBLR}(n,K,s)\)._
Theorem 1 and Theorem 2 tell us that \((n,K,s)\)-GBLR matrices cover a wide range of popular existing structured matrices and also undiscovered ones. In the following section, we introduce a differentiable tool to find/learn structured matrices in \(\mathsf{GBLR}(n,K,s)\).
### Gaussian-Dirichlet (Gaudi) Mask
The matrix structure in the GBLR format is determined by the width and location parameters. We aim to extract/learn these structural parameters from training data using stochastic gradient descent. In order to do so, the non-differentiability of the boxcar mask parameters needs to be handled.
To tackle this issue, we introduce a _Dirichlet-kernel-based_ parameterization of the boxcar to _explicitly_ parameterize the width \(w\) and location \(l\) in the expression. Consider a boxcar mask of length \(n\), width \(w\), and location \(l\). Let \(\hat{\mathbf{m}}_{(w,l)}\) be the discrete Fourier transform (DFT) of the mask \(\mathbf{m}_{(w,l)}\). The frequency-domain representation of the boxcar mask is given by
\[\hat{m}_{(w,l)}[k]=e^{-2\pi\imath\frac{k}{n}l}\hat{m}_{(w,0)}[k]=e^{-2\pi\imath\frac{k}{n}l}\sum_{j=0}^{w-1}e^{-2\pi\imath j\frac{k}{n}}=e^{-2\pi\imath\frac{k}{n}l}w\frac{\operatorname{sinc}\left(w\frac{k}{n}\right)}{\operatorname{sinc}\left(\frac{k}{n}\right)}e^{\pi\imath k\left(\frac{1-w}{n}\right)}=e^{-2\pi\imath\frac{k}{n}l}d_{w}[k], \tag{5}\]
where \(d_{w}[k]:=w\frac{\operatorname{sinc}\left(w\frac{k}{n}\right)}{\operatorname{sinc}\left(\frac{k}{n}\right)}e^{\pi\imath k\left(\frac{1-w}{n}\right)}\) is the _Dirichlet kernel of order \(n\)_ (Bruckner et al., 1997) in the discrete domain \(\mathcal{I}_{n-1}=\{0,1,\ldots,n-1\}\).
Furthermore, we propose a smooth version of the Dirichlet-kernel-based mask, obtained by convolving the time-domain boxcar mask with a Gaussian-shaped function. In the frequency domain, this amounts to element-wise multiplying the Dirichlet kernel by the Gaussian function \(g_{\sigma}[k]=\exp\left(-\frac{k^{2}}{2\sigma^{2}}\right)\) with standard deviation \(\sigma>0\). We call the resulting function the **Gaussian-Dirichlet** (Gaudi) function \(\mathbf{d}_{w}^{\sigma}\):
\[\mathrm{d}_{w}^{\sigma}[k]:=g_{\sigma}[k]\cdot\mathrm{d}_{w}[k],\qquad\tilde{\mathbf{m}}_{(w,l)}^{\sigma}:=\mathrm{IDFT}\left(\mathbf{d}_{w}^{\sigma}\odot e^{-2\pi\imath\frac{\mathbf{k}}{n}l}\right), \tag{6}\]
where \(\mathbf{k}=[0,1,\ldots,n-1]\). We call the mask \(\tilde{\mathbf{m}}^{\sigma}_{(w,l)}\) generated by the Gaudi function the **Gaudi mask**, where the parameter \(\sigma\) controls the smoothness. Note that as \(\sigma\rightarrow\infty\), the Gaudi mask converges to the boxcar mask (Lemma 5). Figure 3 visualizes the time-domain Gaudi mask approaching the boxcar mask.
Figure 3: Comparison between Boxcar mask and Gaudi masks in the time domain with different smoothing factors \(\sigma\). The Gaudi mask converges to the Boxcar mask as \(\sigma\) grows.
A useful property of the Gaudi mask is that one can obtain exact derivatives with respect to the width parameter, even when the width is zero. To show it, we relax the domain of widths and locations to the continuous interval \([0,n]\) (see Appendix A.4).
**Theorem 3**.: _Let \(n<\infty\) be a finite positive integer. For any \(\sigma\in(0,\infty]\) and \(w,l\in[0,n]\), the derivatives of the Gaudi mask \(\tilde{\mathbf{m}}^{\sigma}_{(w,l)}\) with respect to \(w\) and \(l\) are bounded almost everywhere:_
\[\left\|\frac{\partial\tilde{\mathbf{m}}^{\sigma}_{(w,l)}}{\partial w}\right\|_{2} <\infty,\quad\left\|\frac{\partial\tilde{\mathbf{m}}^{\sigma}_{(w,l)}}{\partial l }\right\|_{2}<\infty.\]
Especially when \(w=0\), the derivative with respect to \(w\) is neither divergent nor zero.
**Corollary 4**.: _For any \(l\in[0,n]\) and \(\sigma\in(0,\infty]\), the norm of the derivative of the Gaudi mask \(\tilde{\mathbf{m}}^{\sigma}_{(w,l)}\) with respect to \(w\) at \(w=0\) is well-defined and greater than zero, i.e., \(0<\left\|\frac{\partial\tilde{\mathbf{m}}^{\sigma}_{(w,l)}}{\partial w}\right\|_{ 2}<\infty\)._
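A minimal sketch of the Gaudi mask construction (Eqs. 5-6) is given below; it builds the frequency response from the Dirichlet kernel, applies the Gaussian smoothing and the location shift, and returns to the time domain by an inverse FFT. The implementation details are our assumptions:

```python
import torch

def gaudi_mask(n: int, w: torch.Tensor, l: torch.Tensor, sigma: float) -> torch.Tensor:
    """Time-domain Gaudi mask (Eqs. 5-6); `w` and `l` stay continuous
    tensors so gradients with respect to width and location flow through."""
    k = torch.arange(n, dtype=torch.float64)
    dirichlet = w * torch.special.sinc(w * k / n) / torch.special.sinc(k / n)
    phase = torch.exp(1j * torch.pi * k * (1 - w) / n)  # phase term of Eq. 5
    gaussian = torch.exp(-k ** 2 / (2 * sigma ** 2))    # Gaudi smoothing
    shift = torch.exp(-2j * torch.pi * k * l / n)       # shift to location l
    return torch.fft.ifft(gaussian * dirichlet * phase * shift).real

w = torch.tensor(4.0, requires_grad=True)
m = gaudi_mask(16, w, torch.tensor(2.0), sigma=100.0)  # ~ boxcar of width 4 at 2
m.sum().backward()  # the gradient w.r.t. the width exists, per Theorem 3
```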
To allow learning the mask structural parameters in a differentiable manner, we plug the Gaudi mask (Eq. 6) into Eq. 3 as the mask of GBLR matrices, yielding the **Gaussian-Dirichlet GBLR** (Gaudi-GBLR) format with parameters \(\mathbf{\theta}=(\mathbf{\phi},\mathbf{U},\mathbf{V},\sigma)\):
\[\mathbf{W^{\theta}}=\mathbf{W^{(\phi,U,V,\sigma)}}=\sum_{k=1}^{K}\left(\tilde{\mathbf{m}} ^{\sigma}_{\phi_{k}^{R}}\odot\mathbf{u}_{k}\right)\left(\tilde{\mathbf{m}}^{\sigma}_{ \phi_{k}^{C}}\odot\mathbf{v}_{k}\right)^{T}. \tag{7}\]
In practice, one can use a small \(\sigma\approx 1\) at the beginning of training, which updates more (unmasked) content parameters than ultimately necessary, and then gradually increase \(\sigma\) to adjust the content parameters under a tighter mask. Since the purpose of the Gaudi mask is to learn the structural parameters of the GBLR matrix by _Gradient Descent_, Gaudi-GBLR matrices are replaced by GBLR matrices once the structural parameters are found/learned. To compute the MVP using GBLR matrices, one can use the cropped content parameters and inputs, as discussed in the Efficiency paragraph of Section 3.1, without constructing masks at all. Hence, during inference, there is no overhead for computing Gaudi masks.
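Combining the Gaudi mask sketch above with Eq. 7 yields a fully differentiable weight construction; again this is an illustrative sketch rather than the authors' implementation:

```python
import torch

def gaudi_gblr_weight(widths_r, locs_r, widths_c, locs_c, U, V, sigma, n):
    """Eq. 7: a Gaudi-GBLR weight assembled from K rank-1 blocks, reusing
    `gaudi_mask` from the sketch above; all parameters receive gradients."""
    W = torch.zeros(n, n, dtype=torch.float64)
    for k in range(U.shape[0]):
        m_r = gaudi_mask(n, widths_r[k], locs_r[k], sigma)
        m_c = gaudi_mask(n, widths_c[k], locs_c[k], sigma)
        W = W + torch.outer(m_r * U[k], m_c * V[k])
    return W

K, n = 4, 16
widths = torch.full((K,), 4.0, requires_grad=True)  # continuous widths
locs = torch.tensor([0.0, 4.0, 8.0, 12.0], requires_grad=True)
U = torch.randn(K, n, dtype=torch.float64)
V = torch.randn(K, n, dtype=torch.float64)
W = gaudi_gblr_weight(widths, locs, widths, locs, U, V, sigma=10.0, n=n)
```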
### Learning Gaudi-GBLR for Efficient Neural Networks
We now introduce an algorithm to learn the structural parameters of Gaudi-GBLR matrices. The goal of the learning algorithm is to identify Gaudi-GBLR matrix structures (i.e., their parameters) that allow computationally efficient DNNs. Our discussion is centered around a two-layered multi-layer perceptron (MLP) for ease of understanding. However, the technique can be applied to general DNNs that incorporate linear layers.
Now let us consider a two-layered MLP \(\mathbf{f^{\theta}}\) with a Gaudi-GBLR weight matrix \(\mathbf{W^{\theta}}\): \(\mathbf{f^{\theta}}(\mathbf{x})=\mathbf{h}^{T}a(\mathbf{W^{\theta}}\mathbf{x}+\mathbf{b})\). We initially relax the domain of \(w\in\mathcal{I}_{n}\) and \(l\in\mathcal{I}_{n-1}\) of the Gaudi mask to the real-valued space \(w,l\in[0,n]\), as discussed in Appendix A.4. Due to the property of Gaudi-GBLR matrices in Eq. 4, the computational cost constraint on the DNN \(\mathbf{f^{\theta}}\) in Problem (1) can be replaced by a constraint on the sum of the width parameters of \(\mathbf{W^{\theta}}\). Specifically, we find width parameters of \(\mathbf{W^{\theta}}\) satisfying \(\|\mathbf{w}\|_{1}\leq B\) since \(\sum_{k=1}^{K}w_{k}=\|\mathbf{w}\|_{1}\). To solve the problem with Gradient Descent, we relax this \(\ell_{1}\)-norm constrained problem to an _unconstrained_ one using the method of Lagrange multipliers:
\[\min_{\mathbf{\theta}}\sum_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}\mathcal{L}(\mathbf{f^{ \theta}}(\mathbf{x}),\mathbf{y})+\lambda\|\mathbf{w}\|_{1},\quad\lambda\geq 0. \tag{8}\]
The resulting computational budget is implicitly constrained by a hyperparameter \(\lambda\geq 0\).
Theorem 3 guarantees the derivatives of the widths and locations of the Gaudi-GBLR matrix in \(\mathbf{f^{\theta}}\) can be obtained with any positive smoothing parameter \(\sigma>0\) so that we can safely learn the
parameters in the continuous domain \([0,n]\). Specifically, we update the width parameter \(\mathbf{w}\) in the \(\ell_{1}\)-norm term in Problem (8) by Proximal Gradient Descent (PGD):
\[\mathbf{w}_{t+1}=S_{\eta\lambda}(\mathbf{w}_{t}-\eta\nabla\mathcal{L}(\mathbf{f}^{\mathbf{\theta }}(\mathbf{x}),\mathbf{y})), \tag{9}\]
where \(S_{\mu}(x)=\begin{cases}\mathrm{sign}(x)\cdot(|x|-\mu)&\text{if }|x|>\mu\\ 0&\text{otherwise}\end{cases}\) is the element-wise soft shrinkage function. In practice, the gradient is calculated with an adaptive optimizer such as Adam (Kingma & Ba, 2014) or AdamW (Loshchilov & Hutter, 2017). The overall process is summarized in Algorithm 1. Although Problem (8) is non-linear, our experimental results show that PGD can attain good local optima with an adaptive optimizer and the initialization method we propose in Appendix A.2. A practical learning method for the width and location parameters defined in the discrete spaces \(\mathcal{I}_{n}\) and \(\mathcal{I}_{n-1}\) is discussed in Appendix A.5.
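In code, one optimization iteration amounts to an adaptive gradient step on all parameters followed by the soft-shrinkage proximal map applied only to the widths. A minimal sketch, assuming the optimizer step has already been taken:

```python
import torch

def proximal_width_step(widths: torch.Tensor, lr: float, lam: float) -> None:
    """Proximal map of Eq. 9 for the l1 penalty on widths: soft shrinkage
    with threshold lr * lam, applied in place after optimizer.step()."""
    with torch.no_grad():
        mu = lr * lam
        widths.copy_(torch.sign(widths) * torch.clamp(widths.abs() - mu, min=0.0))

# Inside a training loop (AdamW assumed, per the paper):
#   loss.backward(); optimizer.step(); proximal_width_step(width_param, lr, lam)
```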
## 4 Experiments
We evaluate our proposed method by replacing the weight matrices of Vision Transformers (ViT) (Dosovitskiy et al., 2020) and MLP-Mixer (Tolstikhin et al., 2021) with Gaudi-GBLR matrices. For the experiment, we set the number of blocks \(K\) equal to the number of columns of the matrix, \(n\). We also evaluate alternative schemes for comparison, in which the weights are replaced by popular hand-designed structured matrix formats such as Low-Rank (LR), Block-Sparse-plus-Low-Rank (BSP-LR), and Block-low-rank (BLR). For LR matrices, we use singular value decomposition to find the best rank-\(s\) approximation of the given matrix for a predefined rank \(s\). Pixelfly (Chen et al., 2022) and Monarch (Dao et al., 2022) are schemes that use BSP-LR and BLR matrices, respectively. We set the structural parameters of these alternative schemes to exhibit similar computational costs for MVP compared to our proposed scheme. Note that the structural parameter sets for LR, BSP-LR, and BLR do not change across different layers in the neural network. We use FLOPs to denote the number of multiplications, and use 8 NVIDIA A40 GPUs in our experiments. The detailed experimental settings are described in Appendix A.6.
### Fine-tuning Results
We use the _pre-trained_ weights of the ViT-Base on ImageNet and initialize the parameters of the Gaudi-GBLR matrices by Algorithm 2 in Section A.2. The ViTs with Gaudi-GBLR matrices were fine-tuned on ImageNet (Russakovsky et al., 2015). During the initialization, we set the computational budget of all matrices in the network the same. For a fair comparison, the same set of hyperparameters was used throughout our fine-tuning experiments.
The highest accuracy is achieved by Gaudi-GBLR in ViT-Base with a patch size of \(16\times 16\) on the ImageNet validation dataset after fine-tuning it for 35 epochs. Figure 4 shows that Gaudi-GBLR preserves the accuracy well when the complexity is reduced to 30% of the original 'Dense' model which does not use structured matrices (its FLOPs count is normalized to 1). The other hand-designed approaches exhibit more significant accuracy degradations for the same complexity reduction. Overall, Gaudi-GBLR strikes better Accuracy-FLOPs trade-offs than LR or Monarch approaches. The higher accuracy for a similar FLOPs count quantifies the gain from the learned structured matrices.
### Training From Scratch
We train ViTs (Dosovitskiy et al., 2020) and MLP-Mixers (Tolstikhin et al., 2021) with structured weight matrices on CIFAR-10&100 (Krizhevsky et al., 2009) by Algorithm 1 from randomly-initialized content parameters. We set \(\sigma=1\) for the first epoch and gradually increase it to \(100\) until
Figure 4: ImageNet accuracy after fine-tuning ViT-Base weights replaced with structured weight matrices. Dense: the original ViT-Base model.
the training process ends. In Figure 5, we study the accuracy-FLOPs trade-off using CIFAR10/100 datasets when the models are trained from scratch (i.e., not fine-tuned from pre-trained weights). As in the ImageNet fine-tuning experiment, Gaudi-GBLR achieves superior accuracy-FLOPs trade-offs outperforming the other hand-designed structured matrices.
### Analysis on Learned Gaudi-GBLR Matrices
In this section, we study the learned Gaudi-GBLR matrices in terms of computational budget allocation and mask patterns. The analysis is based on ViT-Base Gaudi-GBLR matrices trained on ImageNet from scratch by following the same \(\sigma\) adaptation rule used in the CIFAR10/100 experiments. The accuracy and FLOPs of the ViT-Base with Gaudi-GBLR are reported in Table 1.
**Learned Computational Budget Allocation.**
The proposed learning framework automatically allocates the computational budget to all linear layers of ViT-Base during the training process given by Algorithm 1. The algorithm finds a well-balanced (not necessarily equal) allocation for each matrix to meet the overall cost budget while minimizing the loss function. As a result, Gaudi-GBLR weight matrices in the network have unequal widths, requiring different FLOPs for each MVP. Table 2 summarizes the min/max/average FLOPs statistics collected for different types of linear layers. Within an Attention layer, the weights for _Values_ have the highest budget whereas _Queries_ and _Keys_ use a smaller budget. The smallest layer uses only about 4,200 FLOPs for an MVP involving a \(768\times 768\) matrix and a \(768\times 1\) vector. The FLOPs assigned to the linear layers of channel MLPs (_MLP-FC1_ and _MLP-FC2_ in Table 2) vary significantly. Although the weight matrices used in the MLPs are \(4\times\) larger than the ones used for _Value_, the _MLP-FC2_-type layers use \(7.96\times\) more FLOPs than the _Value_-type layers on average.
| Model | Acc. (%) | GFLOPs |
| --- | --- | --- |
| ViT-Base | 78.57 | 17.2 |
| w/ Gaudi-GBLR | 78.51 | 5.65 |

Table 1: ImageNet accuracies and GFLOPs on a \(224\times 224\) image of ViT-Base models trained from scratch with dense matrices and Gaudi-GBLR matrices.
Figure 5: Accuracy-Cost trade-off of models trained from scratch on CIFAR-10/100 dataset.
Figure 6: Mask patterns of weights for Query, Key, Value, FC1 and FC2 in Layer 0, 5, and 11 (out of \(0\sim 11\)). The brighter, the more overlapped blocks. The numbers below indicate the rank of the matrix (max: 768).
**Visualization.** Figure 6 visualizes the locations of the blocks in exemplary Gaudi-GBLR weight matrices of ViT-Base trained from scratch. We select the first, middle, and the last layers of different types: linear layers for Query, Key, Values in Attention modules, and two linear layers (FC1 and FC2) in MLPs. Bright colors in Figure 6 highlight regions where masks are overlapped. The rank of each weight matrix is marked under each visualized mask pattern. Interestingly, the resulting matrix is neither BSP nor BLR. It is observed that blocks are concentrated in a small number of regions. We believe this is related to the Multi-Head (Vaswani et al., 2017) scheme of the ViT. Each weight matrix of an attention layer is a collection of weights for multiple heads. It is expected that some heads have more significant impacts on the output while others may not contribute as much. Hence multiple blocks are allocated to regions that correspond to those heads. Notice the rank of matrices obtained from the GBLR framework differs significantly across different layers and matrix types (Values, Queries, and Keys).
## 5 Related Works
**Structured Matrices for DNNs.** Prior works have identified that the weights have simple structures under certain conditions (Yaras et al., 2023; Huh et al., 2021; Gunasekar et al., 2017). However, explaining the structure of every weight matrix in practical DNNs such as Transformers (Vaswani et al., 2017) is still a challenging problem. Hence, prior works have focused on manually designing suitably structured matrices for DNNs. Hsu et al. (2022) used weighted low-rank decomposition for the weights of language models. Butterfly matrices (Li et al., 2015; Dao et al., 2019), inspired by Fast Fourier Transform (FFT), were adopted in the form of Block-Sparse (BSP) format (Pixelfly) by Chen et al. (2022), and also in the form of Block-low-rank (BLR) format (Monarch) by Dao et al. (2022). Unlike these prior works that rely on manual designs, our method learns the structure of weight matrices from the training data by stochastic gradient descent.
**Mask Learning.** Masking is a popular technique to prune neurons or activations of DNNs. Since a mask for neuron pruning/selection is a non-differentiable binary vector, its gradient is zero almost everywhere and undefined at the transitions. Jang et al. (2016) and Maddison et al. (2016) propose alternative distributions close to the Bernoulli distribution, which admit continuous gradients. Movement Pruning (Sanh et al., 2020) utilizes the Straight-Through Estimator (STE) (Hinton, 2012; Bengio et al., 2013; Zhou et al., 2016) to pass the gradient through the Top-\(k\) function. Lin et al. (2017) adopt deep reinforcement learning to select input-dependent masks. On the contrary, our mask design solves the missing-gradient problem by a Gaudi-function-based parameterization in the frequency domain, without using a surrogate or approximated gradient.
**One-shot Neural Architecture Search.** Neural Architecture Search (NAS) (Zoph and Le, 2016; Liu et al., 2018) seeks the optimal neural network structures from training data. To improve search efficiency, Pham et al. (2018); Liu et al. (2018) adopt the one-shot NAS technique that selects sub-network candidates from a super-network. Our method also falls into a similar category of finding a small-sized neural network in the scope of a structured matrix format while gradually converging to a smaller structure from a super-network given by the baseline network.
**Frequency-domain Learning.** DiffStride (Riad et al., 2022) learns the stride of the _pooling_ operation for images by cropping a rectangular region of the frequency response of an image, using an approximated boxcar mask to make the operation differentiable. Although DiffStride shares similar components with Gaudi masks, the fundamental difference is in the design of the mask: we parameterize the mask in the _frequency_ domain, where widths and locations admit exact gradients.
## 6 Conclusion
We propose a generalized and differentiable framework for learning structured matrices for efficient neural networks by gradient descent. We introduce a new generalized format of structured matrices and parameterize the structure in the frequency domain via the Gaussian-Dirichlet (Gaudi) function, which has a well-defined gradient. Effective learning algorithms are provided for our framework, showing the flexibility and differentiability needed to find expressive and efficient structures from training data in an end-to-end manner. Evaluation results show that the proposed framework provides more efficient and accurate neural network models than other popular hand-designed structured matrices.
## 7 Reproducibility Statement
The authors make the following efforts for reproducibility: 1) We attach the complete source code used in our experiment as supplemental materials, 2) we provide the detailed settings and hyperparameters in Section 4, A.5, and A.6, and 3) the proofs of all theorems, lemmas, and corollaries are presented in Section A.1.
## Acknowledgment
We thank Sara Shoouri, Pierre Abillama, and Andrea Bejarano for the insightful discussions and feedback on the paper.
|
2307.14322 | Modeling Inverse Demand Function with Explainable Dual Neural Networks | Financial contagion has been widely recognized as a fundamental risk to the
financial system. Particularly potent is price-mediated contagion, wherein
forced liquidations by firms depress asset prices and propagate financial
stress, enabling crises to proliferate across a broad spectrum of seemingly
unrelated entities. Price impacts are currently modeled via exogenous inverse
demand functions. However, in real-world scenarios, only the initial shocks and
the final equilibrium asset prices are typically observable, leaving actual
asset liquidations largely obscured. This missing data presents significant
limitations to calibrating the existing models. To address these challenges, we
introduce a novel dual neural network structure that operates in two sequential
stages: the first neural network maps initial shocks to predicted asset
liquidations, and the second network utilizes these liquidations to derive
resultant equilibrium prices. This data-driven approach can capture both linear
and non-linear forms without pre-specifying an analytical structure;
furthermore, it functions effectively even in the absence of observable
liquidation data. Experiments with simulated datasets demonstrate that our
model can accurately predict equilibrium asset prices based solely on initial
shocks, while revealing a strong alignment between predicted and true
liquidations. Our explainable framework contributes to the understanding and
modeling of price-mediated contagion and provides valuable insights for
financial authorities to construct effective stress tests and regulatory
policies. | Zhiyu Cao, Zihan Chen, Prerna Mishra, Hamed Amini, Zachary Feinstein | 2023-07-26T17:41:51Z | http://arxiv.org/abs/2307.14322v2 | # Modeling Inverse Demand Function with Explainable Dual Neural Networks
###### Abstract
Financial contagion has been widely recognized as a fundamental risk to the financial system. Particularly potent is price-mediated contagion, wherein forced liquidations by firms depress asset prices and propagate financial stress, enabling crises to proliferate across a broad spectrum of seemingly unrelated entities. Price impacts are currently modeled via _exogenous_ inverse demand functions. However, in real-world scenarios, only the initial shocks and the final equilibrium asset prices are typically observable, leaving actual asset liquidations largely obscured. This missing data presents significant limitations to calibrating the existing models. To address these challenges, we introduce a novel dual neural network structure that operates in two sequential stages: the first neural network maps initial shocks to predicted asset liquidations, and the second network utilizes these liquidations to derive resultant equilibrium prices. This data-driven approach can capture both linear and non-linear forms without pre-specifying an analytical structure; furthermore, it functions effectively even in the absence of observable liquidation data. Experiments with simulated datasets demonstrate that our model can accurately predict equilibrium asset prices based solely on initial shocks, while revealing a strong alignment between predicted and true liquidations. Our explainable framework contributes to the understanding and modeling of price-mediated contagion and provides valuable insights for financial authorities to construct effective stress tests and regulatory policies.
Explainable Machine Learning Inverse Demand Function Financial Contagion Asset Liquidation
## 1 Introduction
The interconnections among financial institutions can propagate and amplify shocks during financial crises [1, 2], the risks of which are often unexpected _a priori_. One striking instance is the 2007 subprime crisis, where subprime mortgage backed securities - initially perceived as a minor asset class - unleashed a wave of catastrophic economic losses and incited a global recession [3]. This progression, wherein exogenous shocks snowball into substantial losses, is conceptualized as contagion effects. Among the various contagion types, price-mediated contagion is notably potent: When encountering external shocks, firms may be forced to liquidate portions of their holdings, which depresses the asset's price and imposes financial stress on all entities possessing the same asset. As these distressed institutions strive to stabilize their balance sheets by liquidating other assets, this behavior becomes self-reinforcing throughout broader financial markets [4]. Considering price-mediated contagion permits the crisis to spread through firms that may not even have direct contractual relations, it can ripple across a wide range of seemingly unrelated entities and magnify the
overall financial damage [5; 6]. Thus, the development of models that accurately depict how adverse shocks spur firms to liquidate assets, and how such actions subsequently affect prices, is of paramount importance in mitigating systemic risk within financial markets.
To measure the impact of asset sales on their prices, existing studies propose "inverse demand functions" to mathematically model the relationship between trading activities and resultant prices. However, when examining price-mediated contagion via these functions, the literature presents two significant gaps. First, in real-world scenarios, typically only the initial shocks and the final equilibrium asset prices are observable. The realized asset liquidation or portfolio rebalancing actions undertaken by financial institutions tend to remain obscured, thereby complicating the direct modeling process from liquidation to price [7]. Second, current literature utilizes exogenous forms, primarily either a linear or exponential form, to model the price impacts [8; 5]. Despite their ease of interpretation, these forms fail to deliver a comprehensive financial interpretation or a compelling economic rationale in practice [9].
Aiming to address these gaps, we introduce a novel dual network structure that models the inverse demand function in two stages: (i) The initial shocks are mapped into predicted equilibrium "liquidations" through the first neural network, and (ii) these predicted liquidations are then input into the second network to generate resultant asset prices. This methodology offers two primary advantages: First, rather than pre-specifying an analytical form, we employ deep neural networks to model the inverse demand function, which enables us to capture both linear and non-linear constructions based on realized data. Second, our framework is capable of modeling the inverse demand function even in the absence of observable ground truth liquidation data.
Moreover, our work also contributes to the explainable machine learning (XAI) literature in the financial economics realm. Although the advent of machine learning (ML) models has led to substantial transformations in the finance sector, the inherent lack of transparency and explainability in these models presents significant challenges for users and regulators in establishing trust [10; 11]. In contrast to existing XAI approaches that explain the model _ex post_[12], our framework endows each model component with explicit financial significance. Specifically, we define the hidden liquidation as a learnable intermediate value for models to apprehend, and use the dual networks to quantify relations from shocks to liquidations and then to equilibrium prices. Although the exploration of ML models in the inverse demand function is still in its nascent stages, our framework pioneers a new direction in model explainability by facilitating a component-by-component understanding of the model.
We assess the performance of our model in two contexts. First, we evaluate it using simulated data with both linear and non-linear (i.e., exponential, arctangent) inverse demand functions. Our experimental results not only demonstrate the model's ability to accurately predict the final equilibrium asset prices, but also reveal a close alignment between predicted and true liquidations. These findings indicate that our framework can deliver comprehensible results without compromising learning performance. Second, we test our model on multiple assets, considering scenarios both with and without cross-impacts. While prior studies often assume that asset prices are only indirectly connected [5; 13], the experimental results highlight that our proposed approach can realistically capture price cross-impacts during financial contagion.
The remainder of this paper is organized as follows. In the next section, we provide an overview of related work on the inverse demand function in financial contagion and XAI methods in finance. We then delve into the details of our proposed dual network framework, including the objective definition and network design. Following this, we present our experimental setup, data simulation strategy, and results. We conclude the paper by pinpointing the potential limitations and proposing avenues for future research.
## 2 Literature Review
### Literature of Inverse Demand Function
The contagion of losses through the financial system is often classified as either direct or indirect [5; 2]. Direct contagion - also known as default contagion - is characterized by the cascade of losses through the contractual obligations between financial institutions. In such an event, the failure of one institution can precipitate substantial losses for its direct counterparties; because of these losses, a counterparty may default on its own obligations, propagating the initial shock. For default contagion, systemic risk is characterized by the counterparty network [14]. In contrast, indirect contagion allows a crisis to spread through the financial system to firms that may not even have direct contractual relations with a distressed institution. In this work we focus on price-mediated contagion, in which mark-to-market accounting triggers all institutions to write down the value of their assets simultaneously; as asset values drop, firms may be forced to liquidate portions of their holdings, leading to further price impacts that become self-reinforcing within the financial system [4]. As this indirect contagion need not follow the network of counterparties, indirect contagion effects can ripple across a wide range of seemingly unrelated banks and assets [5]. Price-mediated contagion can cause orders
of magnitude greater losses to the financial system than default contagion for these reasons, see e.g., [5; 6; 15]. Hence, constructing models capable of capturing the impact of adverse shocks on asset liquidation, as well as their subsequent influence on market prices, holds pivotal importance.
The relationship between trading activity and the resulting asset prices has, previously, been mathematically modeled through the use of "inverse demand functions" [16; 7]. Within the price-mediated contagion literature, the two most prominent forms for the inverse demand function are _linear_ and _exponential_ functions. Within [5], a linear price impact model is assumed in which the sale of each asset share has a constant absolute impact on the price. Alternatively, Cifuentes et al. [8] utilize an exponential inverse demand function so that the sale of each asset share has a constant relative impact on the price. Despite the easy interpretation of these example inverse demand functions, neither of these forms is consistent with financial practice. This problem was previously highlighted by [9] which presented a theory for the construction of these price impacts through the equilibrium of an exchange economy.
Herein we consider a data-driven approach to learning the inverse demand function. Specifically, our model leverages a deep neural network for the formulation of the inverse demand function, as opposed to adopting a pre-defined analytical form. This approach affords us the flexibility to encapsulate both linear and non-linear relations drawn from the empirical data. Contrary to preceding studies, which postulate that asset prices are merely indirectly connected (see, e.g., [5; 13]), our proposed approach is able to model realistic price cross-impacts. However, this modeling exercise is hindered by data availability. Specifically, in practice, only the initial shocks and the final equilibrium asset prices are typically observable. The realized asset liquidations and portfolio rebalancing actions undertaken by the financial institutions are obscured by the financial system itself. As such, we propose a novel two-step approach in which: (i) the initial shocks map into predicted equilibrium "liquidations" and (ii) these values are fed into the second neural network, which serves as an inverse demand function, to predict the resulting asset prices. As noted, the second neural network directly produces a data-driven equivalent of the inverse demand function even in the absence of observable liquidation data. Experimental results, as discussed in the subsequent sections, reveal a close alignment between the predicted liquidations from the first neural network and the true liquidations in simulated data.
### Literature of Explainable AI in FinTech
Capitalizing on a wealth of data and advanced computational resources, the advent of ML - specifically deep learning - has brought forth a substantial transformation in the financial sector [17]. The intersection of finance and technology, often termed FinTech, has revolutionized the financial landscape by democratizing financial services, fortifying consumer protection, and refining risk management strategies [18; 19]. Academically, a growing body of literature also sheds light on the untapped potential of machine learning within the ambit of FinTech [20; 21]. However, deploying ML in real-world financial applications poses significant challenges, chiefly due to the models' inherent lack of transparency and explainability [11]. Consider contemporary deep learning (DL) models as an example: Despite the
Figure 1: Framework Overview of the Dual Neural Networks
superior accuracy when compared to traditional models like linear regression or decision trees, they are often viewed as "black boxes" that provide scant insightful information, which curtails their wider adoption within the finance sector [22]. Transparency and explainability are critical factors in the development of trustworthy and reliable FinTech solutions for two essential reasons.
First, in highly-regulated domains like finance, regulators mandate rigorous transparency requirements for the implementation of advanced technologies. These measures aim to ensure traceability of decisions [23], adherence to legal regulations [10], and observance of privacy standards [24]. With the massive amount of data generated in recent years, policymakers, legislators, and regulators all require explainable models to fulfill their expanding responsibilities and establish more proactive, data-driven regulation and surveillance approaches. Other stakeholders, including rating agencies and financial institutions, demand model transparency to evaluate their fairness in terms of industry or regional bias [25]. Even individual investors are wary of adopting models if the decision-making process remains opaque to them, regardless of the high accuracy these models might demonstrate. Second, the "black-box" nature of models can expose them to potential vulnerabilities. Studies have shown that models may erroneously interpret similar yet distinct input features, leading to identical outputs. For example, computer vision researchers reveal that DL models struggle to differentiate between yellow-and-black stripe patterns of school buses and sticker-laden parking signs [26]. Consequently, a lack of model interpretability and auditability could precipitate serious repercussions, potentially engendering macro-level risks that may cause unforeseen societal disruptions or harm [27]. Moreover, existing research indicates that attempts to enhance a model's explainability in finance often fail to elucidate the financial implications of the model in a manner comprehensible to average users. Instead, these efforts primarily serve machine learning engineers, assisting them in debugging or refining the models [25, 28]. Hence, the pressing need for explainable AI in the finance field arises not only from the models' intrinsic requirement for robustness but also from the demands of diverse stakeholders and society at large.
To address these challenges, researchers propose the concept of XAI to enhance the transparency of ML models in FinTech.3 XAI aims to generate more comprehensible models without compromising learning performance, thereby enabling humans to better understand, trust, and manage their artificially intelligent counterparts [29]. Existing XAI methods can generally be categorized into two types: intrinsic interpretability and post-hoc interpretability [12].
Footnote 3: We acknowledge that certain literature delineates the nuanced differences between interpretability and explainability in deep learning models. For instance, Misheva et al. [11] posit that interpretability pertains to understanding the relationships between cause and effect, while explainability concerns comprehending the functionality of each model component in human terms. However, as our work primarily focuses on modeling interpretable deep learning models rather than defining XAI, we employ these terms interchangeably throughout this paper as in [12].
Intrinsic interpretability methods focus on constructing models based on human-oriented constructs, facilitating human comprehension of the transformation process from inputs to outputs. These methods rely on pre-programmed rules devised by human experts and are typically deterministic [10], making it easier to identify potential biases and discriminatory practices within the algorithms [30]. However, these models' effectiveness is also constrained by their rigid, inflexible design. Such systems struggle to learn from new data or adapt to changing conditions, rendering them less effective in the fluid world of finance, which is characterized by evolving markets and unpredictable economic conditions. Furthermore, even though some ML models offer high interpretability, their ease of interpretation can diminish as the scenario complexity escalates. For example, while decision trees are typically straightforward to explain, a real-world application of predicting mortgage defaults may require hundreds of large decision trees operating in parallel, which makes it challenging to intuitively summarize how the model functions [31].
Post-hoc interpretability methods, on the other hand, target the interpretation of complex ML models. As underscored by the trade-off between model complexity and performance [12, 32], advanced ML models often outperform decision trees or case-based reasoning models but, simultaneously, their complexity renders interpretability a challenge [30]. Post-hoc XAI studies mostly hinge on tools such as Local Interpretable Model-Agnostic Explanations (LIME) or Shapley Additive explanations (SHAP) [33, 34]. For instance, studies [11, 22, 35] apply LIME and SHAP to explain credit scoring models, and their experimental results from 2.2 million peer-to-peer loan records indicated that both LIME and SHAP provided consistent explanations aligning with financial logic. To predict financial distress, Zhang et al. [25] propose an ensemble method paired with SHAP, while Park et al. [36] combine LIME with LightGBM- and XGBoost-based models to identify key features leading to bankruptcy. For asset management, Benhamou et al. [37] integrate SHAP with gradient boosting decision trees to scrutinize a feature's impact from a set of 150 macroeconomic features during the 2020 financial meltdown.
In this study, we introduce a novel explainable framework that pre-defines learnable intermediate values for deep learning models to apprehend. Diverging from traditional XAI methods - which typically explain the model post-training - our framework imbues each model component with distinct financial significance _a priori_. Although the exploration of ML for financial systemic risk is still at a fledgling stage [38], our XAI paradigm pioneers a new direction
in model explainability by facilitating a component-by-component understanding of the model. In the subsequent section, we will delve into the specifics of our model design.
## 3 Method
### Definition of the Objectives
Our proposed explainable framework integrates two interconnected networks, with the output of the first network serving as a learnable intermediary value imbued with distinct financial significance. Here we apply our framework to model the inverse demand function, as it presents two pressing requirements that our framework is proficiently positioned to address.
First, as we noted previously, in typical real-world scenarios, only the initial financial shock and the ultimate equilibrium price are visible, while asset liquidation volumes remain elusive [7]. Our model bridges this gap by assigning the unobservable liquidation as the intermediate learning value, endowing this output with a distinct financial meaning. This approach not only addresses a critical shortcoming in existing XAI frameworks that lacks financial interpretability [25; 28], but also provides a versatile framework that can be expanded to broader contexts. Specifically, it is applicable in situations where only the input and output are apparent, yet we acknowledge the existence of certain indispensable hidden values within the process. Second, as a deep learning model, our framework excels in capturing the non-linearity between the shock, liquidation, and equilibrium price. While the prevailing literature commonly fits the relation as linear or exponential without justifying the persistence of such model types [9], our framework fills this gap with a data-driven process and is readily scalable according to the data volume.
To this end, our proposed dual networks, purpose-built for the inverse demand function, fulfill two roles: (i) Deducing the liquidation volume for each asset sold by the respective banks, and (ii) Predicting the equilibrium price for each asset given financial shocks. In pursuit of these objectives, we denote the two interconnected neural networks within our framework as the Liquidation Net and the Price Net. As their names suggest, Liquidation Net ingests the shocks and produces the corresponding asset liquidation. Subsequently, Price Net employs the predicted liquidation as input to forecast the equilibrium asset price. In essence, the liquidation serves as an intermediary variable within the neural networks. The model is then trained using the loss derived from the discrepancy between the predicted and actual asset prices. Following the training phase, we examine the outputs of the Liquidation Net to infer liquidation values.
Figure 1 presents a high-level illustration of the proposed architecture. We will elucidate the construction of the two networks, notations, as well as the detailed process, in the rest of this section.
### Model Architecture and Design
We begin by outlining the transformation from financial shocks to asset liquidations via the Liquidation Net. Suppose there are \(N\) banks and \(M\) assets held by these banks in the price-mediated network. Accordingly, we denote banks by \(n\in\{1,\ldots,N\}\), assets by \(m\in\{1,\ldots,M\}\), and the impact of financial shocks on the banks by \(\mathbf{s}=[s_{1},s_{2},\ldots,s_{N}]\). Given the heterogeneity in bank sizes and asset portfolios, the assets held by banks are designated by \(\mathbf{a}=\left[a_{1}^{(1)},a_{1}^{(2)},\ldots,a_{N}^{(M)}\right]\). For instance, \(a_{1}^{(1)}=1.8\) implies that bank 1 has 1.8 units of asset 1 in its possession.
We feed the financial shocks \(\mathbf{s}\) into the Liquidation Net, a neural network made up of several fully connected layers. What differentiates our approach from traditional deep learning models is our embedding of the expected monotonic relationship between the shocks and liquidation [16; 9]. For instance, it is logical that severe shocks should drive banks to sell off more assets, resulting in increased market liquidation. Therefore, larger shock values should correspond to more substantial liquidation outputs. To integrate this positive correlation, we tailor the Liquidation Net by imposing a clamp on the trainable parameters within each layer. Clamping ensures that the parameters remain non-negative (i.e., in the range \([0,\infty)\)) during the training phase. We further adopt a Rectified Linear Unit (ReLU) activation function, defined as \(\max\{0,x\}\), to guarantee that increased shocks lead to amplified asset liquidations. The inner workings of the Liquidation Net can be expressed mathematically as:
\[\tilde{\boldsymbol{\ell}}=LiquidNet\left(\mathbf{s}\;;\Theta_{\text{Liq}} \right), \tag{1}\]
where \(\Theta_{\text{Liq}}\) represents the trainable parameters of the Liquidation Net and \(\tilde{\boldsymbol{\ell}}\) indicates the model's predicted liquidations. Considering that different banks will liquidate assets based on the shocks and their holdings, the predicted liquidation \(\tilde{\boldsymbol{\ell}}\) is comprised of assets' liquidations from each bank, represented as \(\tilde{\boldsymbol{\ell}}=\left[\tilde{\ell}_{1}^{(1)},\tilde{\ell}_{1}^{(2)},\ldots,\tilde{\ell}_{N}^{(M)}\right]\) where \(\tilde{\ell}_{n}^{(m)}\) denotes the liquidation of asset \(m\) by bank \(n\).
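For concreteness, the following is a minimal sketch of how the clamping-plus-ReLU construction above can be implemented. It is an illustration under our own assumptions (PyTorch, two hidden layers of illustrative width, a `clamp_weights` helper we introduce), not the paper's exact implementation:

```python
import torch
import torch.nn as nn

class LiquidationNet(nn.Module):
    """Maps shocks s (dim N) to per-bank, per-asset liquidations (dim N*M).

    Monotonicity in the shocks is enforced by (i) ReLU activations and
    (ii) clamping every weight matrix to be non-negative after each
    optimizer step, so larger shocks can only increase the outputs.
    """

    def __init__(self, n_banks: int, n_assets: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_banks, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_banks * n_assets), nn.ReLU(),  # liquidations >= 0
        )

    def forward(self, shocks: torch.Tensor) -> torch.Tensor:
        return self.net(shocks)

    @torch.no_grad()
    def clamp_weights(self) -> None:
        # Re-impose the non-negativity constraint on the trainable weights.
        for layer in self.net:
            if isinstance(layer, nn.Linear):
                layer.weight.clamp_(min=0.0)
```

During training, `clamp_weights()` would be called after every optimizer step so that the monotonicity constraint holds throughout.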
Following asset divestiture by the banks, which generates market liquidation, we aggregate the liquidation values of assets as:
\[\hat{\mathbf{\ell}}=\left[\hat{\ell}^{(1)},\hat{\ell}^{(2)},\ldots,\hat{\ell}^{(M)} \right],\text{ where }\quad\hat{\ell}^{(m)}=\sum_{n=1}^{N}a_{n}^{(m)}\tilde{\ell}_{n}^{(m)}. \tag{2}\]
Subsequently, these predicted liquidations of assets serve as inputs to the Price Net. Similar to the Liquidation Net, a monotonic relationship exists between liquidation and price, with higher liquidation values leading to lower prices. Hence, we also incorporate clamping and the ReLU function in the Price Net to maintain monotonicity. Contrary to the positive correlation in the Liquidation Net, a negative correlation exists here. Thus, rather than directly outputting the predicted price, we output the negative predicted price as our final result. This process is depicted in the equation below:
\[\hat{\mathbf{p}}=PriceNet\left(\hat{\mathbf{\ell}}\,;\,\Theta_{\text{Price}}\right), \tag{3}\]
where \(\Theta_{\text{Price}}\) are the trainable parameters of the Price Net and \(\hat{\mathbf{p}}=\left[\hat{p}^{(1)},\hat{p}^{(2)},\ldots,\hat{p}^{(M)}\right]\) denotes the predicted equilibrium prices of assets. The ground truth of the prices is represented as \(\mathbf{p}\). The discrepancy between \(\hat{\mathbf{p}}\) and \(\mathbf{p}\) generates the loss, which is used to backpropagate and tune the trainable parameters (i.e., \(\Theta_{\text{Liq}}\) and \(\Theta_{\text{Price}}\)). We summarize the notations in Table 1 below. In the subsequent section, we perform experiments with the simulated dataset and evaluate our model's performance relative to the established benchmarks.
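To make the data flow of Equations (1)-(3) concrete, below is a minimal sketch of one training step wiring the two networks together. The `PriceNet` architecture (in particular the free final bias, our own device for letting predicted prices sit near 1 at zero liquidation), the reading of the negation convention, and all hyper-parameters are illustrative assumptions; `liq_net` refers to the `LiquidationNet` sketch above:

```python
import torch
import torch.nn as nn

class PriceNet(nn.Module):
    """Maps aggregate liquidations (dim M) to predicted equilibrium prices.

    Non-negative weights and hidden ReLUs make the raw output monotone
    increasing in liquidation; the forward pass returns its negation so the
    predicted price decreases as liquidation grows."""

    def __init__(self, n_assets: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_assets, hidden), nn.ReLU(),
            nn.Linear(hidden, n_assets),   # free biases shift the price level
        )

    def forward(self, agg_liq: torch.Tensor) -> torch.Tensor:
        return -self.net(agg_liq)          # negated output: monotone decreasing

    @torch.no_grad()
    def clamp_weights(self) -> None:
        for layer in self.net:
            if isinstance(layer, nn.Linear):
                layer.weight.clamp_(min=0.0)

def train_step(liq_net, price_net, optimizer, shocks, holdings, prices):
    """One gradient step; shocks: (B, N), holdings: (N, M), prices: (B, M)."""
    B = shocks.shape[0]
    N, M = holdings.shape
    liq = liq_net(shocks).view(B, N, M)               # per-bank liquidations
    agg = (holdings.unsqueeze(0) * liq).sum(dim=1)    # Eq. (2): per-asset totals
    loss = nn.functional.mse_loss(price_net(agg), prices)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    liq_net.clamp_weights(); price_net.clamp_weights()  # restore monotonicity
    return loss.item()
```

After training, the intermediate `liq` tensors are read off as the inferred (unobserved) liquidations.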
## 4 Experimental Results
In this section, we undertake two detailed case studies to validate the performance of our proposed model. We first consider a single asset scenario with both linear and non-linear inverse demand functions; second, we consider a multi-asset scenario to investigate our model's capabilities to capture price cross-impacts and the utilized liquidation strategies.
### Case Study 1: Single-Asset Scenario
Consider a system with \(N=2\) banks and \(M=1\) asset. We presume that the liabilities of the target banks, denoted by \(\mathbf{L}\), are influenced by financial shocks. The magnitude of these liabilities varies according to the intensity of the financial shocks. Specifically, under the impact of these shocks, the liabilities randomly range between 0.6 and 0.85. We assume that both banks hold a single unit of the marketable illiquid asset, i.e., \(a_{1}=a_{2}=1\).
We study the performance of our dual network framework under linear, exponential, and arctangent inverse demand functions simulated with the contagion model from [7; 39]. We define the true linear inverse demand function as \(\mathbf{p}=1-0.15\ell\), the true exponential inverse demand function as \(\mathbf{p}=\exp{(-0.15\ell)}\), and the true arctangent inverse demand function as \(\mathbf{p}=\frac{\tan^{-1}(-\ell)+2\pi}{2\pi}\). To evaluate the accuracy of these models, we compare the Mean Squared Error (MSE) of each model's predicted prices (\(\hat{\mathbf{p}}\)) against the actual asset prices (\(\mathbf{p}\)). It is noteworthy that none of the existing benchmarks can model the inverse demand function in the absence of liquidation data. Therefore, we validate our performance by comparing three modeling paradigms:
\begin{table}
\begin{tabular}{l l} \hline \hline Notation & Description \\ \hline \(N\),\(M\) & number of banks and assets, respectively \\ \(\mathbf{s}\) & financial shock impact on banks \\ \(\mathbf{a}\) & marketable asset holdings of banks \\ \(\mathbf{L}\) & liabilities of banks \\ \(\tilde{\mathbf{\ell}}\) & predicted asset liquidations of banks \\ \(\hat{\mathbf{\ell}}\) & aggregate predicted liquidations of assets \\ \(\hat{\mathbf{p}}\) & predicted equilibrium prices of assets \\ \(\mathbf{p}\) & actual equilibrium prices of assets \\ \(\Theta_{\text{Liq}}\) & trainable parameters of Liquidation Net \\ \(\Theta_{\text{Price}}\) & trainable parameters of Price Net \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Used Notations
* **Proposed Model**: Our proposed model involves a dual neural network structure, and only utilizes the financial shocks as input features. Initially, the shocks are used in the Liquidation Net to predict the liquidations of the target banks. Following this, the generated predictions are repurposed as inputs to the Price Net, thereby forming a sequential, interlinked predictive process.
* **Linear Price Model**: This model substitutes the original Price Net with a linear regression model. The formulation of linear Price Net is based on the linear regression result between predicted liquidations and predicted asset prices.
* **Inclusive Model**: This model integrates both the financial shocks and the true liquidations of target banks as observed inputs. Apart from the input modifications, the rest of this inclusive model's structure aligns with that of our proposed model.
Table 2 presents the comparison of MSE results for the predicted asset prices in the single-asset scenario. As summarized in this table, our model's accuracy closely aligns with that of the inclusive model. This is notable because the inclusive model benefits from additional input information (i.e., the actual bank liquidations). Also, our model's performance is consistent across both linear and non-linear inverse demand functions.
Furthermore, we find that, although the linear price model performs well when matched to a linear inverse demand function, it provides faulty results for the strongly non-linear arctangent inverse demand function. Thus, though it is tempting to utilize this simple structure, it can easily fail in practice. We note the high performance in the exponential case; as can be seen in Figure 2(b), this inverse demand function is almost linear for the shocks considered.
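This near-linearity is easy to verify numerically with the three ground-truth inverse demand functions defined above; the liquidation range used below (the total holdings of the two banks) is the relevant one for this case study:

```python
import numpy as np

# Ground-truth inverse demand functions of Case Study 1.
linear = lambda ell: 1.0 - 0.15 * ell
exponential = lambda ell: np.exp(-0.15 * ell)
arctangent = lambda ell: (np.arctan(-ell) + 2 * np.pi) / (2 * np.pi)

# exp(-0.15*ell) stays close to its first-order expansion 1 - 0.15*ell over
# moderate liquidations, which is why the linear price model still does well
# in the exponential case but fails for the arctangent one.
ell = np.linspace(0.0, 2.0, 201)   # total liquidation <= a_1 + a_2 = 2
print(np.max(np.abs(exponential(ell) - linear(ell))))   # ~0.04
```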
In addition to validating the overall system performance (as shown in Table 2), we aim to investigate the capacity of our dual neural network structure to learn the hidden liquidation amounts. As indicated in Table 3, the liquidations (\(\hat{\ell}\)) output by our proposed model exhibit an exceptionally high correlation with the actual liquidations (\(\ell\)). Notably, this correlation remains robust across all three inverse demand function structures that we have employed.
It is important to note that in our dual network model, the predicted liquidations are only able to capture monotonic trends in response to financial shocks, rather than absolute values. To rectify this and enhance the model's efficacy in predicting absolute liquidation values, we introduce a linear scaling procedure. This process involves scaling the predictions to a range anchored by the liquidation responses to the minimal and maximal financial shocks. The scaling process is succinctly described by the following formula:
\[\ell^{*}=(\hat{\ell}-\hat{\ell}_{\text{min}})\times\frac{\ell_{\text{max}}- \ell_{\text{min}}}{\hat{\ell}_{\text{max}}-\hat{\ell}_{\text{min}}}+\ell_{ \text{min}}, \tag{4}\]
where \(\hat{\ell}\) represents the outputs from the Liquidation Net, \(\ell^{*}\) represents the scaled predicted liquidations, and \(\ell\) represents the actual liquidations. This scaling procedure is applicable because, although the actual liquidations \(\ell\) are typically unobserved in practice, the responses to minimal shocks (\(\ell_{\text{min}}\approx 0\)) and maximal shocks (\(\ell_{\text{max}}\approx a_{1}+a_{2}\)) can be accurately predicted from balance sheet information. Following this, Equation (4) scales the initial, unscaled predictions of liquidations \(\hat{\ell}\) to \(\ell^{*}\), and \(\ell^{*}\) perfectly aligns with the actual liquidations after this scaling procedure is applied.
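A minimal NumPy sketch of the rescaling in Equation (4), assuming the two anchor values are supplied from balance-sheet information as described above (the function name is ours):

```python
import numpy as np

def rescale_liquidations(ell_hat, ell_min, ell_max):
    """Affine map of Eq. (4): send raw network outputs ell_hat onto
    [ell_min, ell_max], the liquidation responses to minimal and maximal
    shocks. Assumes ell_hat is not constant."""
    ell_hat = np.asarray(ell_hat, dtype=float)
    span = ell_hat.max() - ell_hat.min()
    return (ell_hat - ell_hat.min()) * (ell_max - ell_min) / span + ell_min

# e.g., with two banks holding one unit each (ell_min ~ 0, ell_max ~ a1 + a2):
# ell_star = rescale_liquidations(raw_outputs, 0.0, 2.0)
```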
As we have successfully predicted the hidden bank liquidations and asset prices, we aim to illuminate the price impact of bank liquidations on equilibrium asset prices. This understanding is valuable for financial authorities in the
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model Name & \multicolumn{3}{c}{Mean Squared Error (MSE)} \\ \cline{2-4} & Linear & Exponential & Arctan \\ \hline Proposed Model & \(2.80\times 10^{-7}\) & \(6.80\times 10^{-7}\) & \(6.00\times 10^{-8}\) \\ Linear Price Model & \(5.20\times 10^{-7}\) & \(8.50\times 10^{-7}\) & \(1.26\times 10^{-5}\) \\ Inclusive Model & \(9.00\times 10^{-8}\) & \(1.40\times 10^{-7}\) & \(4.00\times 10^{-8}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: MSE of the Predicted Asset Prices in the Single-Asset Scenario
\begin{table}
\begin{tabular}{l c} \hline \hline
**Dataset** & **Corr Coefficient** \\ \hline Linear & 0.9999 \\ Exponential & 0.9995 \\ Arctangent & 0.9992 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Correlation Coefficients between the Predicted and Actual Liquidations in the Single-Asset Scenario
construction of effective stress tests and regulatory policies. Figure 2 provides an illustrative comparison between the predicted and actual inverse demand functions. The x-axis represents the total liquidation of assets, i.e., \(\ell_{1}^{*}+\ell_{2}^{*}\), while the y-axis shows the corresponding asset price. It demonstrates a remarkable agreement between the predicted and actual inverse demand functions, underscoring the predictive accuracy of our model. Notably, this agreement persists across both linear and non-linear inverse demand functions. In fact, in comparing our proposed model to the linear price model, we find comparable performance for the linear and exponential inverse demand functions but significantly improved performance for the arctan inverse demand function.
### Case Study 2: Multi-Asset Scenario
Consider now a financial system with \(N=2\) banks and \(M=2\) assets. We fix the total units of both assets to be 1, i.e., \(a_{1}^{(k)}+a_{2}^{(k)}=1\) for \(k=1,2\). However, in order to investigate a more complex system than in Case Study 1, we set \(a_{1}^{(1)}=0.4\), \(a_{1}^{(2)}=0.6\), \(a_{2}^{(1)}=0.6\), and \(a_{2}^{(2)}=0.4\). The banks' liabilities \(\mathbf{L}\) fluctuate based on the intensity of the financial shocks, randomly spanning a range between 0.6 and 0.9. As discussed in the multi-asset framework of Section 4 in [9], we consider the price cross-impacts that liquidating one asset can have on the price of another. Such impacts, commonplace in real-world financial markets, stand in contrast to the typical simplifying assumption of no cross-impacts adopted in previous studies (e.g., [5; 39]). We use this setting to simulate datasets with linear inverse demand functions that include or exclude price cross-impacts.
Drawing upon the financial contagion model outlined by [1; 5; 7], we validate the performance of our model under a proportional liquidation strategy. This strategy implies that each bank liquidates its portfolio proportionally to its holdings, enabling us to assess how our model learns banks' asset liquidation patterns, which can vary due to differences in portfolio holdings. Specifically, if a bank's portfolio comprises 60% of Asset 1 and 40% of Asset 2, then, when the bank liquidates, it disposes of 60% of Asset 1 and 40% of Asset 2. We would like to emphasize that the proportional liquidation strategy has been extensively examined and validated in existing literature, as evidenced by works such as [5; 13; 39].
Based on the results presented in Table 4, it is clear that our proposed model performs impressively in a complex multi-asset scenario. While the MSE of the proposed model is slightly higher compared to the inclusive model, the errors are still comparable. This demonstrates that our proposed model can accurately predict asset prices even when dealing with a greater level of complexity.
Furthermore, as demonstrated in Table 5, the predicted liquidations are highly correlated with the true liquidations for both assets, both with and without cross-impacts. We note that the same linear scaling procedure (4) can be adopted here to improve the interpretability of our results.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model Name & \multicolumn{2}{c}{Mean Squared Error (MSE)} \\ \cline{2-3} & Without Cross-Impacts & With Cross-Impacts \\ \hline Proposed Model & \(4.10\times 10^{-7}\) & \(8.00\times 10^{-8}\) \\ Inclusive Model & \(1.70\times 10^{-7}\) & \(4.00\times 10^{-8}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Sum of the MSEs of the Predicted Asset Prices in the Two-Asset Scenario
Figure 2: Predicted and Actual Inverse Demand Functions in the Single-Asset Scenario
Finally, we want to investigate our model's ability to capture the true inverse demand function with price cross-impacts. Herein, the true inverse demand function for asset 1 is given by \(\mathbf{p}_{1}=1-0.15\ell_{1}-0.015\ell_{2}\). For this experiment, we fit a multiple linear regression with the predicted price of asset 1 (\(\hat{p}_{1}\)) as the dependent variable, and the scaled predicted liquidations of both assets (\(\ell^{*}\)) as independent variables. This regression analysis, as displayed in Table 6, has estimated coefficients of \(-0.149\) and \(-0.015\) for the scaled predicted liquidations of assets 1 and 2 respectively, and an intercept of approximately \(1.000\).
We derive two major conclusions from the statistical tests performed. First, we fail to reject the null hypothesis that these coefficients are the true values (i.e., the coefficients are \(-0.15\) and \(-0.015\) for assets 1 and 2, respectively, and the intercept is \(1.000\)), with p-values of 0.935, 0.951, and 0.912 for these coefficients and intercept. This indicates that the model's estimates align closely with the true values in the inverse demand function. Second, when we test the null hypothesis that the coefficients are equal to zero, we reject this hypothesis as illustrated with extremely low p-values (<0.001, 0.032, and <0.001) in Table 7. This provides significant statistical evidence that the neural network accurately identifies the real price cross-impacts.
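Both tests reduce to an OLS fit followed by two-sided t-tests against the respective null values. The sketch below (function and variable names are our own) uses `statsmodels` for the fit and computes the t-statistics directly so that non-zero nulls, as in Table 6, are handled uniformly:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def coefficient_tests(p1_hat, ell1_star, ell2_star, null_values):
    """Regress predicted asset-1 prices on the scaled predicted liquidations
    and test each coefficient against a given null.

    null_values: length-3 array ordered (intercept, beta_1, beta_2), e.g.
    (1.0, -0.15, -0.015) for the true-value test of Table 6, or all zeros
    for the significance test of Table 7."""
    X = sm.add_constant(np.column_stack([ell1_star, ell2_star]))
    res = sm.OLS(np.asarray(p1_hat), X).fit()
    t = (res.params - np.asarray(null_values)) / res.bse
    p = 2 * stats.t.sf(np.abs(t), df=res.df_resid)
    return res.params, t, p
```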
## 5 Conclusion
Within this work, we proposed a novel two-step neural network method to learn the inverse demand function from only partial information. Notably, although the liquidation values are unobserved, our proposed procedure is able to learn them with a high degree of accuracy. In this way, this procedure is designed so as to be _ex ante_ interpretable. To the best of our knowledge, this is the first study capable of modeling the inverse demand function in the absence of liquidation data. Given the robust performance observed in our numerical case studies, this method can be practically employed to construct a data-driven inverse demand function rather than the exogenous forms typically used in practice.
Our work further highlights a pair of promising avenues for future research. First, despite our model effectively capturing cross-impacts in multi-asset scenarios, our investigation focused on scenarios involving a limited number of assets and banks. Real-world cases, however, may encompass a more extensive range of assets or banks, often characterized by complex relational structures. Thus, future research can integrate more comprehensive, interconnected datasets to derive profound insights.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Linear Regression Results} & \multicolumn{2}{c}{Statistical Test} \\ \cline{2-5} Variable & Estimate & Standard Error & t-value & p-value \\ \hline Predicted Liquidation of Asset 1 & -0.149 & 0.010 & 0.082 & 0.935 \\ Predicted Liquidation of Asset 2 & -0.015 & 0.007 & -0.062 & 0.951 \\ Intercept & 1.000 & 0.004 & 0.110 & 0.912 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Linear Regression Estimates and Statistical Hypothesis Test Results for the Predicted Price of Asset 1. The Null Hypothesis for the Coefficients are: \(\hat{a}=-0.15\), \(\hat{b}=-0.015\) and for the Intercept, \(\hat{c}=1\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Linear Regression Results} & \multicolumn{2}{c}{Statistical Test} \\ \cline{2-5} Variable & Estimate & Standard Error & t-value & p-value \\ \hline Predicted Liquidation of Asset 1 & -0.149 & 0.010 & -14.9 & *** \\ Predicted Liquidation of Asset 2 & -0.015 & 0.007 & -2.14 & * \\ Intercept & 1.000 & 0.004 & 250.0 & *** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Linear Regression Estimates and Statistical Hypothesis Test Results for the Predicted Price of Asset 1. The Null Hypothesis for the Coefficients are: \(\hat{a}=0\), \(\hat{b}=0\) and for the Intercept, \(\hat{c}=0\).
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Dataset** & **Corr Coefficient Asset 1** & **Corr Coefficient Asset 2** \\ \hline Without Cross-Impacts & 0.9954 & 0.9523 \\ With Cross-Impacts & 0.9999 & 0.9998 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Correlation Coefficients between the Predicted and Actual Liquidations in the Two-Asset Scenario
Second, we deliberately maintain a simple form of neural networks in our framework, in order to keep the number of trainable parameters consistent with benchmark models [40]. Our experiments attest that the proposed framework achieves a comparable level of price prediction accuracy, even when only partial information is employed. Considering the inherent noise of real-world data, the incorporation of more sophisticated deep learning architectures could further enhance the effectiveness of our proposed method. For example, replacing the fully connected networks with Graph Neural Networks (GNNs) could be a compelling development. Since GNNs consider both the features and relations of banks during prediction, this transition offers a potential improvement to the model's adaptability in practical scenarios.
|
2306.05674 | Efficient Uncertainty Quantification and Reduction for
Over-Parameterized Neural Networks | Uncertainty quantification (UQ) is important for reliability assessment and
enhancement of machine learning models. In deep learning, uncertainties arise
not only from data, but also from the training procedure that often injects
substantial noises and biases. These hinder the attainment of statistical
guarantees and, moreover, impose computational challenges on UQ due to the need
for repeated network retraining. Building upon the recent neural tangent kernel
theory, we create statistically guaranteed schemes to principally
\emph{characterize}, and \emph{remove}, the uncertainty of over-parameterized
neural networks with very low computation effort. In particular, our approach,
based on what we call a procedural-noise-correcting (PNC) predictor, removes
the procedural uncertainty by using only \emph{one} auxiliary network that is
trained on a suitably labeled dataset, instead of many retrained networks
employed in deep ensembles. Moreover, by combining our PNC predictor with
suitable light-computation resampling methods, we build several approaches to
construct asymptotically exact-coverage confidence intervals using as low as
four trained networks without additional overheads. | Ziyi Huang, Henry Lam, Haofeng Zhang | 2023-06-09T05:15:53Z | http://arxiv.org/abs/2306.05674v2 | # Efficient Uncertainty Quantification and Reduction for Over-Parameterized Neural Networks
###### Abstract
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine learning models. In deep learning, uncertainties arise not only from data, but also from the training procedure that often injects substantial noises and biases. These hinder the attainment of statistical guarantees and, moreover, impose computational challenges on UQ due to the need for repeated network retraining. Building upon the recent neural tangent kernel theory, we create statistically guaranteed schemes to principally _quantify_, and _remove_, the procedural uncertainty of over-parameterized neural networks with very low computation effort. In particular, our approach, based on what we call a procedural-noise-correcting (PNC) predictor, removes the procedural uncertainty by using only _one_ auxiliary network that is trained on a suitably labeled data set, instead of many retrained networks employed in deep ensembles. Moreover, by combining our PNC predictor with suitable light-computation resampling methods, we build several approaches to construct asymptotically exact-coverage confidence intervals using as low as four trained networks without additional overheads.
## 1 Introduction
Uncertainty quantification (UQ) concerns the dissection and estimation of various sources of errors in a prediction model. It has growing importance in machine learning, as it helps assess and enhance the trustworthiness and deployment safety across many real-world tasks ranging from computer vision [64; 21] and natural language processing [106; 98] to autonomous driving [85; 86], as well as guiding exploration in sequential learning [9; 1; 111]. In the deep learning context, UQ encounters unique challenges on both the statistical and computational fronts. On a high level, these challenges arise from the over-parametrized and large-scale nature of neural networks so that, unlike classical statistical models, the prediction outcomes incur noise not only from the data but also, importantly, from the training procedure itself [70; 89]. This elicits a deviation from the classical statistical theory that hinders the attainment of established guarantees. Moreover, because of the sizes of these models, conventional procedures such as resampling [97] demand an amount of computation that could quickly become infeasible.
Our main goal of this paper is to propose a UQ framework for epistemic uncertainty for over-parametrized neural networks that has simultaneous _statistical coverage guarantee_, in the sense of classical frequentist asymptotic exactness, and _low computation cost_, in the sense of requiring only few (as low as four) neural network trainings, without other extra overheads. A main driver of these strengths in our framework is a new implementable concept, which we call the _Procedural-Noise-Correcting (PNC)_ predictor. It consists of an auxiliary network that is trained on a suitably artificially labeled data set, with behavior mimicking the variability coming from the training procedure. To reach our goal, we synthesize and build on two recent lines of tools that appear largely segregated thus far. First is neural tangent kernel (NTK) theory [61; 75; 7], which provides explicit approximate
formulas for well-trained infinitely wide neural networks. Importantly, NTK reveals how procedural variability enters into the prediction outcomes through, in a sense, an approximately shifted kernel regression that guides our PNC construction. Second is light-computation resampling methodology, including batching [44; 94; 95] and the so-called cheap bootstrap method [71], which allows valid confidence interval construction using extremely few model retrainings. We suitably enhance these methods to account for both data and procedural variabilities via the PNC incorporation.
We compare our framework with several major related lines of work. First, our work focuses on the quantification of epistemic uncertainty, which refers to the errors coming from the inadequacy of the model or data noises. This is different from aleatoric uncertainty, which refers to the intrinsic stochasticity of the problem [87; 91; 11; 59; 34], or predictive uncertainty which captures the sum of epistemic and aleatoric uncertainties (but not their dissection) [89; 92; 12; 3; 22]. Regarding epistemic uncertainty, a related line of study is deep ensemble that aggregates predictions from multiple independent training replications [76; 70; 38; 8; 89]. This approach, as we will make clear later, can reduce and potentially quantify procedural variability, but a naive use would require demanding retraining effort and does not address data variability. Another line is the Bayesian UQ approach on neural networks [41; 2]. This regards network weights as parameters subject to common priors such as Gaussian. Because of the computation difficulties in exact inference, an array of studies investigate efficient approximate inference approaches to estimate the posteriors [40; 46; 16; 31; 30; 90; 74; 53]. While powerful, these approaches nonetheless possess inference error that could be hard to quantify, and ultimately finding rigorous guarantees on the performance of these approximate posteriors remains open to our best knowledge.
## 2 Statistical Framework of Uncertainty
We first describe our learning setting and define uncertainties in our framework. Suppose the input-output pair \((X,Y)\) is a random vector following an unknown probability distribution \(\pi\) on \(\mathcal{X}\times\mathcal{Y}\), where \(X\in\mathcal{X}\subset\mathbb{R}^{d}\) is an input and \(Y\in\mathcal{Y}\subset\mathbb{R}\) is the response. Let the marginal distribution of \(X\) be \(\pi_{X}(x)\) and the conditional distribution of \(Y\) given \(X\) be \(\pi_{Y|X}(y|x)\). Given a set of training data \(\mathcal{D}=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})\}\) drawn i.i.d. from \(\pi\) (we write \(\mathbf{x}=(x_{1},...,x_{n})^{T}\) and \(\mathbf{y}=(y_{1},...,y_{n})^{T}\) for short), we build a prediction model \(h:\mathcal{X}\rightarrow\mathcal{Y}\) that best approximates \(Y\) given \(X\). Let \(\pi_{\mathcal{D}}\) be the empirical distribution associated with the training data \(\mathcal{D}\).
To this end, provided a loss function \(\mathcal{L}:\mathcal{Y}\times\mathbb{R}\rightarrow\mathbb{R}\), we denote the population risk \(R_{\pi}(h):=\mathbb{E}_{(X,Y)\sim\pi}[\mathcal{L}(h(X),Y)]\). If \(h\) is allowed to be any possible functions, the best prediction model is the _Bayes predictor_[58; 88]: \(h^{*}_{B}(X)\in\text{argmin}_{y\in\mathcal{Y}}\mathbb{E}_{Y\sim\pi_{Y|X}}[ \mathcal{L}(y,Y)|X]\). With finite training data of size \(n\), classical statistical learning suggests finding \(\hat{h}_{n}\in\mathcal{H}\), where \(\mathcal{H}\) denotes a hypothesis class that minimizes the empirical risk, i.e., \(\hat{h}_{n}\in\text{argmin}_{h\in\mathcal{H}}R_{\pi_{\mathcal{D}}}(h)\). This framework fits methods such as linear or kernel ridge regression (Appendix B), but because of non-convexity and other issues, it is not feasible for deep learning. Instead, in practice, gradient-based optimization methods are used, giving rise to \(\hat{h}_{n,\gamma}\) as a variant of \(\hat{h}_{n}\), where the additional variable \(\gamma\) represents the randomness in the training procedure. It is worth mentioning that this randomness generally depends on the empirical data \(\mathcal{D}\), and thus we use \(P_{\gamma|\mathcal{D}}\) to represent the distribution of \(\gamma\) conditional on \(\mathcal{D}\).
Furthermore, we consider \(\hat{h}_{n}^{*}=\text{aggregate}(\{\hat{h}_{n,\gamma}:\gamma\sim P_{\gamma| \mathcal{D}}\})\) where "aggregate" means an idealized aggregation approach to remove the training randomness in \(\hat{h}_{n,\gamma}\) (known as ensemble learning [70; 18; 17; 82; 42]). A prime example in deep learning is deep ensemble [70] that we will detail in the sequel. Finally, we denote \(h^{*}=\lim_{n\rightarrow\infty}\hat{h}_{n}^{*}\) as the grand "limiting" predictor when we have infinite samples. The exact meaning of "lim" will be clear momentarily. Under this framework, we can dissect epistemic uncertainty into three sources (illustrated in Figure 1):
**Model approximation error** \(\text{UQ}_{AE}=h^{*}-h_{B}^{*}\). This discrepancy between \(h_{B}^{*}\) and \(h^{*}\) arises from the inadequacy of the hypothesis class \(\mathcal{H}\). For an over-parameterized sufficiently wide neural network \(\mathcal{H}\), this error is usually negligible because of the universal approximation power of neural networks for any continuous functions [27; 55; 50] or Lebesgue-integrable functions [83].
Figure 1: Three sources of epistemic uncertainty.
**Data variability** UQ\({}_{DV}=\hat{h}_{n}^{*}-h^{*}\). This measures the representativeness of the training dataset, which is the most standard epistemic uncertainty in classical statistics [104].
**Procedural variability** UQ\({}_{PV}=\hat{h}_{n,\gamma}-\hat{h}_{n}^{*}\). This arises from the randomness in the training process for a single network \(\hat{h}_{n,\gamma}\), which is present even with deterministic or infinite data. The randomness comes from the initialization of the network parameters, and also data ordering and possibly training time when we run stochastic gradient descent with finite training epochs.
## 3 Quantifying Epistemic Uncertainty
We use a frequentist framework and, for a given \(x\), we aim to construct a confidence interval for the Bayes predictor \(h_{B}^{*}(x)\). As discussed in the introduction, the over-parametrized nature of neural networks defies conventional statistical techniques and moreover introduces procedural variability that makes inference difficult. We focus on sufficiently wide neural networks, which gives two theoretical advantages. First, this makes model approximation error negligible and a confidence interval for \(h^{*}(x)\) also applies to \(h_{B}^{*}(x)\). Second, the NTK theory [61] implies a phenomenon that the network evolves essentially as a "linear model" under gradient descent, and thus the resulting predictor behaves like a (shifted) kernel ridge regression whose kernel is the NTK [7; 75; 51; 56; 110] (more details in Appendix C). However, we do not find off-the-shelf results that exactly match our need for the epistemic uncertainty task, so we describe our result below. Consider the following regression problem:
\[\hat{R}_{n}(f_{\theta},\theta^{b})=\frac{1}{n}\sum_{i=1}^{n}(f_{ \theta}(x_{i})-y_{i})^{2}+\lambda_{n}\|\theta-\theta^{b}\|_{2}^{2}. \tag{1}\]
where \(\theta^{b}\) is an initialization network parameter and \(\lambda_{n}>0\) is the regularization hyper-parameter which may depend on the data size. We add regularization \(\lambda_{n}\) to this problem, which is slightly different from previous work [75] without the regularization \(\lambda_{n}\), since it can guarantee the stable computation of the inversion of the NTK Gram matrix and can be naturally linked to kernel ridge regression, as we will show shortly in Proposition 3.1. We assume the network adopts the NTK parametrization, and its parameters are randomly initialized using the He initialization, and it is sufficiently (infinitely) wide so that the linearized neural network assumption holds; See Appendix C for details. Moreover, we assume that the network is trained using the loss function in (1) via continuous-time gradient flow by feeding the entire training data and using sufficient training time (\(t\rightarrow\infty\)). In this sense, the uncertainty of data ordering and training time vanishes, and thus the random initialization is the only uncertainty in procedural variability. The above specifications are formally summarized in Assumption C.2. With them, we have
**Proposition 3.1** (Proposition C.3).: _Suppose that Assumption C.2 holds. Then the final trained network, conditional on the initial network \(s_{\theta^{b}}(x)\), is given by_
\[\hat{h}_{n,\theta^{b}}(x)=s_{\theta^{b}}(x)+\mathbf{K}(x,\mathbf{x})^{T} (\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}(\mathbf{y}-s_{\theta^{b}}(\mathbf{x})), \tag{2}\]
_where in the subscript of \(\hat{h}_{n,\theta^{b}}\), \(n\) represents \(n\) training data, \(\theta^{b}\) represents an instance of the initial network parameters drawn from the standard Gaussian distribution \(P_{\theta^{b}}=\mathcal{N}(0,\mathbf{I}_{p})\) where the dimension of \(\theta^{b}\) is \(p\) (He initialization); \(\mathbf{K}(\mathbf{x},\mathbf{x}):=(K(x_{i},x_{j}))_{i,j=1,...,n}\) and \(\mathbf{K}(x,\mathbf{x}):=(K(x,x_{1}),K(x,x_{2}),...,K(x,x_{n}))^{T}\) where \(K\) is the (population) NTK. This implies that \(\hat{h}_{n,\theta^{b}}\) is the solution to the following kernel ridge regression: \(s_{\theta^{b}}+\underset{g\in\bar{\mathcal{H}}}{\arg\min}\ \frac{1}{n}\sum_{i=1}^{n}(y_{i}-s_{\theta^{b}}(x_{i})-g(x_{i}))^{2}+ \lambda_{n}\|g\|_{\bar{\mathcal{H}}}^{2}\) where \(\bar{\mathcal{H}}\) is the reproducing kernel Hilbert space constructed from the NTK \(K(x,x^{\prime})\)._
Proposition 3.1 shows that the shifted kernel ridge regressor using NTK with a shift from an initial function \(s_{\theta_{b}}\) is exactly the linearized neural network regressor that starts from the initial network \(s_{\theta_{b}}\). We provide details and the derivation of Proposition 3.1 in Appendix C.
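For intuition, the closed form (2) is cheap to evaluate once a kernel is fixed. The NumPy sketch below is illustrative only: the RBF kernel stands in for the (recursively defined) NTK, and `s_init` plays the role of the initial network \(s_{\theta^{b}}\):

```python
import numpy as np

def shifted_kernel_ridge(x_query, X, y, s_init, lam, kernel):
    """Evaluate Eq. (2): s(x) + K(x, X) (K(X, X) + lam*n*I)^{-1} (y - s(X)).

    X: (n, d) inputs; y: (n,) labels; s_init: callable giving the initial
    network's output at a single input; kernel: callable returning a Gram
    matrix for two input arrays."""
    n = X.shape[0]
    K_xx = kernel(X, X)                                 # (n, n) Gram matrix
    K_qx = kernel(np.atleast_2d(x_query), X).ravel()    # (n,)
    resid = y - np.array([s_init(xi) for xi in X])
    alpha = np.linalg.solve(K_xx + lam * n * np.eye(n), resid)
    return s_init(x_query) + K_qx @ alpha

def rbf(A, B, gamma=1.0):
    # Stand-in kernel (an assumption; NOT the NTK).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
```

With \(s_{\theta^{b}}\equiv 0\), this reduces to plain kernel ridge regression, matching the last statement of the proposition.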
### Challenges from the Interplay of Procedural and Data Variabilities
Next, to motivate our main approach based on the PNC predictor, let us discuss the existing challenges in quantifying and reducing uncertainty under the NTK framework. To this end, deep ensemble [76; 70; 38; 8; 89] is arguably the most common ensemble approach in deep learning to reduce procedural variability. [8] shows that deep ensemble achieves the best performance compared with a wide
range of other ensemble methods, and more networks in deep ensemble lead to better performance. Specifically, the _deep ensemble predictor_ [70] \(\hat{h}_{n}^{m}(x)\) is defined as: \(\hat{h}_{n}^{m}(x):=\frac{1}{m}\sum_{i=1}^{m}\hat{h}_{n,\theta_{i}^{b}}(x)\) where \(m\) is the number of networks in the ensemble, \(\hat{h}_{n,\theta_{i}^{b}}(x)\) is the independently trained network with initialization \(\theta_{i}^{b}\) (with the same training data \(\mathcal{D}\)) and \(\theta_{1}^{b},...,\theta_{m}^{b}\) are i.i.d. samples drawn from \(P_{\theta^{b}}\). We also introduce \(\hat{h}_{n}^{*}(x):=\mathbb{E}_{P_{\theta^{b}}}[\hat{h}_{n,\theta^{b}}(x)]\) as the expectation of \(\hat{h}_{n,\theta^{b}}(x)\) with respect to \(\theta^{b}\sim P_{\theta^{b}}\). Taking \(m\rightarrow\infty\) and using the law of large numbers, we have \(\lim_{m\rightarrow\infty}\hat{h}_{n}^{m}(x)=\hat{h}_{n}^{*}(x)\) a.s. So \(\hat{h}_{n}^{*}(x)\) behaves as an _idealized_ deep ensemble predictor with infinitely many independent training procedures. Using Proposition 3.1 and the linearity of the kernel ridge regressor with respect to data (Appendix B), we have
\[\hat{h}_{n}^{m}(x)=\frac{1}{m}\sum_{i=1}^{m}s_{\theta_{i}^{b}}(x)+\mathbf{K}(x, \mathbf{x})^{T}(\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}\left(\mathbf{y}-\frac{1 }{m}\sum_{i=1}^{m}s_{\theta_{i}^{b}}(\mathbf{x})\right)\]
\[\hat{h}_{n}^{*}(x)=\mathbb{E}_{P_{\theta^{b}}}[\hat{h}_{n,\theta^{b}}(x)]= \bar{s}(x)+\mathbf{K}(x,\mathbf{x})^{T}(\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1} (\mathbf{y}-\bar{s}(\mathbf{x})) \tag{3}\]
where \(\bar{s}(x)=\mathbb{E}_{P_{\theta^{b}}}[s_{\theta^{b}}(x)]\) is the expectation of \(s_{\theta^{b}}(x)\) with respect to the distribution of the initialization parameters \(P_{\theta^{b}}=\mathcal{N}(0,\mathbf{I}_{p})\). It is easy to see that \(\mathbb{E}_{P_{\theta^{b}}}[\hat{h}_{n}^{m}(x)]=\hat{h}_{n}^{*}(x)\) and \(\text{Var}_{P_{\theta^{b}}}(\hat{h}_{n}^{m}(x))=\frac{1}{m}\text{Var}_{P_{ \theta^{b}}}(\hat{h}_{n,\theta^{b}}(x))\) where \(\text{Var}_{P_{\theta^{b}}}\) is the variance with respect to the random initialization. As for the total variance,
**Proposition 3.2**.: _We have_
\[\text{Var}(\hat{h}_{n}^{m}(x))=\text{Var}(\hat{h}_{n}^{*}(x))+\frac{1}{m} \mathbb{E}[\text{Var}(\hat{h}_{n,\theta^{b}}(x)|\mathcal{D})]\leq\text{Var}( \hat{h}_{n}^{*}(x))+\mathbb{E}[\text{Var}(\hat{h}_{n,\theta^{b}}(x)|\mathcal{D })]=\text{Var}(\hat{h}_{n,\theta^{b}}(x)),\]
_where the variance is taken with respect to both the data \(\mathcal{D}\) and the random initialization \(P_{\theta^{b}}\)._
Proposition 3.2 shows that deep ensemble improves the statistical profile of a single model by reducing its procedural variability by a factor \(\frac{1}{m}\) (but not the data variability), and achieving this reduction requires \(m\) trainings. To quantify the epistemic uncertainty in a deep ensemble, we may employ resampling approaches such as "bootstrap on a deep ensemble". This would involve two layers of sampling, the outer being the resampling of data, and the inner being the retraining of base networks with different initializations. In other words, this "nested" sampling amounts to a multiplicative amount of training effort: the large number of outer bootstrap resamples times the \(m\) inner retrainings per resample. This leads to a huge computational burden.
In the following, we introduce our PNC framework that can bypass the above issues in that:
**Uncertainty reduction.** We train one single network and _one_ additional auxiliary network to completely remove the procedural variability. This is in contrast to deep ensemble that trains \(m\) networks and only reduces the procedural variability by an \(m\)-factor.
**Uncertainty quantification.** We resolve the computational challenges in "bootstrap on a deep ensemble", by combining PNC predictors with low-budget inference tools that require only as low as _four_ network trainings.
### PNC Predictor and Procedural Variability Removal
We first develop a computationally efficient approach to obtain \(\hat{h}_{n}^{*}(x)\), the idealized deep ensemble predictor that is free of procedural variability. We term our approach the procedural-noise-correcting (PNC) predictor, whose pseudo-code is given in Algorithm 1. This predictor consists of a difference between essentially two neural network outcomes, \(\hat{h}_{n,\theta^{b}}(x)\) which is the original network trained using one initialization, and \(\hat{\phi}_{n,\theta^{b}}(x)\) that is trained on a suitably artificially labeled dataset. More precisely, this dataset applies label \(\bar{s}(x_{i})\) to \(x_{i}\), the expected outcome of an _untrained_ network with random initialization. Note that labeling this artificial dataset does not involve any training process and, compared with standard network training, the only additional running time in the PNC predictor is to train this single artificial-label-trained network.
To motivate this predictor, we first note that, instead of directly computing \(\hat{h}_{n}^{*}(x)\), we can compute the procedural noise, which is given by
\[\hat{\phi}_{n,\theta^{b}}(x):=\hat{h}_{n,\theta^{b}}(x)-\hat{h}_{n}^{*}(x)=s _{\theta^{b}}(x)-\bar{s}(x)+\mathbf{K}(x,\mathbf{x})^{T}(\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda _{n}n\mathbf{I})^{-1}(\bar{s}(\mathbf{x})-s_{\theta^{b}}(\mathbf{x})). \tag{4}\]
By Proposition 3.1, this closed-form expression of \(\hat{\phi}_{n,\theta^{b}}\) corresponds exactly to the auxiliary network described above. This subsequently leads to:
**Theorem 3.3** (Pnc).: _Suppose that Assumption C.2 holds. Then the output of the PNC predictor (Algorithm 1) is exactly \(\hat{h}_{n}^{*}(x)\) given in (3)._
There are two approaches to computing \(\bar{s}(x)\) in Algorithm 1: 1) Under He initialization, \(s_{\theta^{b}}(x)\) is a zero-mean Gaussian process in the infinite width limit [75], and thus we can numerically set \(\bar{s}\equiv 0\). 2) An alternative approach that does not require specific initialization is to use Monte Carlo integration: \(\bar{s}(x)=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}s_{\theta^{b}_{i}}(x)\) where \(\theta^{b}_{i}\) are i.i.d. from \(P_{\theta^{b}}\). When \(N\) is finite, it introduces procedural variance that vanishes at the order \(N^{-1}\). However, since this computation does not require any training and is extremely fast in practice, we may use \(N\gg n\) to guarantee that compared with data variance at the order \(n^{-1}\), procedural variance in computing \(\bar{s}(x)\) is negligible. In experiments, both approaches work well, and we will use the first approach for efficiency. Note that although \(\bar{s}\equiv 0\) in the infinite width limit [75], it does not imply that the trained auxiliary neural network in Algorithm 1 will be a zero network (all parameters are zero). In fact, over-parameterized neural networks have many global minima, and the auxiliary neural network will find one of the global minima that is very close to the \(\theta^{b}\)[33; 10].
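Below is a minimal sketch of the PNC construction in Algorithm 1, using the first approach (\(\bar{s}\equiv 0\)) so that the auxiliary network is trained on zero labels. Discrete-time full-batch gradient descent here is an illustrative stand-in for the continuous-time gradient flow of Assumption C.2, and `make_net` is assumed to construct a sufficiently wide network whose output shape matches `y`:

```python
import copy
import torch
import torch.nn as nn

def train(net, X, y, lam, theta_b, epochs=2000, lr=1e-2):
    """Full-batch gradient descent on the regularized loss (1):
    MSE + lam * ||theta - theta_b||^2. X: (n, d), y: (n, 1)."""
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), y)
        reg = sum(((p - p0) ** 2).sum()
                  for p, p0 in zip(net.parameters(), theta_b))
        (loss + lam * reg).backward()
        opt.step()
    return net

def pnc_predict(make_net, X, y, lam):
    """PNC predictor: h_{n,theta_b} - phi_{n,theta_b}.

    Both networks start from the SAME initialization theta_b; the auxiliary
    network is trained on the artificially labeled dataset (x_i, 0), since
    bar{s} = 0 under He initialization in the wide limit."""
    net0 = make_net()                                  # draws theta_b
    theta_b = [p.detach().clone() for p in net0.parameters()]
    h = train(copy.deepcopy(net0), X, y, lam, theta_b)
    phi = train(copy.deepcopy(net0), X, torch.zeros_like(y), lam, theta_b)
    return lambda x: h(x) - phi(x)                     # approximates h_n^*(x)
```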
Compared with Algorithm 1, the following candidate approaches encounter computational issues: 1) Using deep ensemble with sufficiently many networks in the ensemble. However, this approach is time-consuming as \(m\) networks in the ensemble mean \(m\)-fold training times. \(m\) is typically as small as 5 in practice [70] so it cannot approximate \(\hat{h}_{n}^{*}(x)\) well. 2) Using the closed-form expression of \(\hat{h}_{n}^{*}\) in (3), which requires computing the NTK \(K(x,x^{\prime})\) and the inversion of the NTK Gram matrix. \(K(x,x^{\prime})\) is recursively defined and does not have a simple form for computation, which might be addressed by approximating it with the empirical NTK \(K_{\theta}(x,x^{\prime})\) numerically (See Appendix C). However, the inversion of the NTK Gram matrix is another issue: When the training data size is large, the NTK Gram matrix is large, and the matrix inversion becomes time-consuming.
### Constructing Confidence Intervals from PNC Predictors
We construct confidence intervals for \(h^{*}(x)\) leveraging our PNC predictor in Algorithm 1. To handle data variability, two lines of work borrowed from classical statistics may be considered. First is an analytical approach using the delta method for asymptotic normality, which involves computing the influence function [48; 35] that acts as the functional gradient of the predictor with respect to the data distribution. It was initially introduced in modern machine learning for understanding a training point's effect on a model's prediction [69; 68; 13]. The second is to use resampling [97], such as bootstrap or jackknife, to avoid explicit variance computation. The classical resampling method requires enough resample replications and thus incurs demanding resampling effort. For instance, the jackknife requires the same number of training times as the training data size, which is extremely time-consuming. Standard bootstrap requires a sufficient number of resampling and retraining to produce accurate resample quantiles. Given these computational bottlenecks, we will utilize light-computation resampling alternatives, including batching [44; 94; 95] and the so-called cheap bootstrap method [71; 72], which allow valid confidence interval construction using extremely few model retrainings.
**Large-sample asymptotics of the PNC predictor.** To derive our intervals, we first examine the large-sample properties of the PNC predictor. Proposition 3.1 shows that \(\hat{h}_{n}^{*}(x)\) in (3) is the
solution to the following empirical risk minimization problem:
\[\hat{h}_{n}^{*}(\cdot)=\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\frac{1}{n}\sum_{i=1 }^{n}[(y_{i}-\bar{s}(x_{i})-g(x_{i}))^{2}]+\lambda_{n}\|g\|_{\mathcal{H}}^{2} \tag{5}\]
Therefore, its corresponding population risk minimization problem (i.e., removing the data variability) is:
\[h^{*}(\cdot)=\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\mathbb{E}_{\pi}[(Y-\bar{s}( X)-g(X))^{2}]+\lambda_{0}\|g\|_{\mathcal{H}}^{2} \tag{6}\]
where \(\lambda_{0}=\lim_{n\to\infty}\lambda_{n}\). To study the difference between the empirical and population risk minimization problems of kernel ridge regression, we introduce the following established result on the asymptotic normality of kernel ridge regression (See Appendix B for details):
**Proposition 3.4** (Asymptotic normality of kernel ridge regression [47]).: _Suppose that Assumptions B.3 and B.4 hold. Let \(f_{P}\) be the solution to the following problem: \(\min_{g\in\mathcal{H}}\mathbb{E}_{P}[(Y-g(X))^{2}]+\lambda_{P}\|g\|_{\mathcal{ H}}^{2}\) where \(P=\pi_{n}\) or \(\pi\). Then \(\sqrt{n}(f_{\pi_{n}}-f_{\pi})\Rightarrow\mathbb{G}\) in \(\mathcal{\bar{H}}\) where \(\mathbb{G}\) is a zero-mean Gaussian process and \(\Rightarrow\) represents "converges weakly". Moreover, at point \(x\), \(\sqrt{n}(f_{\pi_{n}}(x)-f_{\pi}(x))\Rightarrow\mathcal{N}(0,\sigma_{\pi}^{2}(x))\) where \(\sigma_{\pi}^{2}(x)=\int_{z\in\mathcal{X}\times\mathcal{Y}}IF^{2}(z;f_{P},\pi) (x)d\pi(z)\) and \(IF\) is the influence function of statistical functional \(f_{P}\)._
Next, we apply the above proposition to our problems about \(\hat{h}_{n}^{*}\). Let \(T_{1}(P)(x)\) be the solution of the following problem \(\min_{g\in\mathcal{H}}\mathbb{E}_{P}[(Y-\bar{s}(X)-g(X))^{2}]+\lambda_{P}\|g \|_{\mathcal{H}}^{2}\) that is evaluated at a point \(x\). Here \(\lambda_{\pi_{n}}=\lambda_{n}\) and \(\lambda_{\pi}=\lambda_{0}\). Then we have the following large-sample asymptotics of the PNC predictor, which provides the theoretical foundation for constructing an asymptotic confidence interval.
**Theorem 3.5** (Large-sample asymptotics of the PNC predictor).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-\bar{s}(X)\). Input the training data \(\mathcal{D}\) into Algorithm 1 to obtain \(\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)\). We have_
\[\sqrt{n}\left(\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)-h^{*}(x) \right)\Rightarrow\mathcal{N}(0,\sigma_{\theta^{b}}^{2}(x)), \tag{7}\]
_where_
\[\sigma_{\theta^{b}}^{2}(x)=\int_{z\in\mathcal{X}\times\mathcal{Y}}IF^{2}(z;T_ {1},\pi)(x)d\pi(z). \tag{8}\]
_Thus, an asymptotically (in the sense of \(n\to\infty\)) exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\) is \([\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)-q_{1-\frac{\alpha}{2}},\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)+q_{1-\frac{\alpha}{2}}]\) where \(q_{\alpha}\) is the \(\alpha\)-quantile of the distribution \(\mathcal{N}(0,\frac{1}{n}\sigma_{\theta^{b}}^{2}(x))\)._
In general, \(\sigma_{\theta^{b}}^{2}\) is unknown and must be estimated. It is common to approximate \(IF^{2}(z;T_{1},\pi)\) with \(IF^{2}(z;T_{1},\pi_{n})\) and \(\hat{\sigma}_{\theta^{b}}^{2}(x)=\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}IF^{2}(z _{i};T_{1},\pi_{n})(x)\). This method is known as the infinitesimal jackknife variance estimator [97]. We provide the closed-form expression of the variance estimation \(\hat{\sigma}_{\theta^{b}}^{2}(x)\) in Appendix B that could be of theoretical interest. Yet in practice, the computation of \(\hat{\sigma}_{\theta^{b}}^{2}(x)\) requires the evaluation of the NTK Gram matrix and its inversion, which is computationally demanding for large \(n\) and thus not recommended for practical implementation on a large dataset. In the following, we provide two approaches that avoid the explicit estimation of the asymptotic variance in the infinitesimal jackknife approach, especially the Gram matrix inversion.
**PNC-enhanced batching.** We propose an approach for constructing a confidence interval that is especially useful for large datasets, termed _PNC-enhanced batching_. The pseudo-code is given in Algorithm 2. Originating from simulation analysis [94; 95; 44], the key idea of batching is to construct a self-normalizing \(t\)-statistic that "cancels out" the unknown variance, leading to a valid confidence interval without explicitly needing to compute the variance. It can be used to conduct inference on serially dependent simulation outputs where the standard error is difficult to compute analytically. Previous studies have demonstrated the effectiveness of batching for inference on Markov chain Monte Carlo [43; 37; 63] and on the so-called input uncertainty problem [45]. Its application to deep learning uncertainty quantification has not been explored in previous work, potentially due to the additional procedural variability. Integrating it with the PNC predictor, PNC-enhanced batching is very efficient and enjoys the asymptotically exact coverage of its confidence interval, as stated below.
**Theorem 3.6** (Exact coverage of PNC-enhanced batching confidence interval).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-\bar{s}(X)\). Then an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\) is \([\psi_{B}(x)-\frac{S_{B}(x)}{\sqrt{m^{\prime}}}q_{1-\frac{\alpha}{2}},\psi_{B}( x)+\frac{S_{B}(x)}{\sqrt{m^{\prime}}}q_{1-\frac{\alpha}{2}}]\) where \(q_{\alpha}\) is the \(\alpha\)-quantile of the t distribution \(t_{m^{\prime}-1}\) with degree of freedom \(m^{\prime}-1\)._
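A sketch of the batching construction follows; Algorithm 2's statistics \(\psi_{B}\) and \(S_{B}\) are taken here to be the standard batching batch-mean and batch-standard-deviation, an assumption consistent with the \(t_{m^{\prime}-1}\) quantile in Theorem 3.6. It reuses `pnc_predict` and `make_net` from the earlier PNC sketch, and `x0` is a single query input of shape `(1, d)`:

```python
import torch
from scipy import stats

def batching_ci(X, y, x0, m_prime, lam, alpha=0.05):
    """PNC-enhanced batching CI at x0: one PNC fit per disjoint batch,
    then a self-normalized t-interval with m' - 1 degrees of freedom."""
    batches = torch.arange(len(y)).chunk(m_prime)     # m' (near-)equal batches
    vals = [float(pnc_predict(make_net, X[b], y[b], lam)(x0)) for b in batches]
    psi = sum(vals) / m_prime                                       # psi_B(x0)
    S = (sum((v - psi) ** 2 for v in vals) / (m_prime - 1)) ** 0.5  # S_B(x0)
    q = stats.t.ppf(1 - alpha / 2, df=m_prime - 1)    # t_{m'-1} quantile
    half = q * S / m_prime ** 0.5
    return psi - half, psi + half
```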
**PNC-enhanced cheap bootstrap.** We propose an alternative approach for constructing a confidence interval that is also suitable for small datasets, termed _PNC-enhanced cheap bootstrap_. The pseudo-code is given in Algorithm 3. Cheap bootstrap [71; 72] is a modified bootstrap procedure with substantially less retraining effort than conventional bootstrap methods, via leveraging the independence between resample and original estimators. Note that our proposal is fundamentally different from the naive use of bootstrap or bagging when additional randomness appears [70; 76; 42], which mixes the procedural and data variability and does not directly provide confidence intervals. Like PNC-enhanced batching, PNC-enhanced cheap bootstrap also avoids the explicit estimation of the asymptotic variance. The difference between them is that PNC-enhanced batching divides the data into a small number of batches, and thus is suggested for large datasets, while PNC-enhanced cheap bootstrap re-selects the samples from the entire dataset, which makes it also suitable for small datasets. In terms of running time, when \(R=m^{\prime}-1\), PNC-enhanced cheap bootstrap and PNC-enhanced batching share the same number of network trainings, but since batching trains on subsets of the data, each individual network training in PNC-enhanced batching is faster than in PNC-enhanced cheap bootstrap. PNC-enhanced cheap bootstrap also enjoys the asymptotically exact coverage of its confidence interval, as stated below.
**Input:** Training dataset \(\mathcal{D}\) of size \(n\). The number of replications \(R\geq 1\).
**Procedure: 1.** Input \(\mathcal{D}\) into Algorithm 1 to output \(\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)\).
**2.** For each replication \(b\in[R]\), resample \(\mathcal{D}\), i.e., independently and uniformly sample with replacement from \(\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\)\(n\) times to obtain \(\mathcal{D}^{*b}=\{(x_{1}^{*b},y_{1}^{*b}),...,(x_{n}^{*b},y_{n}^{*b})\}\). Input \(\mathcal{D}^{*b}\) into Algorithm 1 to output \(\hat{h}_{n,\theta^{b}}^{*b}(x)-\hat{\phi}_{n,\theta^{b}}^{*b}(x)\).
**3.** Compute \(\psi_{C}(x)=\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)\),
and \(S_{C}(x)^{2}=\frac{1}{R}\sum_{b=1}^{R}\left(\hat{h}_{n,\theta^{b}}^{*b}(x)- \hat{\phi}_{n,\theta^{b}}^{*b}(x)-\psi_{C}(x)\right)^{2}.\)
**Output:** At point \(x\), output \(\psi_{C}(x)\) and \(S_{C}(x)^{2}\).
**Algorithm 3** PNC-Enhanced Cheap Bootstrap
**Theorem 3.7** (Exact coverage of PNC-enhanced cheap bootstrap confidence interval).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-\bar{s}(X)\). Then an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\) is \([\psi_{C}(x)-S_{C}(x)q_{1-\frac{\alpha}{2}},\psi_{C}(x)+S_{C}(x)q_{1-\frac{ \alpha}{2}}]\) where \(q_{\alpha}\) is the \(\alpha\)-quantile of the t distribution \(t_{R}\) with degree of freedom \(R\)._
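The corresponding sketch of Algorithm 3, again reusing `pnc_predict` and `make_net` from the PNC sketch; the interval uses the \(t_{R}\) quantile as in Theorem 3.7, and \(R\) can be as small as 1:

```python
import torch
from scipy import stats

def cheap_bootstrap_ci(X, y, x0, R, lam, alpha=0.05):
    """PNC-enhanced cheap bootstrap CI at x0: one PNC fit on the original
    data plus R PNC fits on resamples, i.e., 2(R + 1) network trainings."""
    n = len(y)
    psi = float(pnc_predict(make_net, X, y, lam)(x0))       # psi_C(x0)
    resample_vals = []
    for _ in range(R):
        b = torch.randint(0, n, (n,))                       # sample w/ replacement
        resample_vals.append(float(pnc_predict(make_net, X[b], y[b], lam)(x0)))
    S2 = sum((v - psi) ** 2 for v in resample_vals) / R     # S_C(x0)^2
    q = stats.t.ppf(1 - alpha / 2, df=R)                    # t_R quantile
    half = q * S2 ** 0.5
    return psi - half, psi + half
```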
## 4 Experiments
We conduct numerical experiments to demonstrate the effectiveness of our approaches. Our code will be publicly available. Our proposed approaches are evaluated on two tasks: 1) constructing confidence intervals and 2) removing procedural variability. With a known ground-truth regression function, training data are regenerated from the underlying synthetic data generative
process. According to the NTK parameterization in Section C, our base network is formed with two fully connected layers with \(n\times 32\) neurons in each hidden layer to ensure the network is sufficiently wide and over-parameterized. Detailed optimization specifications are described in Proposition C.3. Our synthetic dataset is generated with the following distribution: \(X\sim\text{Unif}([0,0.2]^{d})\) and \(Y\sim\sum_{i=1}^{d}\sin(X^{(i)})+\mathcal{N}(0,0.001^{2})\). The training set \(\mathcal{D}=\{(x_{i},y_{i}):i=1,...,n\}\) is formed by drawing i.i.d. samples of \((X,Y)\) from the above distribution with sample size \(n\). We consider multiple dimension settings \(d=1,2,4,8\) and data size settings \(n=128,256,512,1024\) to study the effects on different dimensionalities and data sizes. Additional experimental results are presented in Appendix E.
**Constructing confidence intervals.** We use \(x_{0}=(0.1,...,0.1)\) as the fixed test point for confidence interval construction. Let \(y_{0}=\sum_{i=1}^{d}\sin(0.1)\) be the ground-truth label for \(x_{0}\) without aleatoric noise. To evaluate the constructed confidence intervals, we repeat each experiment \(J=40\) times. In each repetition \(j\in[J]\), we generate a new training dataset from the same synthetic distribution and construct a new confidence interval \([L_{j}(x_{0}),U_{j}(x_{0})]\) with \(95\%\) or \(90\%\) confidence level, and then check the coverage rate (CR): \(\text{CR}=\frac{1}{J}\sum_{j=1}^{J}\mathbf{1}_{L_{j}(x_{0})\leq y_{0}\leq U_{j }(x_{0})}\). The primary evaluation of a confidence interval is whether its coverage rate is equal to or larger than the desired confidence level. In addition to CR, we also report the median point of the interval (MP): \(\text{MP}=\frac{1}{J}\sum_{j=1}^{J}\frac{1}{2}(U_{j}(x_{0})+L_{j}(x_{0}))\) and the interval width (IW): \(\text{IW}=\frac{1}{J}\sum_{j=1}^{J}(U_{j}(x_{0})-L_{j}(x_{0}))\). MP reflects the most likely point prediction for \(x_{0}\), while IW reflects the conservativeness of the confidence interval, as a wide interval can easily achieve a \(95\%\) coverage rate but is not practically preferable.
In the confidence interval construction tasks, we use the dropout-based Bayesian approximate inference (DropoutUQ) [40] as the baseline for comparison since it is the most widely used baseline in Bayesian uncertainty quantification. The implementation details are provided in Appendix E. Table 1 reports the CR, IW, and MP of the \(95\%\) and \(90\%\) confidence intervals from our proposals and the baseline. We have the following observations:
1) The CR values: In the majority of experiments, \(\text{CR}\geq 95\%\) for \(95\%\) confidence intervals and \(\text{CR}\geq 90\%\) for \(90\%\) confidence intervals, satisfying the requirement of confidence intervals, while in very few experiments the performance degrades. This occasional degradation is insignificant and can be potentially explained by the fact that the statistical guarantee of confidence intervals generated by PNC-enhanced batching and cheap bootstrap holds in the asymptotic sense. Overall, the CR for \(95\%\)/\(90\%\) confidence intervals is always above \(92.5\%\)/\(87.5\%\) for both proposed methods. In contrast, DropoutUQ does not have such statistical guarantees. The intervals from DropoutUQ cannot maintain a satisfactory coverage rate when the interval is narrow, although they have a similar or larger interval width than PNC-enhanced batching or cheap bootstrap. Moreover, we find that the
\begin{table}
\end{table}
Table 1: CR/IW of the \(95\%\) and \(90\%\) confidence intervals and MP from PNC-enhanced batching, PNC-enhanced cheap bootstrap, and DropoutUQ across dimensions \(d\in\{1,2,4,8\}\) and data sizes \(n\in\{128,256,512,1024\}\).
Moreover, we find that the dropout rate has a significant impact on the interval width of DropoutUQ, so this baseline demands additional tuning effort, whereas our approaches require no such tuning. These observations demonstrate the robustness and effectiveness of our proposals. 2) The IW values: As shown, the IW values decrease roughly at the rate \(n^{-\frac{1}{2}}\) as the training data size increases, which corroborates our theory (Section 3.3). Compared with PNC-enhanced cheap bootstrap, PNC-enhanced batching tends to generate wider intervals when the sample size is small, since it requires data splitting and its accuracy can therefore suffer on small datasets. Hence, we recommend PNC-enhanced batching for larger datasets and PNC-enhanced cheap bootstrap for smaller ones. Overall, the IW values are small for both approaches, indicating that our framework successfully generates narrow confidence intervals with high coverage rates. 3) The MP values: MP is highly consistent with the ground-truth label \(y_{0}\) in all experiments, showing that our confidence intervals accurately capture the ground-truth label on average across multiple problem settings.
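For concreteness, the following minimal sketch shows one way to compute the three reported metrics from repeated experiments. We assume here that CR is the fraction of intervals covering the target, IW the average interval width, and MP the average interval midpoint; the data and array names are illustrative, not the paper's evaluation code.

```python
import numpy as np

def interval_metrics(lower, upper, target):
    # lower, upper: shape (R,) confidence-interval endpoints over R repetitions
    cr = np.mean((lower <= target) & (target <= upper))  # coverage rate (CR)
    iw = np.mean(upper - lower)                          # mean interval width (IW)
    mp = np.mean((lower + upper) / 2.0)                  # mean interval midpoint (MP)
    return cr, iw, mp

rng = np.random.default_rng(0)
centers = 0.1 + 0.002 * rng.standard_normal(40)          # illustrative repetitions
cr, iw, mp = interval_metrics(centers - 0.003, centers + 0.003, target=0.1)
print(f"CR={cr:.3f}, IW={iw:.5f}, MP={mp:.5f}")
```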
**Removing procedural variability.** We compare our proposed PNC predictor (Algorithm 1) with the following approaches: a base network, that is, a standard network trained with a single random initialization, and the deep ensemble approach with different numbers (\(m\)) of networks in the ensemble [70]. For deep ensembles, we consider \(m=2\), which has a running time similar to our PNC (with one auxiliary network), and \(m=5\), which is widely used in previous work [70, 89]. All networks share the same hyperparameters and specifications. Table 2 reports the mean and standard deviation of the mean squared error (MSE) of our proposed PNC predictor and the baseline approaches over \(10\) repetitions. The MSE is computed as \(\text{MSE}(h):=\frac{1}{N_{te}}\sum_{i=1}^{N_{te}}(h(x^{\prime}_{i})-y^{\prime}_{i})^{2}\), where \((x^{\prime}_{i},y^{\prime}_{i}),i=1,...,N_{te}\) are i.i.d. test data independent of the training data. We set \(N_{te}=2048\) for test performance evaluation in all experiments. From Table 2, we observe that our PNC achieves notably smaller MSE than the baselines with similar computational costs (the base network and the deep ensemble with two networks) in all experiments. Moreover, it achieves better or comparable results to the deep ensemble with 5 networks, while the latter requires 2.5 times the running time of PNC. These results indicate that our proposed PNC matches the performance of deep ensembles at significantly reduced computational cost.
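As a rough illustration, the PyTorch sketch below captures the structure of the PNC idea as discussed here: a base network and one auxiliary network share the initialization \(\theta^{b}\); the auxiliary network is trained on the artificial labels \(\bar{s}(x_{i})\), and the PNC prediction is their difference. The architecture, optimizer, and the Monte Carlo estimate of \(\bar{s}\) are illustrative assumptions, not the exact specification of Algorithm 1.

```python
# Hedged sketch of the PNC structure; hyperparameters are illustrative.
import copy
import torch
import torch.nn as nn

def make_net(d):
    return nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, 1))

def train(net, x, y, epochs=500, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
    return net

torch.manual_seed(0)
d, n = 2, 256
x = torch.rand(n, d)
y = (x.sum(dim=1, keepdim=True)) ** 2 + 0.1 * torch.randn(n, 1)

base = make_net(d)
aux = copy.deepcopy(base)            # same initialization theta^b

# sbar(x): expected initial network output, approximated here by
# averaging freshly initialized networks (near zero under symmetric init).
with torch.no_grad():
    s_bar = torch.stack([make_net(d)(x) for _ in range(32)]).mean(dim=0)

h_hat = train(base, x, y)            # \hat{h}_{n, theta^b}: fit the data
phi_hat = train(aux, x, s_bar)       # \hat{\phi}_{n, theta^b}: fit sbar

with torch.no_grad():
    pnc = h_hat(x) - phi_hat(x)      # PNC prediction: cancels the
                                     # procedural noise from theta^b
```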
## 5 Concluding Remarks
In this study, we propose a novel epistemic uncertainty assessment framework for over-parameterized neural networks that offers simultaneous statistical coverage guarantees and low computational costs. Benefiting from our proposed PNC approach, our study shows promise in removing procedural uncertainty using only one auxiliary network. Integrated with suitable light-computation resampling methods, we provide two effective approaches to construct asymptotically exact-coverage confidence intervals using as few as two retrained networks across different problem settings. Our evaluation results corroborate our theory and show that our approach generates confidence intervals that are simultaneously narrow and attain satisfactory coverage.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline
MSE & One base network & PNC predictor & Deep ensemble (5 networks) & Deep ensemble (2 networks) \\ \hline
\(d=1\), \(n=128\) & \((6.77\pm 4.17)\times 10^{-5}\) & \(\mathbf{(3.11\pm 1.79)\times 10^{-5}}\) & \((3.58\pm 2.93)\times 10^{-5}\) & \((4.55\pm 2.87)\times 10^{-5}\) \\
\(d=1\), \(n=256\) & \((1.53\pm 1.58)\times 10^{-5}\) & \(\mathbf{(5.74\pm 3.66)\times 10^{-6}}\) & \((6.34\pm 4.35)\times 10^{-6}\) & \((1.05\pm 0.83)\times 10^{-5}\) \\
\(d=1\), \(n=512\) & \((3.97\pm 3.37)\times 10^{-6}\) & \(\mathbf{(1.85\pm 0.67)\times 10^{-6}}\) & \((2.19\pm 1.86)\times 10^{-6}\) & \((2.50\pm 3.12)\times 10^{-6}\) \\
\(d=1\), \(n=1024\) & \((1.58\pm 0.57)\times 10^{-6}\) & \((5.59\pm 2.30)\times 10^{-7}\) & \(\mathbf{(4.50\pm 3.01)\times 10^{-7}}\) & \((0.74\pm 1.01)\times 10^{-6}\) \\ \hline
\(d=2\), \(n=128\) & \((6.68\pm 2.74)\times 10^{-4}\) & \(\mathbf{(3.68\pm 1.84)\times 10^{-4}}\) & \((4.22\pm 1.74)\times 10^{-4}\) & \((6.28\pm 2.74)\times 10^{-4}\) \\
\(d=2\), \(n=256\) & \((2.38\pm 1.06)\times 10^{-4}\) & \(\mathbf{(6.22\pm 2.01)\times 10^{-5}}\) & \((8.86\pm 4.94)\times 10^{-5}\) & \((1.25\pm 0.67)\times 10^{-4}\) \\
\(d=2\), \(n=512\) & \((1.03\pm 0.98)\times 10^{-4}\) & \(\mathbf{(2.06\pm 0.72)\times 10^{-5}}\) & \((3.32\pm 1.10)\times 10^{-5}\) & \((5.11\pm 3.33)\times 10^{-5}\) \\
\(d=2\), \(n=1024\) & \((6.98\pm 4.77)\times 10^{-5}\) & \(\mathbf{(8.92\pm 5.69)\times 10^{-6}}\) & \((1.72\pm 0.77)\times 10^{-5}\) & \((5.75\pm 2.28)\times 10^{-5}\) \\ \hline
\(d=4\), \(n=128\) & \((2.11\pm 1.49)\times 10^{-3}\) & \(\mathbf{(1.18\pm 0.25)\times 10^{-3}}\) & \((1.53\pm 0.60)\times 10^{-3}\) & \((1.83\pm 0.80)\times 10^{-3}\) \\
\(d=4\), \(n=256\) & \((8.82\pm 3.26)\times 10^{-4}\) & \(\mathbf{(4.09\pm 0.85)\times 10^{-4}}\) & \((4.22\pm 2.01)\times 10^{-4}\) & \((5.47\pm 1.91)\times 10^{-4}\) \\
\(d=4\), \(n=512\) & \((5.35\pm 1.91)\times 10^{-4}\) & \(\mathbf{(1.92\pm 0.53)\times 10^{-4}}\) & \((2.88\pm 1.80)\times 10^{-4}\) & \((3.99\pm 1.87)\times 10^{-4}\) \\
\(d=4\), \(n=1024\) & \((2.23\pm 0.83)\times 10^{-4}\) & \(\mathbf{(4.22\pm 0.43)\times 10^{-5}}\) & \((8.50\pm 2.39)\times 10^{-5}\) & \((1.65\pm 0.57)\times 10^{-4}\) \\ \hline
\(d=8\), \(n=128\) & \((4.07\pm 1.04)\times 10^{-3}\) & \(\mathbf{(2.54\pm 0.29)\times 10^{-3}}\) & \((2.73\pm 0.86)\times 10^{-3}\) & \((3.27\pm 0.96)\times 10^{-3}\) \\
\(d=8\), \(n=256\) & \((2.13\pm 0.70)\times 10^{-3}\) & \(\mathbf{(1.05\pm 0.18)\times 10^{-3}}\) & \((1.34\pm 0.33)\times 10^{-3}\) & \((1.48\pm 0.38)\times 10^{-3}\) \\
\(d=8\), \(n=512\) & \((1.36\pm 0.34)\times 10^{-3}\) & \(\mathbf{(5.04\pm 0.70)\times 10^{-4}}\) & \((7.40\pm 1.35)\times 10^{-4}\) & \((1.08\pm 0.40)\times 10^{-3}\) \\
\(d=8\), \(n=1024\) & \((8.54\pm 2.23)\times 10^{-4}\) & \(\mathbf{(2.02\pm 0.24)\times 10^{-4}}\) & \((3.79\pm 1.01)\times 10^{-4}\) & \((5.91\pm 1.87)\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reducing procedural variability on synthetic datasets with different data sizes \(n=128,256,512\), \(1024\) and different data dimensions \(d=1,2,4,8\). The best results are in **bold**.
## References
* [1] Y. Abbasi-Yadkori, D. Pal, and C. Szepesvari. Improved algorithms for linear stochastic bandits. _Advances in Neural Information Processing Systems_, 24:2312-2320, 2011.
* [2] M. Abdar, F. Pourpanah, S. Hussain, D. Rezazadegan, L. Liu, M. Ghavamzadeh, P. Fieguth, X. Cao, A. Khosravi, U. R. Acharya, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. _Information Fusion_, 76:243-297, 2021.
* [3] A. Alaa and M. Van Der Schaar. Frequentist uncertainty in recurrent neural networks via blockwise influence functions. In _International Conference on Machine Learning_, pages 175-190. PMLR, 2020.
* [4] Z. Allen-Zhu, Y. Li, and Y. Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. _Advances in Neural Information Processing Systems_, 2019.
* [5] Z. Allen-Zhu, Y. Li, and Z. Song. A convergence theory for deep learning via overparameterization. In _International Conference on Machine Learning_, pages 242-252. PMLR, 2019.
* [6] Z. Allen-Zhu, Y. Li, and Z. Song. On the convergence rate of training recurrent neural networks. _Advances in Neural Information Processing Systems_, 32:6676-6688, 2019.
* [7] S. Arora, S. S. Du, W. Hu, Z. Li, R. R. Salakhutdinov, and R. Wang. On exact computation with an infinitely wide neural net. _Advances in Neural Information Processing Systems_, 32:8141-8150, 2019.
* [8] A. Ashukha, A. Lyzhov, D. Molchanov, and D. Vetrov. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. _arXiv preprint arXiv:2002.06470_, 2020.
* [9] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. _Machine Learning_, 47(2):235-256, 2002.
* [10] Y. Bai, S. Mei, H. Wang, and C. Xiong. Understanding the under-coverage bias in uncertainty estimation. _arXiv preprint arXiv:2106.05515_, 2021.
* [11] R. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani. The limits of distribution-free conditional predictive inference. _arXiv preprint arXiv:1903.04684_, 2019.
* [12] R. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani. Predictive inference with the jackknife+. _arXiv preprint arXiv:1905.02928_, 2019.
* [13] S. Basu, P. Pope, and S. Feizi. Influence functions in deep learning are fragile. _arXiv preprint arXiv:2006.14651_, 2020.
* [14] A. Berlinet and C. Thomas-Agnan. _Reproducing kernel Hilbert spaces in probability and statistics_. Springer Science & Business Media, 2011.
* [15] A. Bietti and J. Mairal. On the inductive bias of neural tangent kernels. _Advances in Neural Information Processing Systems_, 32, 2019.
* [16] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural network. In _International conference on machine learning_, pages 1613-1622. PMLR, 2015.
* [17] L. Breiman. Bagging predictors. _Machine learning_, 24:123-140, 1996.
* [18] L. Breiman. Random forests. _Machine learning_, 45:5-32, 2001.
* [19] Y. Cao and Q. Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. _Advances in Neural Information Processing Systems_, 32:10836-10846, 2019.
* [20] Y. Cao and Q. Gu. Generalization error bounds of gradient descent for learning over-parameterized deep relu networks. _Proceedings of the AAAI Conference on Artificial Intelligence_, 34(04):3349-3356, 2020.
* [21] E. D. Carvalho, R. Clark, A. Nicastro, and P. H. Kelly. Scalable uncertainty for computer vision with functional variational inference. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12003-12013, 2020.
* [22] H. Chen, Z. Huang, H. Lam, H. Qian, and H. Zhang. Learning prediction intervals for regression: Generalization and calibration. In _International Conference on Artificial Intelligence and Statistics_, 2021.
* [23] A. Christmann and R. Hable. On the consistency of the bootstrap approach for support vector machines and related kernel-based methods. _Empirical inference: Festschrift in honor of vladimir n. vapnik_, pages 231-244, 2013.
* [24] A. Christmann and I. Steinwart. Consistency and robustness of kernel-based regression in convex risk minimization. _Bernoulli_, 13(3):799-819, 2007.
* [25] A. Christmann and I. Steinwart. How svms can estimate quantiles and the median. _Advances in Neural Information Processing Systems_, 20:305-312, 2007.
* [26] F. Cucker and S. Smale. On the mathematical foundations of learning. _Bulletin of the American Mathematical Society_, 39(1):1-49, 2002.
* [27] G. Cybenko. Approximation by superpositions of a sigmoidal function. _Mathematics of Control, Signals and Systems_, 2(4):303-314, 1989.
* [28] N. Dalmasso, T. Pospisil, A. B. Lee, R. Izbicki, P. E. Freeman, and A. I. Malz. Conditional density estimation tools in python and r with applications to photometric redshifts and likelihood-free cosmological inference. _Astronomy and Computing_, 30:100362, 2020.
* [29] M. Debruyne, M. Hubert, and J. A. Suykens. Model selection in kernel based regression using the influence function. _Journal of Machine Learning Research_, 9(10), 2008.
* [30] S. Depeweg, J. M. Hernandez-Lobato, F. Doshi-Velez, and S. Udluft. Learning and policy search in stochastic dynamical systems with bayesian neural networks. _arXiv preprint arXiv:1605.07127_, 2016.
* [31] S. Depeweg, J.-M. Hernandez-Lobato, F. Doshi-Velez, and S. Udluft. Decomposition of uncertainty in bayesian deep learning for efficient and risk-sensitive learning. In _International Conference on Machine Learning_, pages 1184-1193. PMLR, 2018.
* [32] S. Du, J. Lee, H. Li, L. Wang, and X. Zhai. Gradient descent finds global minima of deep neural networks. In _International Conference on Machine Learning_, pages 1675-1685. PMLR, 2019.
* [33] S. S. Du, X. Zhai, B. Poczos, and A. Singh. Gradient descent provably optimizes over-parameterized neural networks. In _International Conference on Learning Representations_, 2018.
* [34] V. Dutordoir, H. Salimbeni, J. Hensman, and M. Deisenroth. Gaussian process conditional density estimation. In _Advances in Neural Information Processing Systems_, pages 2385-2395, 2018.
* [35] L. T. Fernholz. _Von Mises calculus for statistical functionals_, volume 19. Springer Science & Business Media, 2012.
* [36] R. A. Fisher. Inverse probability. _Mathematical Proceedings of the Cambridge Philosophical Society_, 26(4):528-535, 1930.
* [37] J. M. Flegal, G. L. Jones, et al. Batch means and spectral variance estimators in Markov chain Monte Carlo. _The Annals of Statistics_, 38(2):1034-1070, 2010.
* [38] S. Fort, H. Hu, and B. Lakshminarayanan. Deep ensembles: A loss landscape perspective. _arXiv preprint arXiv:1912.02757_, 2019.
* [39] P. E. Freeman, R. Izbicki, and A. B. Lee. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting. _Monthly Notices of the Royal Astronomical Society_, 468(4):4556-4565, 2017.
* [40] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In _International Conference on Machine Learning_, pages 1050-1059, 2016.
* [41] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. _Bayesian data analysis_. CRC press, 2013.
* [42] P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. _Machine learning_, 63:3-42, 2006.
* [43] C. J. Geyer. Practical Markov chain Monte Carlo. _Statistical Science_, 7(4):473-483, 1992.
* [44] P. W. Glynn and D. L. Iglehart. Simulation output analysis using standardized time series. _Mathematics of Operations Research_, 15(1):1-16, 1990.
* [45] P. W. Glynn and H. Lam. Constructing simulation output intervals under input uncertainty via data sectioning. In _2018 Winter Simulation Conference (WSC)_, pages 1551-1562. IEEE, 2018.
* [46] A. Graves. Practical variational inference for neural networks. _Advances in neural information processing systems_, 24, 2011.
* [47] R. Hable. Asymptotic normality of support vector machine variants and other regularized kernel methods. _Journal of Multivariate Analysis_, 106:92-117, 2012.
* [48] F. R. Hampel. The influence curve and its role in robust estimation. _Journal of the American Statistical Association_, 69(346):383-393, 1974.
* [49] F. R. Hampel. Potential surprises. _International Symposium on Imprecise Probability: Theories and Applications (ISIPTA)_, page 209, 2011.
* [50] B. Hanin and M. Sellke. Approximating continuous functions by relu nets of minimal width. _arXiv preprint arXiv:1710.11278_, 2017.
* [51] B. He, B. Lakshminarayanan, and Y. W. Teh. Bayesian deep ensembles via the neural tangent kernel. _arXiv preprint arXiv:2007.05864_, 2020.
* [52] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 1026-1034, 2015.
* [53] J. M. Hernandez-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In _International Conference on Machine Learning_, pages 1861-1869, 2015.
* [54] M. P. Holmes, A. G. Gray, and C. L. Isbell Jr. Fast nonparametric conditional density estimation. In _Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence_, pages 175-182, 2007.
* [55] K. Hornik. Approximation capabilities of multilayer feedforward networks. _Neural Networks_, 4(2):251-257, 1991.
* [56] W. Hu, Z. Li, and D. Yu. Simple and effective regularization methods for training on noisily labeled data with generalization guarantee. In _International Conference on Learning Representations_, 2020.
* [57] Z. Huang, H. Lam, and H. Zhang. Quantifying epistemic uncertainty in deep learning. _arXiv preprint arXiv:2110.12122_, 2021.
* [58] E. Hullermeier and W. Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. _Machine Learning_, 110(3):457-506, 2021.
* [59] R. Izbicki and A. B. Lee. Nonparametric conditional density estimation in a high-dimensional regression setting. _Journal of Computational and Graphical Statistics_, 25(4):1297-1316, 2016.
* [60] R. Izbicki, A. B. Lee, and P. E. Freeman. Photo-\(z\) estimation: An example of nonparametric conditional density estimation under selection bias. _Ann. Appl. Stat._, 11(2):698-724, 06 2017.
* [61] A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in neural networks. _arXiv preprint arXiv:1806.07572_, 2018.
* [62] L. Jaeckel. The infinitesimal jackknife. memorandum. Technical report, MM 72-1215-11, Bell Lab. Murray Hill, NJ, 1972.
* [63] G. L. Jones, M. Haran, B. S. Caffo, and R. Neath. Fixed-width output analysis for Markov chain Monte Carlo. _Journal of the American Statistical Association_, 101(476):1537-1547, 2006.
* [64] A. Kendall and Y. Gal. What uncertainties do we need in bayesian deep learning for computer vision? _arXiv preprint arXiv:1703.04977_, 2017.
* [65] A. Khosravi, S. Nahavandi, D. Creighton, and A. F. Atiya. Lower upper bound estimation method for construction of neural network-based prediction intervals. _IEEE Transactions on Neural Networks_, 22(3):337-346, 2010.
* [66] A. Khosravi, S. Nahavandi, D. Creighton, and A. F. Atiya. Comprehensive review of neural network-based prediction intervals and new advances. _IEEE Transactions on Neural Networks_, 22(9):1341-1356, 2011.
* [67] R. Koenker and K. F. Hallock. Quantile regression. _Journal of Economic Perspectives_, 15(4):143-156, 2001.
* [68] P. W. Koh, K.-S. Ang, H. H. Teo, and P. Liang. On the accuracy of influence functions for measuring group effects. _arXiv preprint arXiv:1905.13289_, 2019.
* [69] P. W. Koh and P. Liang. Understanding black-box predictions via influence functions. In _International Conference on Machine Learning_, pages 1885-1894. PMLR, 2017.
* [70] B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In _Advances in Neural Information Processing Systems_, pages 6402-6413, 2017.
* [71] H. Lam. A cheap bootstrap method for fast inference. _arXiv preprint arXiv:2202.00090_, 2022.
* [72] H. Lam and Z. Liu. Bootstrap in high dimension with low computation. _International Conference on Machine Learning (ICML)_, 2023.
* [73] H. Lam and H. Zhang. Doubly robust stein-kernelized monte carlo estimator: Simultaneous bias-variance reduction and supercanonical convergence. _Journal of Machine Learning Research_, 24(85):1-58, 2023.
* [74] J. Lee, Y. Bahri, R. Novak, S. S. Schoenholz, J. Pennington, and J. Sohl-Dickstein. Deep neural networks as gaussian processes. _arXiv preprint arXiv:1711.00165_, 2017.
* [75] J. Lee, L. Xiao, S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. _Advances in Neural Information Processing Systems_, 32:8572-8583, 2019.
* [76] S. Lee, S. Purushwalkam, M. Cogswell, D. Crandall, and D. Batra. Why m heads are better than one: Training a diverse ensemble of deep networks. _arXiv preprint arXiv:1511.06314_, 2015.
* [77] J. Lei, M. G'Sell, A. Rinaldo, R. J. Tibshirani, and L. Wasserman. Distribution-free predictive inference for regression. _Journal of the American Statistical Association_, 113(523):1094-1111, 2018.
* [78] J. Lei, A. Rinaldo, and L. Wasserman. A conformal prediction approach to explore functional data. _Annals of Mathematics and Artificial Intelligence_, 74(1-2):29-43, 2015.
* [79] J. Lei and L. Wasserman. Distribution-free prediction bands for non-parametric regression. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 76(1):71-96, 2014.
* [80] Y. Li and Y. Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. _Advances in Neural Information Processing Systems_, 2018.
* [81] Z. Li, R. Wang, D. Yu, S. S. Du, W. Hu, R. Salakhutdinov, and S. Arora. Enhanced convolutional neural tangent kernels. _arXiv preprint arXiv:1911.00809_, 2019.
* [82] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. _Information and computation_, 108(2):212-261, 1994.
* [83] Z. Lu, H. Pu, F. Wang, Z. Hu, and L. Wang. The expressive power of neural networks: A view from the width. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_, pages 6232-6240, 2017.
* [84] N. Meinshausen. Quantile regression forests. _Journal of Machine Learning Research_, 7(Jun):983-999, 2006.
* [85] R. Michelmore, M. Kwiatkowska, and Y. Gal. Evaluating uncertainty quantification in end-to-end autonomous driving control. _arXiv preprint arXiv:1811.06817_, 2018.
* [86] R. Michelmore, M. Wicker, L. Laurenti, L. Cardelli, Y. Gal, and M. Kwiatkowska. Uncertainty quantification with statistical guarantees in end-to-end autonomous driving control. In _2020 IEEE international conference on robotics and automation (ICRA)_, pages 7344-7350. IEEE, 2020.
* [87] M. Mirza and S. Osindero. Conditional generative adversarial nets. _arXiv preprint arXiv:1411.1784_, 2014.
* [88] M. Mohri, A. Rostamizadeh, and A. Talwalkar. _Foundations of machine learning_. MIT press, 2018.
* [89] T. Pearce, M. Zaki, A. Brintrup, and A. Neely. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. In _International Conference on Machine Learning, PMLR: Volume 80_, 2018.
* [90] K. Posch and J. Pilz. Correlated parameters to accurately measure uncertainty in deep neural networks. _IEEE Transactions on Neural Networks and Learning Systems_, 32(3):1037-1051, 2020.
* [91] Y. Ren, J. Zhu, J. Li, and Y. Luo. Conditional generative moment-matching networks. _Advances in Neural Information Processing Systems_, 29, 2016.
* [92] Y. Romano, E. Patterson, and E. Candes. Conformalized quantile regression. In _Advances in Neural Information Processing Systems_, pages 3543-3553, 2019.
* [93] N. Rosenfeld, Y. Mansour, and E. Yom-Tov. Discriminative learning of prediction intervals. In _International Conference on Artificial Intelligence and Statistics_, pages 347-355, 2018.
* [94] B. Schmeiser. Batch size effects in the analysis of simulation output. _Operations Research_, 30(3):556-568, 1982.
* [95] L. Schruben. Confidence interval estimation using standardized time series. _Operations Research_, 31(6):1090-1108, 1983.
* [96] R. Senge, S. Bosner, K. Dembczynski, J. Haasenritter, O. Hirsch, N. Donner-Banzhoff, and E. Hullermeier. Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty. _Information Sciences_, 255:16-29, 2014.
* [97] J. Shao and D. Tu. _The jackknife and bootstrap_. Springer Science & Business Media, 2012.
* [98] A. Siddhant and Z. C. Lipton. Deep bayesian active learning for natural language processing: Results of a large-scale empirical study. _arXiv preprint arXiv:1808.05697_, 2018.
* [99] S. Smale and D.-X. Zhou. Shannon sampling and function reconstruction from point values. _Bulletin of the American Mathematical Society_, 41(3):279-306, 2004.
* [100] S. Smale and D.-X. Zhou. Shannon sampling II: Connections to learning theory. _Applied and Computational Harmonic Analysis_, 19(3):285-302, 2005.
* [101] S. Smale and D.-X. Zhou. Learning theory estimates via integral operators and their approximations. _Constructive Approximation_, 26(2):153-172, 2007.
* [102] I. Steinwart, A. Christmann, et al. Estimating conditional quantiles with the help of the pinball loss. _Bernoulli_, 17(1):211-225, 2011.
* [103] H. Sun and Q. Wu. Application of integral operator for regularized least-square regression. _Mathematical and Computer Modelling_, 49(1-2):276-285, 2009.
* [104] A. W. Van der Vaart. _Asymptotic statistics_, volume 3. Cambridge university press, 2000.
* [105] V. Vovk, A. Gammerman, and G. Shafer. _Algorithmic learning in a random world_. Springer Science & Business Media, 2005.
* [106] Y. Xiao and W. Y. Wang. Quantifying uncertainties in natural language processing tasks. In _Proceedings of the AAAI conference on artificial intelligence_, volume 33, pages 7322-7329, 2019.
* [107] G. Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. _arXiv preprint arXiv:1902.04760_, 2019.
* [108] G. Yang. Tensor programs ii: Neural tangent kernel for any architecture. _arXiv preprint arXiv:2006.14548_, 2020.
* [109] H. Zhang, J. Zimmerman, D. Nettleton, and D. J. Nordman. Random forest prediction intervals. _The American Statistician_, pages 1-15, 2019.
* [110] Y. Zhang, Z.-Q. J. Xu, T. Luo, and Z. Ma. A type of generalization error induced by initialization in deep neural networks. In _Mathematical and Scientific Machine Learning_, pages 144-164. PMLR, 2020.
* [111] D. Zhou, L. Li, and Q. Gu. Neural contextual bandits with ucb-based exploration. In _International Conference on Machine Learning_, pages 11492-11502. PMLR, 2020.
* [112] X. Zhou, Y. Jiao, J. Liu, and J. Huang. A deep generative approach to conditional sampling. _Journal of the American Statistical Association_, pages 1-12, 2022.
* [113] D. Zou, Y. Cao, D. Zhou, and Q. Gu. Gradient descent optimizes over-parameterized deep relu networks. _Machine Learning_, 109(3):467-492, 2020.
## Appendices
We provide further results and discussions in this supplementary material. Section A discusses aleatoric uncertainty and predictive uncertainty. Section B discusses statistical inference for kernel-based regression and, in particular, the asymptotic normality of kernel-based regression. Section C discusses the training of linearized neural networks based on NTK theory. In particular, we prove Proposition 3.1, which states that the shifted kernel ridge regressor using NTK with a shift from an initial function \(s_{\theta^{b}}\) is exactly the linearized neural network regressor that starts from the initial network \(s_{\theta^{b}}\). Section D presents the proofs for results in the paper. Section E presents experimental details and more experimental results. Section F provides additional methodology and results, extending our framework to semi-supervised learning by introducing the independent PNC predictor.
## Appendix A Other Types of Uncertainty
Back in 1930, [36] proposed a formal distinction between aleatoric and epistemic uncertainty [49]; in modern machine learning, their distinction and connection were further investigated in [96] and then extended to deep learning models [64, 31]. In [89], the difference between procedural and data variability in epistemic uncertainty was described intuitively; however, no rigorous definition or estimation method was provided. For completeness, we present additional discussion on other types of uncertainty: Section A.1 covers aleatoric uncertainty and Section A.2 covers predictive uncertainty.
### Aleatoric Uncertainty
We note that if we could remove all epistemic uncertainty, i.e., \(\text{UQ}_{EU}=0\), the best predictor we could obtain is the Bayes predictor \(h_{B}^{*}\). However, \(h_{B}^{*}\), as a point estimator, cannot capture the randomness in \(\pi_{Y|X}\) unless \(\pi_{Y|X}\) is a point-mass distribution.
The easiest way to think of aleatoric uncertainty is that it is captured by \(\pi_{Y|X}\). At the level of the realized response or label value, this uncertainty is represented by
\[\text{UQ}_{AU}=y-h_{B}^{*}(x)\]
If the connection between \(X\) and \(Y\) is non-deterministic, the description of a new prediction problem involves a conditional probability distribution \(\pi_{Y|X}\). Standard neural network predictors provide only a single output \(y\). Thus, even with full knowledge of the distribution \(\pi\), the uncertainty in the prediction of a single output \(y\) remains. This uncertainty cannot be removed by better modeling or more data.
There are multiple works that aim to estimate aleatoric uncertainty, i.e., to learn \(\pi_{Y|X}\). For instance, conditional quantile regression aims to learn each quantile of the distribution \(\pi_{Y|X}\) [67, 84, 102]. Conditional density estimation aims to approximately describe the density of \(\pi_{Y|X}\) [54, 34, 59, 28, 39, 60]. However, we remark that these approaches also face their own epistemic uncertainty in the estimation. See [25, 102, 10] for recent studies on the epistemic bias in quantile regression.
### Predictive Uncertainty
Existing work on uncertainty measurement in deep learning models mainly focuses on _prediction sets_ (_predictive uncertainty_), which captures the sum of epistemic and aleatoric uncertainties [89, 92, 12, 3, 22].
In certain scenarios, it is not necessary to estimate each uncertainty separately, and the distinction between aleatoric and epistemic uncertainty might appear less significant. The user may only be concerned with the overall uncertainty of the prediction, which is called predictive uncertainty and can be thought of as the sum of the epistemic and aleatoric uncertainties.
The most common way to quantify predictive uncertainty is the _prediction set_: We aim to find a map \(\hat{C}:\mathcal{X}\to 2^{\mathcal{Y}}\) which maps an input to a subset of the output space so that for each test point \(x_{0}\in\mathcal{X}\), the prediction set \(\hat{C}(x_{0})\) is likely to cover the true outcome \(y_{0}\)[105, 11, 12, 79, 78, 77]. This
prediction set is also called a prediction interval in the case of regression [65, 66]. The prediction set communicates predictive uncertainty via a statistical guarantee on the marginal coverage, i.e.,
\[\mathbb{P}(y_{0}\in\hat{C}(x_{0}))\geq 1-\delta\]
for a small threshold \(\delta>0\), where the probability is taken with respect to both the training data \(\mathcal{D}_{tr}\) (epistemic uncertainty) for learning \(\hat{C}\) and the test data \((x_{0},y_{0})\) (aleatoric uncertainty). It is tempting to seek a statistical guarantee involving only aleatoric uncertainty by conditioning on the prediction set, i.e.,
\[\mathbb{P}(y_{0}\in\hat{C}(x_{0})|\hat{C})\geq 1-\delta. \tag{9}\]
However, this guarantee is in general very hard to achieve in the finite-sample case. Even asymptotically, (9) is not easy to achieve unless we impose a very simple structure on the data distribution [93, 109]. A recent study shows that (9) can hold in the finite-sample sense if one can leverage a set of validation data [22].
## Appendix B Statistical Inference for Kernel-Based Regression
### Classical Statistical Learning Framework and Kernel Ridge Regression
Following Section 2, we assume that the input-output pair \((X,Y)\) is a random vector following an unknown probability distribution \(\pi\) on \(\mathcal{X}\times\mathcal{Y}\), where \(X\in\mathcal{X}\subset\mathbb{R}^{d}\) is an input and \(Y\in\mathcal{Y}\subset\mathbb{R}\) is the corresponding output. Let \(\pi_{X}(x)\) denote the marginal distribution of \(X\) and \(\pi_{Y|X}(y|x)\) the conditional distribution of \(Y\) given \(X\).
Suppose a learner has access to a set of training data \(\mathcal{D}=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})\}\), which is assumed to be independent and identically distributed (i.i.d.) according to the data distribution \(\pi\). Let \(\mathbf{x}=(x_{1},...,x_{n})^{T}\) and \(\mathbf{y}=(y_{1},...,y_{n})^{T}\) for short. Let \(\pi_{n}\) or \(\pi_{\mathcal{D}}\) be the empirical distribution associated with the training data \(\mathcal{D}\):
\[\pi_{\mathcal{D}}=\pi_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{(x_{i},y_{i})}.\]
The goal of the learner is to obtain a function \(h:\mathcal{X}\rightarrow\mathcal{Y}\) such that \(h(x)\) is a "good" predictor for the response \(y\) if \(X=x\) is observed for any \(x\in\mathcal{X}\). This is typically done by minimizing the following (ground-truth) _population risk_
\[R_{\pi}(h):=\mathbb{E}_{(X,Y)\sim\pi}[\mathcal{L}(h(X),Y)]\]
where \(\mathcal{L}:\mathcal{Y}\times\mathbb{R}\rightarrow\mathbb{R}\) is the _loss function_. For instance, \(\mathcal{L}\) can be the square error in regression or cross-entropy loss in classification. If \(h\) is allowed to be any possible functions, the best predictions in the sense of minimizing the risk are described by the _Bayes predictor_\(h_{B}^{*}\)[88; 58]:
\[h_{B}^{*}(X):=\operatorname*{arg\,min}_{\hat{y}\in\mathcal{Y}}\,\mathbb{E}_{Y\sim\pi_{Y|X}}[\mathcal{L}(\hat{y},Y)|X].\]
Note that \(h_{B}^{*}\) cannot be obtained in practice, since the conditional distribution \(\pi_{Y|X}(y|x)\) is unknown. In particular, the least squares loss \(\mathcal{L}(h(X),Y)=(h(X)-Y)^{2}\) yields the _(ground-truth) regression function_ defined by
\[g_{\pi}^{*}(x):=\mathbb{E}_{(X,Y)\sim\pi}[Y|X=x].\]
As the ground-truth distribution \(\pi\) is unknown, we can neither compute nor minimize the population risk \(R_{\pi}(h)\) directly. We define the risk under a general distribution \(P\),
\[R_{P}(h):=\mathbb{E}_{(X,Y)\sim P}[\mathcal{L}(h(X),Y)].\]
In particular, for \(P=\pi_{n}\) (the empirical distribution associated with the training data \(\mathcal{D}\)), an _empirical risk_ is derived:
\[R_{\pi_{n}}(h):=\mathbb{E}_{(X,Y)\sim\pi_{n}}[\mathcal{L}(h(X),Y)]=\frac{1}{n }\sum_{i=1}^{n}\mathcal{L}(h(x_{i}),y_{i})\]
In practice, minimizing the risk over \(h\) is restricted to a certain function class. Let \(\mathcal{H}\) be a hypothesis class, which is a set of functions \(\{h:\mathcal{X}\to\mathcal{Y}|h\in\mathcal{H}\}\). For instance, 1) \(\mathcal{H}\) could be a nonparametric class such as a _reproducing kernel Hilbert space (RKHS)_; the resulting method is known as _kernel-based regression_ (see below). 2) \(\mathcal{H}\) could be a parametric class \(\{h_{\theta}:\mathcal{X}\to\mathcal{Y}|\theta\in\Theta\}\) where \(\theta\) is the parameter, and \(\Theta\) is the set of all possible parameters, e.g., the linear coefficients in a linear regression model, or the set of all network parameters in a neural network model.
In classical statistical learning, one is interested in finding a hypothesis \(g_{\pi}\in\mathcal{H}\) that minimizes the population risk
\[g_{\pi}:=\operatorname*{arg\,min}_{h\in\mathcal{H}}\,R_{\pi}(h). \tag{10}\]
which is called the true risk minimizer. We remark that \(g_{\pi}\) is the best choice in the sense of minimizing the risk within the hypothesis set \(\mathcal{H}\), and different choices of \(\mathcal{H}\) may lead to different \(g_{\pi}\). As \(\pi\) is unknown, the learner may instead minimize the empirical risk:
\[g_{\pi_{n}}:=\operatorname*{arg\,min}_{h\in\mathcal{H}}\,R_{\pi_{n}}(h) \tag{11}\]
which is called the empirical risk minimizer. More generally, to avoid overfitting in the finite sample regime, the learner may consider a regularized empirical risk minimization problem:
\[g_{\pi_{n},\lambda_{n}}:=\operatorname*{arg\,min}_{h\in\mathcal{H}}\,R_{\pi_{ n}}(h)+\lambda_{n}\|h\|_{\mathcal{H}}^{2} \tag{12}\]
and its corresponding population problem
\[g_{\pi,\lambda_{0}}:=\operatorname*{arg\,min}_{h\in\mathcal{H}}\,R_{\pi}(h)+ \lambda_{0}\|h\|_{\mathcal{H}}^{2} \tag{13}\]
which are called the empirical regularized risk minimizer and the true regularized risk minimizer, respectively. Here, \(\lambda_{n}\geq 0\) is the regularization parameter, which may depend on the data size \(n\), and we assume it has a limit \(\lambda_{0}=\lim_{n\to\infty}\lambda_{n}\). In general, the target \(g_{\pi}\) is not equal to \(g_{\pi_{n},\lambda_{n}}\) or \(g_{\pi_{n}}\) (we omit \(0\) in the subscript if \(\lambda=0\)).
The above framework fits classical machine learning approaches such as linear regression or kernel-based convex regression (such as kernel ridge regression), since (12) as well as (13) has a unique solution in this setting. However, this framework is not suitable for deep learning: the empirical (regularized) risk minimizer \(g_{\pi_{n},\lambda_{n}}\) or \(g_{\pi_{n}}\) is neither unique nor precisely obtainable due to the non-convex nature of neural networks. Therefore, gradient-based methods in deep learning can only find an approximate solution to \(g_{\pi_{n},\lambda_{n}}\) or \(g_{\pi_{n}}\), which is subject to procedural variability.
**Kernel ridge regression.** Next, we apply the above framework to kernel ridge regression and review some basic existing results about kernel ridge regression, which can be found, e.g., in [14, 73, 99, 101, 103].
The kernel-based regression means that the hypothesis class \(\mathcal{H}\) in statistical learning is chosen to be an RKHS. Formally, a Hilbert space \(\mathcal{H}\) consisting of functions \(h:\mathcal{X}\to\mathbb{R}\) with an inner product \(\langle\cdot,\cdot\rangle:\mathcal{H}\times\mathcal{H}\to\mathbb{R}\) is a RKHS if there exists a symmetric positive definite function \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), called a (reproducing) kernel, such that for all \(x\in\mathcal{X}\), we have \(k(\cdot,x)\in\mathcal{H}\) and for all \(x\in\mathcal{X}\) and \(h\in\mathcal{H}\), we have \(h(x)=\langle h(\cdot),k(\cdot,x)\rangle\). We use the above \(k\) to denote the kernel associated with \(\mathcal{H}\). Note that for any symmetric positive definite function \(k\), we can naturally construct, from \(k\), an RKHS \(\mathcal{H}\) whose kernel is exactly \(k\)[14].
When \(\mathcal{H}\) is a RKHS, \(g_{\pi_{n},\lambda_{n}}\) in (12) can be efficiently computed for a number of convex loss functions \(\mathcal{L}\). In particular, for the least squares loss \(\mathcal{L}(h(X),Y)=(h(X)-Y)^{2}\), the resulting problem (12) is well known as the kernel ridge regression, and there exists a closed-form expression for both \(g_{\pi_{n},\lambda_{n}}\) and \(g_{\pi,\lambda_{0}}\), as we discuss below.
Formally, the kernel ridge regression problem is given by
\[g_{\pi_{n},\lambda_{n}}(x):=\operatorname*{arg\,min}_{g\in\mathcal{H}}\left\{ \frac{1}{n}\sum_{j=1}^{n}(y_{j}-g(x_{j}))^{2}+\lambda_{n}\|g\|_{\mathcal{H}}^ {2}\right\}\]
where \(\lambda_{n}>0\) is a regularization hyperparameter that may depend on the cardinality of the training data set. There is a closed-form formula for its solution, as stated below.
**Proposition B.1**.: _Let_
\[\mathbf{k}(\mathbf{x},\mathbf{x})=(k(x_{i},x_{j}))_{n\times n}\in\mathbb{R}^{n\times n}\]
_be the kernel Gram matrix and_
\[\mathbf{k}(x,\mathbf{x})=(k(x,x_{1}),\cdots,k(x,x_{n}))^{T}.\]
_Then the (unique) kernel ridge regression solution is given as_
\[g_{\pi_{n},\lambda_{n}}(x)=\mathbf{k}(x,\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_ {n}n\mathbf{I})^{-1}\mathbf{y}\]
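As an illustration of Proposition B.1, here is a minimal NumPy sketch of the closed-form solution; the RBF kernel, data, and hyperparameters are illustrative choices, not those used in the paper.

```python
# Minimal sketch of the closed-form kernel ridge regression solution.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit_predict(X, y, X_test, lam):
    n = X.shape[0]
    K = rbf_kernel(X, X)                                  # Gram matrix k(x, x)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)   # (K + lambda*n*I)^{-1} y
    return rbf_kernel(X_test, X) @ alpha                  # k(x, x)^T alpha at test points

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(128, 2))
y = np.sin(X.sum(axis=1)) + 0.1 * rng.normal(size=128)
X_test = rng.uniform(-1, 1, size=(5, 2))
print(krr_fit_predict(X, y, X_test, lam=1e-3))
```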
It is worth mentioning the linearity of kernel ridge regression: When we have two outputs \(\mathbf{y}=(y_{1},...,y_{n})^{T}\) and \(\mathbf{y}^{\prime}=(y^{\prime}_{1},...,y^{\prime}_{n})^{T}\), we have
\[\mathbf{k}(x,\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}(\mathbf{y}+\mathbf{y}^{\prime})=\mathbf{k}(x,\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}\mathbf{y}+\mathbf{k}(x,\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}\mathbf{y}^{\prime}\]
which means the solution to
\[\operatorname*{arg\,min}_{g\in\mathcal{H}}\left\{\frac{1}{n}\sum_{j=1}^{n}(y_ {j}+y^{\prime}_{j}-g(x_{j}))^{2}+\lambda_{n}\|g\|_{\mathcal{H}}^{2}\right\}\]
is the summation of the solution to
\[\operatorname*{arg\,min}_{g\in\mathcal{H}}\left\{\frac{1}{n}\sum_{j=1}^{n}(y_ {j}-g(x_{j}))^{2}+\lambda_{n}\|g\|_{\mathcal{H}}^{2}\right\}\]
and the solution to
\[\operatorname*{arg\,min}_{g\in\mathcal{H}}\left\{\frac{1}{n}\sum_{j=1}^{n}(y^{ \prime}_{j}-g(x_{j}))^{2}+\lambda_{n}\|g\|_{\mathcal{H}}^{2}\right\}.\]
Define \(\mathcal{O}_{k}:L^{2}(\pi_{X})\to L^{2}(\pi_{X})\) as the kernel integral operator
\[(\mathcal{O}_{k}g)(x):=\int_{\mathcal{X}}k(x,x^{\prime})g(x^{\prime})\pi_{X}( x^{\prime})dx^{\prime},\;x\in\mathcal{X},\;g\in L^{2}(\pi_{X}).\]
where \(L^{2}(\pi_{X}):=\{f:\int_{\mathcal{X}}f(x)^{2}\pi_{X}(x)dx<\infty\}\).
[103] shows that \(\mathcal{O}_{k}\) is a compact and positive self-adjoint linear operator on \(L^{2}(\pi_{X})\). Since \(\mathcal{O}_{k}\) is positive (\(\mathcal{O}_{k}\geq 0\)), the operator \(\mathcal{O}_{k}+\lambda_{0}I\) is strictly positive for any \(\lambda_{0}>0\), and thus its inverse \((\mathcal{O}_{k}+\lambda_{0}I)^{-1}\) exists and is unique. Note that \((\mathcal{O}_{k}+\lambda_{0}I)^{-1}\) is also a linear operator on \(L^{2}(\pi_{X})\).
Next, consider the population risk minimization problem corresponding to \(g_{\pi_{n},\lambda_{n}}\) as follows:
\[g_{\lambda_{0}}:=\operatorname*{arg\,min}_{g\in\mathcal{H}}\left\{\mathbb{E}_ {\pi}[(Y-g(X))^{2}]+\lambda_{0}\|g\|_{\mathcal{H}}^{2}\right\}.\]
It is easy to see that this problem is equivalent to
\[g_{\lambda_{0}}:=\operatorname*{arg\,min}_{g\in\mathcal{H}}\left\{\mathbb{E}_ {\pi}[(g_{\pi}^{*}(X)-g(X))^{2}]+\lambda_{0}\|g\|_{\mathcal{H}}^{2}\right\}.\]
where \(g_{\pi}^{*}(x)=\mathbb{E}_{(X,Y)\sim\pi}[Y|X=x]\) is the (ground-truth) regression function. This problem has the following explicit closed-form expression of the solution (a proof can be found in [26]):
**Proposition B.2**.: _The solution of \(g_{\lambda_{0}}\) is given as \(g_{\lambda_{0}}=(\mathcal{O}_{k}+\lambda_{0}I)^{-1}\mathcal{O}_{k}g_{\pi}^{*}\)._
The linearity of kernel ridge regression also holds for the population-level solution:
\[(g+g^{\prime})_{\lambda_{0}}=(\mathcal{O}_{k}+\lambda_{0}I)^{-1}\mathcal{O}_{k}(g+g^{\prime})=(\mathcal{O}_{k}+\lambda_{0}I)^{-1}\mathcal{O}_{k}g+(\mathcal{O}_{k}+\lambda_{0}I)^{-1}\mathcal{O}_{k}g^{\prime}=g_{\lambda_{0}}+g^{\prime}_{\lambda_{0}}\]
for any two functions \(g,g^{\prime}\in L^{2}(\pi_{X})\) since both \((\mathcal{O}_{k}+\lambda_{0}I)^{-1}\) and \(\mathcal{O}_{k}\) are linear operators on \(L^{2}(\pi_{X})\).
### Asymptotic Normality of Kernel-Based Regression
In this section, we review existing results on the asymptotic normality of kernel-based regression, which is established using the influence function concept [24, 23, 47].
Let \(\mathcal{H}\) be a generic RKHS with the associated kernel \(k(x,x^{\prime})\). Let \(\|\cdot\|_{\mathcal{H}}\) denote the norm on \(\mathcal{H}\). Note that the feature map \(k_{x}=k(x,\cdot)\) is a function in \(\mathcal{H}\). We consider the connection between the true regularized risk minimizer \(g_{\pi,\lambda_{0}}\) and the empirical regularized risk minimizer \(g_{\pi_{n},\lambda_{n}}\) introduced in (13) and (12) when \(\mathcal{H}\) is an RKHS.
The asymptotic normality of kernel-based regression roughly states that under some mild conditions, \(\sqrt{n}(g_{\pi_{n},\lambda_{n}}-g_{\pi,\lambda_{0}})\) is asymptotically normal as \(n\to\infty\). This result provides the theoretical foundations to conduct asymptotic statistical inference for kernel-based regression, in particular, building asymptotic confidence interval for the ground-truth \(g_{\pi,\lambda_{0}}\).
To be rigorous, we first introduce the following assumptions:
**Assumption B.3**.: Let \((\Omega,\mathcal{A},\mathcal{Q})\) be a probability space. Suppose that \(\mathcal{X}\subset\mathbb{R}^{d}\) is closed and bounded with Borel-\(\sigma\)-algebra \(\mathcal{B}(\mathcal{X})\), and \(\mathcal{Y}\subset\mathbb{R}\) is closed with Borel-\(\sigma\)-algebra \(\mathcal{B}(\mathcal{Y})\). The Borel-\(\sigma\)-algebra of \(\mathcal{X}\times\mathcal{Y}\) is denoted by \(\mathcal{B}(\mathcal{X}\times\mathcal{Y})\). Let \(\mathcal{H}\) be an RKHS with kernel \(k\) and let \(\pi\) be a probability measure on \((\mathcal{X}\times\mathcal{Y},\mathcal{B}(\mathcal{X}\times\mathcal{Y}))\). Assume that the kernel of \(\mathcal{H}\), \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), is the restriction of an \(m\)-times continuously differentiable kernel \(\tilde{k}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) such that \(m>d/2\) and \(k\neq 0\). Let \(\lambda_{0}\in(0,+\infty)\) be any positive number. Suppose that the sequence \(\lambda_{n}\) satisfies \(\lambda_{n}-\lambda_{0}=o(\frac{1}{\sqrt{n}})\).
**Assumption B.4**.: Let \(\mathcal{L}:\mathcal{Y}\times\mathbb{R}\to[0,+\infty)\) be a loss function satisfying the following conditions:
* \(\mathcal{L}\) is a convex loss, i.e., \(z\mapsto L(y,z)\) is convex in \(z\) for every \(y\in\mathcal{Y}\).
* The partial derivatives \[\mathcal{L}^{\prime}(y,z)=\frac{\partial}{\partial z}\mathcal{L}(y,z),\quad \mathcal{L}^{\prime\prime}(y,z)=\frac{\partial^{2}}{\partial z^{2}}\mathcal{L}(y,z)\] exist for every \((y,z)\in\mathcal{Y}\times\mathbb{R}\).
* The maps \[(y,z)\mapsto\mathcal{L}^{\prime}(y,z),\quad(y,z)\mapsto\mathcal{L}^{\prime \prime}(y,z)\] are continuous.
* There is a \(b\in L^{2}(\pi_{Y})\), and for every \(a\in(0,+\infty)\), there is a \(b^{\prime}_{a}\in L^{2}(\pi_{Y})\) and a constant \(b^{\prime\prime}_{a}\in(0,+\infty)\) such that, for every \(y\in\mathcal{Y}\), \[|\mathcal{L}(y,z)|\leq b(y)+|z|^{p}\ \forall z,\quad\sup_{z\in[-a,a]}| \mathcal{L}^{\prime}(y,z)|\leq b^{\prime}_{a}(y),\quad\sup_{z\in[-a,a]}| \mathcal{L}^{\prime\prime}(y,z)|\leq b^{\prime\prime}_{a}\] where \(p\geq 1\) is a constant.
Note that the conditions on the loss function \(\mathcal{L}\) in Assumption B.4 are satisfied, e.g., for the logistic loss for classification or least-square loss for regression with \(\pi\) such that \(\mathbb{E}_{\pi_{Y}}[Y^{4}]<\infty\). Therefore, Assumption B.4 is reasonable for the kernel ridge regression problem we consider in this work.
**Theorem B.5** (Theorem 3.1 in [47]).: _Suppose that Assumptions B.3 and B.4 hold. Then there is a tight, Borel-measurable Gaussian process \(\mathbb{G}:\Omega\to\mathcal{H},\omega\mapsto\mathbb{G}(\omega)\) such that_
\[\sqrt{n}(g_{\pi_{n},\lambda_{n}}-g_{\pi,\lambda_{0}})\Rightarrow\mathbb{G}\text { in }\mathcal{H}\]
_where \(\Rightarrow\) represents "converges weakly". The Gaussian process \(\mathbb{G}\) is zero-mean; i.e., \(\mathbb{E}[\langle g,\mathbb{G}\rangle_{\mathcal{H}}]=0\) for every \(g\in\mathcal{H}\)._
Note that Theorem B.5 implies that for any \(g\in\mathcal{H}\), the random variable
\[\Omega\to\mathbb{R},\quad\omega\mapsto\langle g,\mathbb{G}(\omega)\rangle_{ \mathcal{H}}\]
has a zero-mean normal distribution. In particular, letting \(g=k_{x}\), the reproducing property of \(k\) implies that,
\[\sqrt{n}(g_{\pi_{n},\lambda_{n}}(x)-g_{\pi,\lambda_{0}}(x))\Rightarrow\mathbb{ G}(x)\]
To obtain the variance of this limiting distribution, denoted as \(\xi^{2}(x)\), we review the tool of influence functions as follows.
Let \(P\) be a general distribution on the domain \(\mathcal{X}\times\mathcal{Y}\). Let \(z=(z_{x},z_{y})\in\mathcal{X}\times\mathcal{Y}\) be a general point. Suppose \(P_{\varepsilon,z}=(1-\varepsilon)P+\varepsilon\delta_{z}\) where \(\delta_{z}\) denotes the point mass distribution in \(z\). Let \(T\) be a general statistical functional that maps a distribution \(P\) to a real value: \(T:P\to T(P)\). Then the influence function of \(T\) at \(P\) at the point \(z\) is defined as
\[IF(z;T,P)=\lim_{\varepsilon\to 0}\frac{T(P_{\varepsilon,z})-T(P)}{ \varepsilon}=T^{\prime}_{P}(\delta_{z}-P).\]
Under some mild conditions (e.g., \(T\) is \(\rho_{\infty}\)-Hadamard differentiable [35]), the central limit theorem holds: \(\sqrt{n}(T(\pi_{n})-T(\pi))\) is asymptotically normal with mean 0 and variance \(\int_{z}IF^{2}(z;T,\pi)d\pi(z)\) where \(\pi_{n}\) is the empirical distribution associated with \(n\) samples i.i.d. drawn from \(\pi\).
In particular, let \(T(P)\) be the solution to the target problem:
\[\min_{g\in\mathcal{H}}\mathbb{E}_{P}[(Y-g(X))^{2}]+\lambda_{P}\|g\|_{\mathcal{ H}}^{2}\]
where \(P=\pi_{n}\) or \(\pi\), \(\lambda_{\pi_{n}}=\lambda_{n}\) and \(\lambda_{\pi}=\lambda_{0}\). Then the proof of Theorem 3.1 in [47] provides the closed-form expression of the Gateaux derivative \(T^{\prime}_{P}(Q)\) as follows:
\[T^{\prime}_{P}(Q)=-S_{P}^{-1}(\mathbb{E}_{Q}[\mathcal{L}^{\prime}(Y,g_{P, \lambda_{0}}(X))\Phi(X)])\]
for \(Q\in\text{lin}(B_{S})\) where \(\text{lin}(B_{S})\) corresponds to a subset of finite measures on \((\mathcal{X}\times\mathcal{Y},\mathcal{B}(\mathcal{X}\times\mathcal{Y}))\) (See Proposition A.8. in [47]) and \(S_{P}:\mathcal{H}\rightarrow\mathcal{H}\) is defined by
\[S_{P}(f)=2\lambda_{0}f+\mathbb{E}_{P}[\mathcal{L}^{\prime\prime}(Y,g_{P, \lambda_{0}}(X))f(X)\Phi(X)]\]
where \(\Phi\) is the feature map of \(\mathcal{H}\) (e.g., we can simply take \(\Phi(x)(\cdot)=k(x,\cdot)\)). Note that \(S_{P}\) is a continuous linear operator which is invertible (See Proposition A.5. in [47]). In particular, letting \(Q=\delta_{z}-P\), we obtain that
\[T^{\prime}_{P}(\delta_{z}-P)=S_{P}^{-1}(\mathbb{E}_{P}[\mathcal{L}^{\prime}(Y,g_{P,\lambda_{0}}(X))\Phi(X)])-\mathcal{L}^{\prime}(z_{y},g_{P,\lambda_{0}}(z_{x}))S_{P}^{-1}\Phi(z_{x}) \tag{14}\]
Note that the above special Gateaux derivative (influence function) of \(T(P)\), \(T^{\prime}_{P}(\delta_{z}-P)\), was initially derived by [24].
The proof of Theorem 3.1 in [47] shows that \(T(P)\) is Hadamard differentiable, and thus the asymptotic normality follows. Using the above expression of the influence function, we conclude that:
\[\sqrt{n}(g_{\pi_{n},\lambda_{n}}(x)-g_{\pi,\lambda_{0}}(x))\Rightarrow\mathcal{ N}(0,\xi^{2}(x))\]
where
\[\xi^{2}(x)=\int_{z\in\mathcal{X}\times\mathcal{Y}}IF^{2}(z;T,\pi)(x)d\pi(z)\]
and \(IF(z;T,\pi)\) is given by (14):
\[IF(z;T,\pi)=T^{\prime}_{\pi}(\delta_{z}-\pi)=S_{\pi}^{-1}(\mathbb{E}_{\pi}[\mathcal{L}^{\prime}(Y,g_{\pi,\lambda_{0}}(X))\Phi(X)])-\mathcal{L}^{\prime}(z_{y},g_{\pi,\lambda_{0}}(z_{x}))S_{\pi}^{-1}\Phi(z_{x}).\]
Summarizing the above discussion, we have Proposition 3.4.
### Infinitesimal Jackknife for Kernel-Based Regression
In this section, we follow up on our discussion in Section 3.3 on infinitesimal jackknife variance estimation. We derive the closed-form expression of the infinitesimal jackknife variance estimator for \(\xi^{2}(x)\) in Section B.2 for kernel ridge regression. We also show the consistency of infinitesimal jackknife variance estimation for general kernel-based regression. Our consistency result appears to be new in the literature.
First, we estimate \(\xi^{2}(x_{0})\) as follows:
\[\hat{\xi}^{2}(x_{0})=\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}IF^{2}(z_{i};T,\pi_{ n})(x_{0})\]
where \(\mathcal{D}=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})\}\) and \(\pi_{n}\) is the empirical distribution associated with the data \(\mathcal{D}\). This estimator is known as the infinitesimal jackknife variance estimator [62]. In this section,
we use \(x_{0}\) rather than \(x\) to denote the test point in the variance and the influence function, to avoid confusion between \(x_{0}\) and \(z_{x}\); the latter refers to the \(x\)-component of \(z\).
Based on [24, 29, 47], we derive the closed-form expression of the variance estimation \(\hat{\xi}^{2}(x_{0})\) as follows.
**Theorem B.6** (Expression of infinitesimal jackknife for kernel ridge regression).: _Suppose that Assumptions B.3 and B.4 hold. For the least squares loss \(\mathcal{L}(h(X),Y)=(h(X)-Y)^{2}\) in kernel ridge regression, \(IF(z;T,\pi_{n})\) is given by_
\[IF(z;T,\pi_{n})(x_{0})=\mathbf{k}(x_{0},\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_ {0}n\mathbf{I})^{-1}M_{z}(\mathbf{x})-M_{z}(x_{0}) \tag{15}\]
_where \(M_{z}(\mathbf{x}):=(M_{z}(x_{1}),...,M_{z}(x_{n}))^{T}\) and_
\[M_{z}(x_{i}):=g_{\pi_{n},\lambda_{0}}(x_{i})-\frac{1}{\lambda_{0}}(z_{y}-g_{ \pi_{n},\lambda_{0}}(z_{x}))k(z_{x},x_{i})\]
_for \(x_{i}=x_{0},x_{1},\cdots,x_{n}\)._
Note that although Hadamard differentiability guarantees the central limit theorem for \(T\), the consistency of variance estimation generally requires more than Hadamard differentiability. In general, we need an additional continuity condition (such as continuous Gateaux differentiability) to guarantee that \(IF^{2}(z;T,\pi_{n})\) is indeed "close" to \(IF^{2}(z;T,\pi)\) and that \(\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}IF^{2}(z_{i};T,\pi_{n})(x_{0})\) is indeed "close" to \(\int_{z\in\mathcal{X}\times\mathcal{Y}}IF^{2}(z;T,\pi)(x_{0})d\pi(z)\). We show that this is achievable for the infinitesimal jackknife variance estimator for kernel ridge regression while imposing only a weak assumption.
**Theorem B.7** (Consistency of infinitesimal jackknife for kernel-based regression).: _Suppose that Assumptions B.3 and B.4 hold. Moreover, assume that \(\mathcal{Y}\subset\mathbb{R}\) is bounded, and \(b\) and \(b^{\prime}_{a}\) in Assumption B.4 are bounded on \(\mathcal{Y}\). Then we have_
\[\hat{\xi}^{2}(x_{0})=\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}IF^{2}(z_{i};T,\pi_{n})(x_{0})\rightarrow\int_{z\in\mathcal{X}\times\mathcal{Y}}IF^{2}(z;T,\pi)(x_{0})d\pi(z)=\xi^{2}(x_{0}),\quad a.s.\]
_as \(n\rightarrow\infty\). Hence, an asymptotically exact \((1-\alpha)\)-level confidence interval of \(g_{\pi,\lambda_{0}}(x_{0})\) is_
\[\left[g_{\pi_{n},\lambda_{n}}(x_{0})-\frac{\hat{\xi}(x_{0})}{\sqrt{n}}q_{1-\frac{\alpha}{2}},\ g_{\pi_{n},\lambda_{n}}(x_{0})+\frac{\hat{\xi}(x_{0})}{\sqrt{n}}q_{1-\frac{\alpha}{2}}\right]\]
_where \(q_{\alpha}\) is the \(\alpha\)-quantile of the standard Gaussian distribution \(\mathcal{N}(0,1)\)._
The proof of Theorems B.6 and B.7 is given in Appendix D.
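The closed form in Theorem B.6 makes \(\hat{\xi}^{2}(x_{0})\) directly computable. Below is a hedged NumPy sketch of the resulting infinitesimal jackknife confidence interval for kernel ridge regression, with an illustrative RBF kernel and synthetic data; it is a sketch of Eq. (15) and the interval of Theorem B.7, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, gamma=1.0):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def ij_ci(X, y, x0, lam, alpha=0.05, gamma=1.0):
    n = len(y)
    K = rbf(X, X, gamma)
    Kinv = np.linalg.inv(K + lam * n * np.eye(n))
    g_train = K @ (Kinv @ y)                 # fitted values g(x_i)
    k0 = rbf(x0[None, :], X, gamma)[0]       # k(x0, x_j), j = 1..n
    g0 = k0 @ (Kinv @ y)                     # point prediction g(x0)
    xi2 = 0.0
    for i in range(n):                       # z = (x_i, y_i) runs over the data
        c = (y[i] - g_train[i]) / lam        # (z_y - g(z_x)) / lambda_0
        Mz = g_train - c * K[i]              # M_z(x_j) at all training points
        IF = k0 @ (Kinv @ Mz) - (g0 - c * k0[i])   # IF(z; T, pi_n)(x0), Eq. (15)
        xi2 += IF ** 2 / n
    half = norm.ppf(1 - alpha / 2) * np.sqrt(xi2 / n)
    return g0 - half, g0 + half

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (128, 2))
y = np.sin(X.sum(axis=1)) + 0.1 * rng.normal(size=128)
print(ij_ci(X, y, x0=np.zeros(2), lam=1e-3))
```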
### PNC-Enhanced Infinitesimal Jackknife
In this section, we return to our problem setting in Section 3.3. We apply the general results in Section B.3 to our PNC predictor and develop the PNC-enhanced infinitesimal jackknife confidence interval for over-parameterized neural networks.
Recall that the statistical functional \(T_{1}\) is associated with the following problem:
\[\hat{h}_{n,\theta^{\text{b}}}(\cdot)-\hat{\phi}_{n,\theta^{\text{b}}}(\cdot)-\bar{s}(\cdot)=\operatorname*{arg\,min}_{g\in\mathcal{H}}\frac{1}{n}\sum_{i=1}^{n}[(y_{i}-\bar{s}(x_{i})-g(x_{i}))^{2}]+\lambda_{n}\|g\|_{\mathcal{H}}^{2}\]
Consider the PNC-enhanced infinitesimal jackknife variance estimator:
\[\hat{\sigma}_{\theta^{\text{b}}}^{2}(x_{0})=\frac{1}{n}\sum_{z_{i}\in\mathcal{ D}}IF^{2}(z_{i};T_{1},\pi_{n})(x_{0}).\]
We obtain the following:
**Theorem B.8** (Expression of PNC-enhanced infinitesimal jackknife).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-\bar{s}(X)\). Then \(IF(z;T_{1},\pi_{n})\) is given by_
\[IF(z;T_{1},\pi_{n})(x_{0})=\mathbf{K}(x_{0},\mathbf{x})^{T}(\mathbf{K}(\mathbf{x},\mathbf{x})+ \lambda_{0}n\mathbf{I})^{-1}M_{z}(\mathbf{x})-M_{z}(x_{0}) \tag{16}\]
_where \(M_{z}(\mathbf{x}):=(M_{z}(x_{1}),...,M_{z}(x_{n}))^{T}\) and_
\[M_{z}(x_{i}):=\hat{h}_{n,\theta^{b}}(x_{i})-\hat{\phi}_{n,\theta^{b}}(x_{i})- \bar{s}(x_{i})-\frac{1}{\lambda_{0}}(z_{y}-(\hat{h}_{n,\theta^{b}}(z_{x})-\hat{ \phi}_{n,\theta^{b}}(z_{x})))K(z_{x},x_{i})\]
_for \(x_{i}=x_{0},x_{1},\cdots,x_{n}\)._
Theorem B.8 immediately follows from Theorem B.6.
**Theorem B.9** (Exact coverage of PNC-enhanced infinitesimal jackknife confidence interval).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-\bar{s}(X)\). Moreover, assume that \(\mathcal{Y}\subset\mathbb{R}\) is bounded. Then we have_
\[\hat{\sigma}_{\theta^{b}}^{2}(x_{0})=\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}IF^{2}(z_{i};T_{1},\pi_{n})(x_{0})\to\int_{z\in\mathcal{X}\times\mathcal{Y}}IF^{2}(z;T_{1},\pi)(x_{0})d\pi(z)=\sigma_{\theta^{b}}^{2}(x_{0}),\quad a.s.\]
_as \(n\to\infty\). Hence, an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x_{0})\) is_
\[\left[\hat{h}_{n,\theta^{b}}(x_{0})-\hat{\phi}_{n,\theta^{b}}(x_{0})-\frac{\hat{\sigma}_{\theta^{b}}(x_{0})}{\sqrt{n}}q_{1-\frac{\alpha}{2}},\ \hat{h}_{n,\theta^{b}}(x_{0})-\hat{\phi}_{n,\theta^{b}}(x_{0})+\frac{\hat{\sigma}_{\theta^{b}}(x_{0})}{\sqrt{n}}q_{1-\frac{\alpha}{2}}\right]\]
_where \(q_{\alpha}\) is the \(\alpha\)-quantile of the standard Gaussian distribution \(\mathcal{N}(0,1)\) and the computation of \(\hat{\sigma}_{\theta^{b}}(x_{0})\) is given by Theorem B.8._
Note that in our problem setting, we can set \(b(y)=y^{2}\) and \(b^{\prime}_{a}(y)=a+|y|\) for kernel ridge regression. Since \(\mathcal{Y}\) is bounded, \(b\) and \(b^{\prime}_{a}\) are naturally bounded on \(\mathcal{Y}\). Therefore, Theorem B.9 follows from Theorem B.7.
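For concreteness, here is a hedged sketch of how Theorems B.8 and B.9 translate into computation: it evaluates Eq. (16) and the resulting interval, assuming the NTK Gram matrices, the fitted networks \(\hat{h}_{n,\theta^{b}}\) and \(\hat{\phi}_{n,\theta^{b}}\), and the average initial function \(\bar{s}\) have been evaluated beforehand. All argument names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pnc_ij_ci(K, k0, y, sbar, hhat, phihat, h0, phi0, sbar0, lam, alpha=0.05):
    # K: (n, n) NTK Gram matrix on training inputs; k0: (n,) = K(x0, x_j).
    # hhat, phihat, sbar: (n,) values of the two networks and sbar at x_j;
    # h0, phi0, sbar0: the same quantities at the test point x0.
    n = len(y)
    Kinv = np.linalg.inv(K + lam * n * np.eye(n))
    g = hhat - phihat - sbar                 # fitted shifted regressor at x_j
    g0 = h0 - phi0 - sbar0
    sigma2 = 0.0
    for i in range(n):
        c = (y[i] - (hhat[i] - phihat[i])) / lam   # (z_y - (hhat-phihat)(z_x)) / lambda_0
        IF = k0 @ (Kinv @ (g - c * K[i])) - (g0 - c * k0[i])   # Eq. (16)
        sigma2 += IF ** 2 / n
    half = norm.ppf(1 - alpha / 2) * np.sqrt(sigma2 / n)
    center = h0 - phi0                        # PNC prediction at x0
    return center - half, center + half
```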
## Appendix C Theory of Neural Tangent Kernel
In this section, we provide a brief review of the theory of neural tangent kernel (NTK). We will then discuss one of the results employed by this paper, i.e.,
_The shifted kernel ridge regressor using NTK with a shift from an initial function \(s_{\theta_{b}}\) is exactly the linearized neural network regressor that starts from the initial network \(s_{\theta_{b}}\)._
NTK [61] has attracted a lot of attention since it provides a new perspective on the training dynamics, generalization, and expressivity of over-parameterized neural networks. Recent papers show that when training an over-parametrized neural network, the weight matrix at each layer stays close to its initialization [80, 33]. An over-parameterized neural network can rapidly reduce the training error to zero via gradient descent, and thus finds a global minimum despite the objective function being non-convex [32, 4, 5, 6, 113]. Moreover, the trained network also exhibits good generalization properties [19, 20]. These observations are captured by the notion of the NTK, introduced in [61]. This kernel characterizes the training behavior of sufficiently wide fully-connected neural networks and builds a new connection between neural networks and kernel methods. Another line of work extends the NTK for fully-connected neural networks to the CNTK for convolutional neural networks [7, 107, 81]. NTK is a fruitful and rapidly growing area.
We focus on the result that serves as the foundation of our epistemic uncertainty quantification: since the gradient of an over-parameterized neural network is nearly constant and close to its value at initialization, training the network closely resembles a shifted kernel ridge regression in which the feature map of the kernel is given by the gradient of the network [75]. Multiple previous works [7, 75, 51, 56, 110, 15] have established equivalences of this kind between the neural network regressor and the kernel ridge regressor under a wide variety of assumptions or settings, such as a full-rank assumption on the NTK, zero-valued initialization, the introduction of a small multiplier \(\kappa\), or normalized training inputs \(\|x_{i}\|_{2}=1\). These results also take different forms. Since a uniform statement is not available, we do not delve into those detailed assumptions or theorems here, as they are not the focus of this paper; interested readers may refer to the papers above.
In the following, we provide some discussions on how to obtain the equivalence in Proposition 3.1 adopted in the main paper, based on a less (technically) rigorous assumption. This assumption is borrowed from the _linearized neural network_ property introduced in [75], where they replace the outputs of the neural network with their first-order Taylor expansion, called the linearized neural
network. They show that in the infinite width limit, the outputs of the neural network are the same as the linearized neural network. [75] does not add regularization in the loss. Instead, we introduce a regularization \(\lambda_{n}>0\) in the loss since regularization is common and useful in practice. Moreover, it can guarantee the stable computation of the inversion of the NTK Gram matrix and can be naturally linked to kernel ridge regression. In the following, we will review the linearized neural network assumption (Assumption C.1) as well as other network specifications. Based on them, we show that Proposition C.3 holds, which provides the starting point for our uncertainty quantification framework. We also provide some additional remarks in Section C.1.
**NTK parameterization.** We consider a fully-connected neural network with any depth defined formally as follows. Suppose the network consists of \(L\) hidden layers. Denote \(g^{(0)}(x)=x\) and \(d_{0}=d\). Let
\[f^{(l)}(x)=W^{(l)}g^{(l-1)}(x),\;g^{(l)}(x)=\sqrt{\frac{c_{\sigma}}{d_{l}}} \sigma(f^{(l)}(x))\]
where \(W^{(l)}\in\mathbb{R}^{d_{l}\times d_{l-1}}\) is the weight matrix in the \(l\)-th layer, \(\sigma\) is a coordinate-wise activation function, \(c_{\sigma}=\mathbb{E}_{z\sim\mathcal{N}(0,1)}[\sigma^{2}(z)]^{-1}\), and \(l\in[L]\). The output of the neural network is
\[f_{\theta}(x):=f^{(L+1)}(x)=W^{(L+1)}g^{(L)}(x)\]
where \(W^{(L+1)}\in\mathbb{R}^{1\times d_{L}}\), and \(\theta=(W^{(1)},...,W^{(L+1)})\) represents all the parameters in the network. Note that here we use NTK parametrization with a width-dependent scaling factor, which is thus slightly different from the standard parametrization. Unlike the standard parameterization which only normalizes the forward dynamics of the network, the NTK parameterization also normalizes its backward dynamics.
**He Initialization.** The NTK theory depends on the random initialization of the network. Suppose the dimension of network parameters \(\theta\) is \(p\). We randomly initialize all the weights to be i.i.d. \(\mathcal{N}(0,1)\) random variables. In other words, we set \(P_{\theta_{b}}=\mathcal{N}(0,\mathbf{I}_{p})\) and let \(\theta^{b}\) be an instantiation drawn from \(P_{\theta_{b}}\). This initialization method is essentially known as the He initialization [52].
**NTK and Linearized neural network.** With the above NTK parameterization and He initialization, the population NTK expression is defined recursively as follows [61, 7]: For \(l\in[L]\),
\[\Sigma^{(0)}(x,x^{\prime})=x^{T}x^{\prime}\in\mathbb{R},\]
\[\Lambda^{(l)}(x,x^{\prime})=\begin{pmatrix}\Sigma^{(l-1)}(x,x)&\Sigma^{(l-1)} (x,x^{\prime})\\ \Sigma^{(l-1)}(x,x^{\prime})&\Sigma^{(l-1)}(x^{\prime},x^{\prime})\end{pmatrix} \in\mathbb{R}^{2\times 2},\]
\[\Sigma^{(l)}(x,x^{\prime})=c_{\sigma}\mathbb{E}_{(u,v)\sim\mathcal{N}(0,\Lambda^{(l)}(x,x^{\prime}))}[\sigma(u)\sigma(v)]\in\mathbb{R}.\]
We also define a derivative covariance:
\[\Sigma^{(l)\prime}(x,x^{\prime})=c_{\sigma}\mathbb{E}_{(u,v)\sim\mathcal{N}(0,\Lambda^{(l)}(x,x^{\prime}))}[\sigma^{\prime}(u)\sigma^{\prime}(v)]\in\mathbb{R}.\]
The final population NTK is defined as
\[K(x,x^{\prime})=\sum_{l=1}^{L+1}\left(\Sigma^{(l-1)}(x,x^{\prime})\prod_{s=l}^ {L+1}\Sigma^{(s)\prime}(x,x^{\prime})\right)\]
where \(\Sigma^{(L+1)\prime}(x,x^{\prime})=1\) for convenience. Let \(\langle\cdot,\cdot\rangle\) be the standard inner product in \(\mathbb{R}^{p}\).
Let \(J(\theta;x):=\nabla_{\theta}f_{\theta}(x)\in\mathbb{R}^{1\times p}\) denote the gradient of the network. The empirical (since the initialization is random) NTK matrix is defined as
\[K_{\theta}(x,x^{\prime})=\langle J(\theta;x),J(\theta;x^{\prime})\rangle.\]
In practice, we may use the empirical NTK matrix to numerically approximate the population NTK matrix.
One of the most important results in the NTK theory is that the empirical NTK matrix converges to the population NTK matrix as the width of the network increases [61, 107, 108, 7]:
\[K_{\theta}(x,x^{\prime})=\langle J(\theta;x),J(\theta;x^{\prime})\rangle\to K(x,x^{\prime})\]
where \(\rightarrow\) represents convergence in probability [7] or almost sure convergence [107].
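As a concrete instance, when \(\sigma\) is the ReLU (so \(c_{\sigma}=2\)), the Gaussian expectations in the recursion have the standard arc-cosine closed forms, and the population NTK can be evaluated directly. The sketch below is our own illustration (the function name is ours; it assumes nonzero inputs so that the correlations are well defined):

```python
import numpy as np

def relu_ntk(x, xp, L):
    """Population NTK of an L-hidden-layer ReLU network (c_sigma = 2),
    using the closed-form Gaussian expectations in the recursion above."""
    s_xx, s_pp, s_xp = float(x @ x), float(xp @ xp), float(x @ xp)  # Sigma^{(0)} values
    ntk = s_xp                                   # running sum, starts at Sigma^{(0)}
    for _ in range(L):                           # layers l = 1, ..., L
        rho = np.clip(s_xp / np.sqrt(s_xx * s_pp), -1.0, 1.0)
        theta = np.arccos(rho)
        s_xp = (np.sqrt(s_xx * s_pp) / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
        s_dot = (np.pi - theta) / np.pi          # Sigma^{(l) prime}; Sigma^{(L+1) prime} = 1
        ntk = ntk * s_dot + s_xp                 # accumulate the telescoping sum
        # for ReLU, Sigma^{(l)}(x,x) = Sigma^{(l-1)}(x,x), so s_xx, s_pp are unchanged
    return ntk
```

Note that for ReLU the diagonal entries \(\Sigma^{(l)}(x,x)\) are preserved across layers, which is why they are never updated inside the loop.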
The NTK theory shows that \(J(\theta;x)=\nabla_{\theta}f_{\theta}(x)\) is approximately a constant independent of the network parameter \(\theta\) (but still depends on \(x\)) when the network is sufficiently wide. Therefore, when we treat \(J(\theta;x)\) as a constant independent of the network parameter \(\theta\), we can obtain a model obtained from the first-order Taylor expansion of the network around its initial parameters, which is called a linearized neural network [75]. Our study on epistemic uncertainty is based on the linearized neural network.
To be rigorous, we adopt the following linearized neural network assumption based on the NTK theory:
**Assumption C.1**.: [Linearized neural network assumption] Suppose that \(J(\theta;x)=\nabla_{\theta}f_{\theta}(x)\in\mathbb{R}^{1\times p}\) is independent of \(\theta\) during network training, which is thus denoted as \(J(x)\). The population NTK is then given by \(K(x,x^{\prime})=\langle J(x),J(x^{\prime})\rangle=J(x)J(x^{\prime})^{\top}\).
We introduce the NTK vector/matrix as follows. For \(\mathbf{x}=(x_{1},...,x_{n})^{\top}\), we let
\[\mathbf{K}(\mathbf{x},\mathbf{x}):=(K(x_{i},x_{j}))_{i,j=1,...,n}=J(\mathbf{x})J(\mathbf{x})^{\top }\in\mathbb{R}^{n\times n}\]
be the NTK Gram matrix evaluated on data \(\mathbf{x}\). For an arbitrary point \(x\in\mathbb{R}^{d}\), we let \(\mathbf{K}(x,\mathbf{x})\in\mathbb{R}^{n}\) be the vector of kernel values evaluated between the point \(x\) and the data \(\mathbf{x}\), i.e.,
\[\mathbf{K}(x,\mathbf{x}):=(K(x,x_{1}),K(x,x_{2}),...,K(x,x_{n}))^{T}=J(\mathbf{x})J(x)^{\top}\in\mathbb{R}^{n}.\]
**Training dynamics of the linearized network.** Based on Assumption C.1, the training dynamics of the linearized network can be derived as follows.
We consider the regression problem with the following regularized empirical loss
\[\hat{R}(f_{\theta})=\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}(f_{\theta}(x_{i}),y_{ i})+\lambda_{n}\|\theta-\theta^{b}\|_{2}^{2}. \tag{17}\]
where \(\theta^{b}=\theta(0)\) is the initialization of the network. In the following, we use \(f_{\theta(t)}\) to represent the network with parameter \(\theta(t)\) that evolves with the time \(t\). Let \(\eta\) be the learning rate. Suppose the network parameter is trained via continuous-time gradient flow. Then the evolution of the parameters \(\theta(t)\) and \(f_{\theta(t)}\) can be written as
\[\frac{d\theta(t)}{dt}=-\eta\left(\frac{1}{n}\nabla_{\theta}f_{\theta(t)}(\mathbf{ x})^{\top}\nabla_{f_{\theta(t)}(\mathbf{x})}\mathcal{L}+2\lambda_{n}(\theta(t)- \theta(0))\right) \tag{18}\]
\[\frac{df_{\theta(t)}(\mathbf{x})}{dt}=\nabla_{\theta}f_{\theta(t)}(\mathbf{x})\frac{d \theta(t)}{dt} \tag{19}\]
where \(\theta(t)\) and \(f_{\theta(t)}(x)\) are the network parameters and network output at time \(t\).
Under Assumption C.1, we have that
\[f_{\theta(t)}(x)=f_{\theta(0)}(x)+J(x)(\theta(t)-\theta(0)),\]
which gives
\[\nabla_{\theta}f_{\theta(t)}(x)=J(x).\]
Suppose the loss function is an MSE loss, i.e., \(\mathcal{L}(\tilde{y},y)=(\tilde{y}-y)^{2}\). Then
\[\nabla_{f_{\theta(t)}(\mathbf{x})}\mathcal{L}(f_{\theta(t)}(\mathbf{x}),\mathbf{y})=2(f_{ \theta(t)}(\mathbf{x})-\mathbf{y})\]
Therefore, under Assumption C.1, (18) becomes
\[\frac{d\theta(t)}{dt} =-\eta\left(\frac{1}{n}J(\mathbf{x})^{\top}\nabla_{f_{\theta(t)}(\bm {x})}\mathcal{L}+2\lambda_{n}(\theta(t)-\theta(0))\right)\] \[=-\eta\left(\frac{2}{n}J(\mathbf{x})^{\top}\left(f_{\theta(0)}(\mathbf{x} )+J(\mathbf{x})(\theta(t)-\theta(0))-\mathbf{y}\right)+2\lambda_{n}(\theta(t)-\theta( 0))\right)\] \[=-2\eta\left(\frac{1}{n}J(\mathbf{x})^{\top}\left(f_{\theta(0)}(\mathbf{x })-J(\mathbf{x})\theta(0)-\mathbf{y}\right)-\lambda_{n}\theta(0)+\left(\frac{1}{n}J( \mathbf{x})^{\top}J(\mathbf{x})+\lambda_{n}\mathbf{I}\right)\theta(t)\right)\]
Note that \(\frac{1}{n}J(\mathbf{x})^{\top}\left(f_{\theta(0)}(\mathbf{x})-J(\mathbf{x})\theta(0)-\mathbf{y} \right)-\lambda_{n}\theta(0)\) and \(\left(\frac{1}{n}J(\mathbf{x})^{\top}J(\mathbf{x})+\lambda_{n}\mathbf{I}\right)\) are both independent of \(t\). Solving this ordinary differential equation, we obtain
\[\theta(t)=\theta(0)-\frac{1}{n}J(\mathbf{x})^{\top}\left(\frac{1}{n}J(\mathbf{x})J(\mathbf{x})^{\top}+\lambda_{n}\mathbf{I}\right)^{-1}\left(\mathbf{I}-\exp\left(-2\eta t\left(\frac{1}{n}J(\mathbf{x})J(\mathbf{x})^{\top}+\lambda_{n}\mathbf{I}\right)\right)\right)\left(f_{\theta(0)}(\mathbf{x})-\mathbf{y}\right)\]
Hence the network output at time \(t\) becomes
\[f_{\theta(t)}(x)-f_{\theta(0)}(x)=J(x)(\theta(t)-\theta(0))\] \[=-\frac{1}{n}J(x)J(\mathbf{x})^{\top}\left(\frac{1}{n}J(\mathbf{x})J(\mathbf{x})^{\top}+\lambda_{n}\mathbf{I}\right)^{-1}\left(\mathbf{I}-\exp\left(-2\eta t\left(\frac{1}{n}J(\mathbf{x})J(\mathbf{x})^{\top}+\lambda_{n}\mathbf{I}\right)\right)\right)\left(f_{\theta(0)}(\mathbf{x})-\mathbf{y}\right)\] \[=-\frac{1}{n}\mathbf{K}(x,\mathbf{x})^{T}\left(\frac{1}{n}\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}\mathbf{I}\right)^{-1}\left(\mathbf{I}-\exp\left(-2\eta t\left(\frac{1}{n}\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}\mathbf{I}\right)\right)\right)\left(f_{\theta(0)}(\mathbf{x})-\mathbf{y}\right)\]
where the last equality uses the notation \(\mathbf{K}(x,\mathbf{x})^{T}=J(x)J(\mathbf{x})^{\top}\) and \(\mathbf{K}(\mathbf{x},\mathbf{x})=J(\mathbf{x})J(\mathbf{x})^{\top}\).
Therefore, with sufficient time of training (\(t\to\infty\)), the final trained network is
\[f_{\theta(\infty)}(x) =\lim_{t\to\infty}f_{\theta(t)}(x)\] \[=f_{\theta(0)}(x)-\frac{1}{n}\mathbf{K}(x,\mathbf{x})^{T}\left(\frac{1}{n}\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}\mathbf{I}\right)^{-1}\left(f_{\theta(0)}(\mathbf{x})-\mathbf{y}\right)\] \[=f_{\theta(0)}(x)+\mathbf{K}(x,\mathbf{x})^{T}\left(\mathbf{K}(\mathbf{x},\mathbf{x})+n\lambda_{n}\mathbf{I}\right)^{-1}\left(\mathbf{y}-f_{\theta(0)}(\mathbf{x})\right)\] \[=s_{\theta^{b}}(x)+\mathbf{K}(x,\mathbf{x})^{T}\left(\mathbf{K}(\mathbf{x},\mathbf{x})+n\lambda_{n}\mathbf{I}\right)^{-1}\left(\mathbf{y}-s_{\theta^{b}}(\mathbf{x})\right)\]
where in the last equality, we use \(s_{\theta^{b}}\) to represent the network initialized with the parameter \(\theta^{b}\) as in the main paper. Summarizing the above discussion, we conclude that
**Assumption C.2**.: Suppose that the network training is specified as follows:
1. The network adopts the NTK parametrization and its parameters are randomly initialized using He initialization.
2. The network is sufficiently (infinitely) wide so that the linearized neural network assumption (Assumption C.1) holds.
3. The network is trained using the loss function in (1) via continuous-time gradient flow by feeding the entire training data and using sufficient training time (\(t\to\infty\)).
**Proposition C.3**.: _Suppose that Assumption C.2 holds. Then the final trained network is given by_ \[\hat{h}_{n,\theta^{b}}(x)=s_{\theta^{b}}(x)+\mathbf{K}(x,\mathbf{x})^{T}\left(\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I}\right)^{-1}\left(\mathbf{y}-s_{\theta^{b}}(\mathbf{x})\right).\] (20)
_For the remaining part of Proposition 3.1, please refer to Lemma C.4 below._
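For illustration, (20) translates into a few lines of NumPy. This is a sketch with our own naming: `K` is any kernel callable (the NTK in our setting), `s_init` evaluates the initial network \(s_{\theta^{b}}\), and `lam` plays the role of \(\lambda_{n}\).

```python
import numpy as np

def ntk_predict(x0, X, y, s_init, K, lam):
    """Closed-form wide-network prediction of (20):
    s(x0) + K(x0,x)^T (K(x,x) + lam*n*I)^{-1} (y - s(x))."""
    n = len(X)
    Kxx = np.array([[K(a, b) for b in X] for a in X])
    k0 = np.array([K(x0, a) for a in X])
    resid = np.asarray(y) - np.array([s_init(a) for a in X])
    return s_init(x0) + k0 @ np.linalg.solve(Kxx + lam * n * np.eye(n), resid)
```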
**Shifted kernel ridge regression.** On the other hand, we build a shifted kernel ridge regression based on the NTK as follows.
Let \(\bar{\mathcal{H}}\) be the reproducing kernel Hilbert space constructed from the kernel function \(K(x,x^{\prime})\) (the population NTK); See Appendix B. Consider the following kernel ridge regression problem on the RKHS \(\bar{\mathcal{H}}\):
\[s_{\theta^{b}}+\operatorname*{arg\,min}_{g\in\bar{\mathcal{H}}}\frac{1}{n}\sum _{i=1}^{n}(y_{i}-s_{\theta^{b}}(x_{i})-g(x_{i}))^{2}+\lambda_{n}\|g\|_{\bar{ \mathcal{H}}}^{2} \tag{21}\]
where \(\lambda_{n}\) is a regularization hyper-parameter and \(s_{\theta^{b}}\) is a known given function.
**Lemma C.4**.: (20) _is the solution to the following kernel ridge regression problem_
\[s_{\theta^{b}}+\operatorname*{arg\,min}_{g\in\bar{\mathcal{H}}}\,\frac{1}{n}\sum _{i=1}^{n}(y_{i}-s_{\theta^{b}}(x_{i})-g(x_{i}))^{2}+\lambda_{n}\|g\|_{\bar{ \mathcal{H}}}^{2}. \tag{22}\]
Proof of Lemma C.4.: We first let \(\tilde{y}_{i}=y_{i}-s_{\theta^{b}}(x_{i})\) be the shifted label. Now consider the kernel ridge regression problem
\[\operatorname*{arg\,min}_{g\in\bar{\mathcal{H}}}\,\frac{1}{n}\sum_{i=1}^{n}( \tilde{y}_{i}-g(x_{i}))^{2}+\lambda_{n}\|g\|_{\bar{\mathcal{H}}}^{2}.\]
Standard theory in kernel ridge regression (Proposition B.1) shows that the closed-form solution to the above problem is given by:
\[\mathbf{K}(x,\mathbf{x})^{T}(\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}\tilde{\bm {y}}=\mathbf{K}(x,\mathbf{x})^{T}(\mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}(\mathbf{ y}-s_{\theta^{b}}(\mathbf{x}))\]
as \(K\) is the kernel function of \(\bar{\mathcal{H}}\). This equation corresponds to the definition of \(\hat{h}_{n,\theta^{b}}(x)-s_{\theta^{b}}(x)\) in (20).
From Proposition C.3 and Lemma C.4, we immediately conclude that
_The shifted kernel ridge regressor using NTK with a shift from an initial function \(s_{\theta_{b}}\) is exactly the linearized neural network regressor that starts from the initial network \(s_{\theta_{b}}\)._
### Additional Remarks.
We make the following additional remarks:
1. We introduce the regularization parameter \(\lambda_{n}>0\) in this paper, which has several advantages: 1) A common assumption in NTK theory is that the smallest eigenvalue of the NTK Gram matrix is bounded away from zero [32, 7, 75]. With \(\lambda_{n}>0\) introduced, there is an additional term \(\lambda_{n}n\mathbf{I}\) in the NTK Gram matrix, so this assumption holds automatically. 2) It helps to stabilize the inversion of the NTK Gram matrix whenever such a computation is needed. 3) It can be naturally linked to the well-known kernel ridge regression, which we leverage in this work to develop uncertainty quantification approaches. On the other hand, we note that with \(\lambda_{0}=\lim_{n\to\infty}\lambda_{n}>0\), the "best" predictor \(h^{*}\) in the neural network is not exactly the same as the ground-truth regression function \(g^{*}_{\pi}(x):=\mathbb{E}_{(X,Y)\sim\pi}[Y|X=x]\). In fact, Proposition B.2 shows that \(h^{*}=(\mathcal{O}_{K}+\lambda_{0}I)^{-1}\mathcal{O}_{K}g^{*}_{\pi}\neq g^{*}_{\pi}\). However, previous work [103, 101] shows that under some mild conditions, \(h^{*}-g^{*}_{\pi}\to 0\) in an appropriate sense as \(\lambda_{0}\to 0\). Therefore, we may use a very small \(\lambda_{0}\) in practice to make the error \(h^{*}-g^{*}_{\pi}\) negligible, as we did in our experiments.
2. Proposition 3.1 (Proposition C.3) is the theoretical foundation of this paper to develop our framework for efficient uncertainty quantification and reduction for neural networks. We recognize the limitation of this proposition: The linearized neural network assumption (Assumption C.1) must hold to guarantee that the exact equality (2) holds in Proposition 3.1. In reality, the network can never be infinitely wide, and thus an additional error will appear in (2). Unfortunately, with finite network width, this error might be roughly estimated up to a certain order but still involves some unknown constants that hide beneath the data and network [32, 7, 75], which is thus extremely difficult to quantify precisely in practice. Hence, this work does not deal with such an error from finite network width. Instead, we assume that the linearized neural network assumption readily holds as the starting point for developing our uncertainty quantification framework. Equivalently, one may view our work as uncertainty quantification for linearized neural networks.
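Before moving to the proofs, the closed form (20) can be checked numerically for a linearized model. In the sketch below (our own construction), an arbitrary fixed matrix `J` stands in for the constant Jacobian of Assumption C.1, discrete gradient descent is run on the regularized loss (17), and the converged training-set predictions are compared with the kernel formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, eta, steps = 20, 50, 0.1, 0.05, 20000
J = rng.standard_normal((n, p))        # fixed Jacobian J(x) of the linearized model
y = rng.standard_normal(n)
theta0 = rng.standard_normal(p)
f0 = J @ theta0                        # stand-in for f_{theta(0)}(x)

theta = theta0.copy()
for _ in range(steps):                 # gradient descent on (1/n)||f - y||^2 + lam*||theta - theta0||^2
    resid = f0 + J @ (theta - theta0) - y
    theta -= eta * ((2 / n) * J.T @ resid + 2 * lam * (theta - theta0))

K = J @ J.T                            # empirical NTK Gram matrix under Assumption C.1
f_closed = f0 + K @ np.linalg.solve(K + n * lam * np.eye(n), y - f0)
f_gd = f0 + J @ (theta - theta0)
print(np.max(np.abs(f_gd - f_closed)))  # should be numerically negligible
```

With a sufficiently small learning rate and enough steps, the printed discrepancy is negligible, consistent with the continuous-time gradient-flow derivation above.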
## Appendix D Proofs
In this section, we provide technical proofs of the results in the paper.
Proof of Theorem 3.2.: In terms of estimation bias, we note that
\[\mathbb{E}[\hat{h}_{n}^{m}(x)|\mathcal{D}]=\frac{1}{m}\sum_{i=1}^{m}\mathbb{E}[ \hat{h}_{n,\theta_{i}^{b}}(x)|\mathcal{D}]=\mathbb{E}[\hat{h}_{n,\theta^{b}}(x )|\mathcal{D}]=\hat{h}_{n}^{*}(x).\]
as shown in (3). This implies that the deep ensemble predictor and the single network predictor have the same conditional procedural mean given the data. In terms of variance, note that \(\hat{h}_{n,\theta_{1}^{b}}(x),...,\hat{h}_{n,\theta_{m}^{b}}(x)\), derived in (2), are conditionally independent given \(\mathcal{D}\). Therefore the deep ensemble predictor gives
\[\text{Var}(\hat{h}_{n}^{m}(x)|\mathcal{D})=\frac{1}{m^{2}}\sum_{i=1}^{m}\text{ Var}(\hat{h}_{n,\theta_{i}^{b}}(x)|\mathcal{D})=\frac{1}{m}\text{Var}(\hat{h}_{n, \theta^{b}}(x)|\mathcal{D})\]
while the single model prediction gives \(\text{Var}(\hat{h}_{n,\theta^{b}}(x)|\mathcal{D})\). Now using conditioning, we have
\[\text{Var}(\hat{h}_{n}^{m}(x))\] \[= \text{Var}(\mathbb{E}[\hat{h}_{n}^{m}(x)|\mathcal{D}])+\mathbb{E }[\text{Var}(\hat{h}_{n}^{m}(x)|\mathcal{D})]\] \[= \text{Var}(\hat{h}_{n}^{*}(x))+\frac{1}{m}\mathbb{E}[\text{Var}( \hat{h}_{n,\theta^{b}}(x)|\mathcal{D})]\] \[\leq \text{Var}(\hat{h}_{n}^{*}(x))+\mathbb{E}[\text{Var}(\hat{h}_{n, \theta^{b}}(x)|\mathcal{D})]\] \[= \text{Var}(\hat{h}_{n,\theta^{b}}(x))\]
as desired.
Proof of Theorem 3.3.: Recall that \(\hat{\phi}^{\prime}_{n,\theta^{b}}(\cdot)\) is obtained by training an auxiliary neural network with data \(\{(x_{1},\bar{s}(x_{1})),(x_{2},\bar{s}(x_{2})),...,(x_{n},\bar{s}(x_{n}))\}\) and the initialization parameters \(\theta^{b}\). Then Proposition 3.1 immediately shows that
\[\hat{\phi}^{\prime}_{n,\theta^{b}}(x)=s_{\theta^{b}}(x)+\mathbf{K}(x,\mathbf{x})^{T}( \mathbf{K}(\mathbf{x},\mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}(\bar{s}(\mathbf{x})-s_{\theta^{b}} (\mathbf{x})).\]
Note that this auxiliary network starts from the same initialization \(\theta^{b}\) as in \(\hat{h}_{n,\theta^{b}}(x)\). Then we have
\[\hat{\phi}_{n,\theta^{b}}(x) =\hat{\phi}^{\prime}_{n,\theta^{b}}(x)-\bar{s}(x)\] \[=s_{\theta^{b}}(x)-\bar{s}(x)+\mathbf{K}(x,\mathbf{x})^{T}(\mathbf{K}(\mathbf{x}, \mathbf{x})+\lambda_{n}n\mathbf{I})^{-1}(\bar{s}(\mathbf{x})-s_{\theta^{b}}(\mathbf{x}))\] \[=\hat{h}_{n,\theta^{b}}(x)-\hat{h}_{n}^{*}(x)\]
This shows that
\[\hat{h}_{n}^{*}(x)=\hat{h}_{n,\theta^{b}}(x)-(\hat{\phi}^{\prime}_{n,\theta^{b }}(x)-\bar{s}(x))=\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)\]
as desired.
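In code, Theorem 3.3 says the PNC predictor is just two closed-form fits that share the same initialization. The sketch below reuses the hypothetical `ntk_predict` helper sketched in Appendix C; `s_bar` evaluates the procedural mean \(\bar{s}\).

```python
import numpy as np

def pnc_predict(x0, X, y, s_init, s_bar, K, lam):
    """PNC predictor h_hat - phi_hat = h_hat - (phi'_hat - s_bar) of Theorem 3.3."""
    h_hat = ntk_predict(x0, X, y, s_init, K, lam)               # base network fit
    y_bar = np.array([s_bar(a) for a in X])                     # artificial labels s_bar(x_i)
    phi_prime = ntk_predict(x0, X, y_bar, s_init, K, lam)       # auxiliary network, same theta_b
    return h_hat - (phi_prime - s_bar(x0))
```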
Proof of Theorem 3.5.: Theorem 3.3 implies that
\[\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)=\hat{h}_{n}^{*}(x)\]
which is the solution to the following problem
\[\hat{h}_{n}^{*}(\cdot)=\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\frac{1}{n}\sum_{i =1}^{n}[(y_{i}-\bar{s}(x_{i})-g(x_{i}))^{2}]+\lambda_{n}\|g\|_{\mathcal{H}}^{2}\]
Therefore, its corresponding population risk minimization problem is:
\[h^{*}(\cdot)=\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\mathbb{E}_{\pi}[(Y-\bar{s}( X)-g(X))^{2}]+\lambda_{0}\|g\|_{\mathcal{H}}^{2}\]
Applying Proposition 3.4 to \(\hat{h}_{n}^{*}\) and \(h^{*}\), we have that
\[\sqrt{n}\left((\hat{h}_{n}^{*}(x)-\bar{s}(x))-(h^{*}(x)-\bar{s}(x))\right) \Rightarrow\mathcal{N}(0,\sigma_{\theta^{b}}^{2}(x))\]
In other words,
\[\sqrt{n}\left(\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)-h^{*}(x) \right)\Rightarrow\mathcal{N}(0,\sigma_{\theta^{b}}^{2}(x))\]
This shows that asymptotically as \(n\rightarrow\infty\) we have
\[\mathbb{P}\left(-q_{1-\frac{\alpha}{2}}\leq\frac{\hat{h}_{n,\theta^{b}}(x)- \hat{\phi}_{n,\theta^{b}}(x)-h^{*}(x)}{\sigma_{\theta^{b}}(x)/\sqrt{n}}\leq q_ {1-\frac{\alpha}{2}}\right)\to 1-\alpha\]
where \(q_{\alpha}\) is the \(\alpha\)-quantile of standard Gaussian distribution \(\mathcal{N}(0,1)\). Hence
\[\left[\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)-\frac{\sigma_{ \theta^{b}}(x)}{\sqrt{n}}q_{1-\frac{\alpha}{2}},\hat{h}_{n,\theta^{b}}(x)- \hat{\phi}_{n,\theta^{b}}(x)+\frac{\sigma_{\theta^{b}}(x)}{\sqrt{n}}q_{1- \frac{\alpha}{2}}\right]\]
is an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\).
Proof of Theorem 3.6.: Note that the statistics
\[\hat{h}_{n^{\prime},\theta^{b}}^{j}(x)-\hat{\phi}_{n^{\prime},\theta^{b}}^{j} (x)\]
from the \(j\)-th batch are i.i.d. across \(j\in[m^{\prime}]\). Moreover, by Theorem 3.5, we have the asymptotic normality:
\[\sqrt{n^{\prime}}\left(\hat{h}_{n^{\prime},\theta^{b}}(x)-\hat{\phi}_{n^{ \prime},\theta^{b}}(x)-h^{*}(x)\right)\Rightarrow\mathcal{N}(0,\sigma_{\theta ^{b}}^{2}(x)),\]
as \(n\rightarrow\infty\) meaning \(n^{\prime}=n/m^{\prime}\rightarrow\infty\). Therefore by the property of Gaussian distribution and the principle of batching, we have
\[\frac{\sqrt{n^{\prime}}}{\sigma_{\theta^{b}}(x)}(\psi_{B}(x)-h^{*}(x)) \Rightarrow\frac{1}{m^{\prime}}\sum_{i=1}^{m^{\prime}}Z_{i}\]
and
\[\frac{\psi_{B}(x)-h^{*}(x)}{S_{B}(x)/\sqrt{m^{\prime}}}\Rightarrow\frac{ \frac{1}{m^{\prime}}\sum_{i=1}^{m^{\prime}}Z_{i}}{\sqrt{\frac{1}{m^{\prime}( m^{\prime}-1)}\sum_{j=1}^{m^{\prime}}(Z_{j}-\frac{1}{m^{\prime}}\sum_{i=1}^{m^{ \prime}}Z_{i})^{2}}}\]
for i.i.d. \(\mathcal{N}(0,1)\) random variables \(Z_{1},...,Z_{m^{\prime}}\), where we use the continuous mapping theorem to deduce the weak convergence. Note that
\[\frac{\frac{1}{m^{\prime}}\sum_{i=1}^{m^{\prime}}Z_{i}}{\sqrt{\frac{1}{m^{\prime}(m^{\prime}-1)}\sum_{j=1}^{m^{\prime}}(Z_{j}-\frac{1}{m^{\prime}}\sum_{i=1}^{m^{\prime}}Z_{i})^{2}}}\stackrel{\text{d}}{=}t_{m^{\prime}-1}\]
Hence, asymptotically as \(n\rightarrow\infty\) we have
\[\mathbb{P}\left(-q_{1-\frac{\alpha}{2}}\leq\frac{\psi_{B}(x)-h^{*}(x)}{S_{B}( x)/\sqrt{m^{\prime}}}\leq q_{1-\frac{\alpha}{2}}\right)\to 1-\alpha\]
where \(q_{\alpha}\) is the \(\alpha\)-quantile of the t distribution \(t_{m^{\prime}-1}\) with degree of freedom \(m^{\prime}-1\). Hence
\[\left[\psi_{B}(x)-\frac{S_{B}(x)}{\sqrt{m^{\prime}}}q_{1-\frac{\alpha}{2}}, \psi_{B}(x)+\frac{S_{B}(x)}{\sqrt{m^{\prime}}}q_{1-\frac{\alpha}{2}}\right]\]
is an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\).
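Computationally, the interval of Theorem 3.6 only needs the \(m^{\prime}\) per-batch outputs \(\hat{h}^{j}_{n^{\prime},\theta^{b}}(x)-\hat{\phi}^{j}_{n^{\prime},\theta^{b}}(x)\). A sketch (our own naming; SciPy is assumed available for the \(t\) quantile):

```python
import numpy as np
from scipy import stats

def batching_ci(batch_vals, alpha=0.05):
    """PNC-enhanced batching interval (Theorem 3.6) from per-batch statistics at x."""
    m = len(batch_vals)
    psi = np.mean(batch_vals)                       # psi_B(x)
    s = np.std(batch_vals, ddof=1)                  # S_B(x)
    q = stats.t.ppf(1 - alpha / 2, df=m - 1)        # t quantile with m'-1 degrees of freedom
    half = q * s / np.sqrt(m)
    return psi - half, psi + half
```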
Proof of Theorem 3.7.: Note that the statistics
\[\hat{h}_{n^{\prime},\theta^{b}}^{*b}(x)-\hat{\phi}_{n^{\prime},\theta^{b}}^{*b} (x)\]
from the \(b\)-th replication are i.i.d. across \(b\in[R]\), conditional on the dataset \(\mathcal{D}\). By Theorem 3.5, we have the asymptotic normality:
\[\sqrt{n}\left(\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)-h^{*}(x) \right)\Rightarrow\mathcal{N}(0,\sigma_{\theta^{b}}^{2}(x)),\]
as \(n\to\infty\). By Theorem 2 in [23], we have the following asymptotic normality:
\[\sqrt{n}\left(\hat{h}_{n,\theta^{b}}^{*b}(x)-\hat{\phi}_{n,\theta^{b}}^{*b}(x)-\left(\hat{h}_{n,\theta^{b}}(x)-\hat{\phi}_{n,\theta^{b}}(x)\right)\right)\Rightarrow\mathcal{N}(0,\sigma_{\theta^{b}}^{2}(x)),\]
as \(n\to\infty\) conditional on the data \(\mathcal{D}\). Therefore Assumption 1 in [71] is satisfied and thus Theorem 1 in [71] holds, showing that asymptotically as \(n\to\infty\), we have
\[\mathbb{P}\left(-q_{1-\frac{\alpha}{2}}\leq\frac{\psi_{C}(x)-h^{*}(x)}{S_{C}(x)}\leq q_{1-\frac{\alpha}{2}}\right)\to 1-\alpha\]
where \(q_{\alpha}\) is the \(\alpha\)-quantile of the t distribution \(t_{R}\) with degree of freedom \(R\). Hence
\[\left[\psi_{C}(x)-S_{C}(x)q_{1-\frac{\alpha}{2}},\psi_{C}(x)+S_{C}(x)q_{1-\frac{\alpha}{2}}\right]\]
is an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\).
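Similarly, the interval of Theorem 3.7 follows directly from the full-data output \(\psi_{C}(x)\) and the \(R\) replication outputs (a sketch, with our own naming):

```python
import numpy as np
from scipy import stats

def cheap_bootstrap_ci(psi_c, rep_vals, alpha=0.05):
    """PNC-enhanced cheap bootstrap interval (Theorem 3.7)."""
    R = len(rep_vals)
    s_c = np.sqrt(np.mean((np.asarray(rep_vals) - psi_c) ** 2))  # S_C(x)
    q = stats.t.ppf(1 - alpha / 2, df=R)                         # t quantile with R degrees of freedom
    return psi_c - q * s_c, psi_c + q * s_c
```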
Proof of Theorem B.6.: Recall that \(\Phi\) is the feature map associated with the kernel \(k\) of the RKHS \(\mathcal{H}\). We apply the influence function formula in [24] (see also [29, 47]) to calculate the infinitesimal jackknife:
\[IF(z;T,\pi_{n})=-S_{\pi_{n}}^{-1}(2\lambda_{0}g_{\pi_{n},\lambda_{0}})+ \mathcal{L}^{\prime}(z_{y}-g_{\pi_{n},\lambda_{0}}(z_{x}))S_{\pi_{n}}^{-1} \Phi(z_{x})\]
where \(S_{P}:\mathcal{H}\to\mathcal{H}\) is defined by
\[S_{P}(f)=2\lambda_{0}f+\mathbb{E}_{P}[\mathcal{L}^{\prime\prime}(Y,g_{P, \lambda_{0}}(X))f(X)\Phi(X)]\]
To compute this, we need to obtain \(S_{\pi_{n}}\) and \(S_{\pi_{n}}^{-1}\). Since the loss function is \(\mathcal{L}(\hat{y},y)=(\hat{y}-y)^{2}\), we have
\[S_{\pi_{n}}(f)=2\lambda_{0}f+\mathbb{E}_{\pi_{n}}[\mathcal{L}^{\prime\prime}( Y,g_{\pi_{n},\lambda_{0}}(X))f(X)\Phi(X)]=2\lambda_{0}f+\frac{2}{n}\sum_{j=1}^{n}f (x_{j})\Phi(x_{j}).\]
Suppose \(S_{\pi_{n}}^{-1}(2\lambda_{0}g_{\pi_{n},\lambda_{0}})=\tilde{g}_{1}\). Then at \(x_{0}\), we have
\[2\lambda_{0}g_{\pi_{n},\lambda_{0}}(x_{0})=S_{\pi_{n}}(\tilde{g}_{1}(x_{0}))= 2\lambda_{0}\tilde{g}_{1}(x_{0})+\frac{2}{n}\sum_{j=1}^{n}\tilde{g}_{1}(x_{j} )k(x_{0},x_{j}).\]
Hence,
\[\tilde{g}_{1}(x_{0})=g_{\pi_{n},\lambda_{0}}(x_{0})-\frac{1}{\lambda_{0}n} \sum_{j=1}^{n}\tilde{g}_{1}(x_{j})k(x_{0},x_{j}).\]
This implies that we need to evaluate \(\tilde{g}_{1}(x_{j})\) on training data first, which is straightforward by letting \(x_{0}=x_{1},...,x_{n}\):
\[\tilde{g}_{1}(\mathbf{x})=g_{\pi_{n},\lambda_{0}}(\mathbf{x})-\frac{1}{\lambda_{0}n} \mathbf{k}(\mathbf{x},\mathbf{x})\tilde{g}_{1}(\mathbf{x})\]
so
\[\tilde{g}_{1}(\mathbf{x})=(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{0}n\mathbf{I})^{-1}(\lambda _{0}n\mathbf{I})g_{\pi_{n},\lambda_{0}}(\mathbf{x})\]
and
\[\tilde{g}_{1}(x_{0}) =g_{\pi_{n},\lambda_{0}}(x_{0})-\frac{1}{\lambda_{0}n}\mathbf{k}(x_{0 },\mathbf{x})^{T}\tilde{g}_{1}(\mathbf{x})\] \[=g_{\pi_{n},\lambda_{0}}(x_{0})-\mathbf{k}(x_{0},\mathbf{x})^{T}(\mathbf{k}( \mathbf{x},\mathbf{x})+\lambda_{0}n\mathbf{I})^{-1}g_{\pi_{n},\lambda_{0}}(\mathbf{x})\]
Next we compute \(S_{\pi_{n}}^{-1}\Phi(z_{x})=\tilde{g}_{2}\). At \(x_{0}\), we have
\[k(z_{x},x_{0})=\Phi(z_{x})(x_{0})=S_{\pi_{n}}(\tilde{g}_{2}(x_{0}))=2\lambda_{0 }\tilde{g}_{2}(x_{0})+\frac{2}{n}\sum_{j=1}^{n}\tilde{g}_{2}(x_{j})k(x_{0},x_{ j})\]
Hence
\[\tilde{g}_{2}(x_{0})=\frac{1}{2\lambda_{0}}k(z_{x},x_{0})-\frac{1}{\lambda_{0} n}\sum_{j=1}^{n}\tilde{g}_{2}(x_{j})k(x_{0},x_{j})\]
This implies that we need to evaluate \(\tilde{g}_{2}(x_{j})\) on training data first, which is straightforward by letting \(x_{0}=x_{1},...,x_{n}\):
\[\tilde{g}_{2}(\mathbf{x})=\frac{1}{2\lambda_{0}}\mathbf{k}(z_{x},\mathbf{x})-\frac{1}{ \lambda_{0}n}\mathbf{k}(\mathbf{x},\mathbf{x})\tilde{g}_{2}(\mathbf{x})\]
so
\[\tilde{g}_{2}(\mathbf{x})=(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{0}n\mathbf{I})^{-1}\left( \frac{n}{2}\mathbf{I}\right)\mathbf{k}(z_{x},\mathbf{x})\]
and
\[\tilde{g}_{2}(x_{0}) =\frac{1}{2\lambda_{0}}k(z_{x},x_{0})-\frac{1}{\lambda_{0}n}\mathbf{ k}(x_{0},\mathbf{x})^{T}\tilde{g}_{2}(\mathbf{x})\] \[=\frac{1}{2\lambda_{0}}k(z_{x},x_{0})-\frac{1}{2\lambda_{0}}\mathbf{ k}(x_{0},\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{0}n\mathbf{I})^{-1}\mathbf{k}(z_{x}, \mathbf{x})\]
Combing previous results, we obtain
\[IF(z;T,\pi_{n})(x_{0})\] \[= -g_{\pi_{n},\lambda_{0}}(x_{0})+\mathbf{k}(x_{0},\mathbf{x})^{T}(\mathbf{k}( \mathbf{x},\mathbf{x})+\lambda_{0}n\mathbf{I})^{-1}g_{\pi_{n},\lambda_{0}}(\mathbf{x})\] \[+2(z_{y}-g_{\pi_{n},\lambda_{0}}(z_{x}))\left(\frac{1}{2\lambda_ {0}}k(z_{x},x_{0})-\frac{1}{2\lambda_{0}}\mathbf{k}(x_{0},\mathbf{x})^{T}(\mathbf{k}(\mathbf{ x},\mathbf{x})+\lambda_{0}n\mathbf{I})^{-1}\mathbf{k}(z_{x},\mathbf{x})\right)\] \[= \mathbf{k}(x_{0},\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{0}n\mathbf{I })^{-1}\left(g_{\pi_{n},\lambda_{0}}(\mathbf{x})-\frac{1}{\lambda_{0}}(z_{y}-g_{ \pi_{n},\lambda_{0}}(z_{x}))\mathbf{k}(z_{x},\mathbf{x})\right)\] \[-g_{\pi_{n},\lambda_{0}}(x_{0})+\frac{1}{\lambda_{0}}(z_{y}-g_{ \pi_{n},\lambda_{0}}(z_{x}))k(z_{x},x_{0})\] \[= \mathbf{k}(x_{0},\mathbf{x})^{T}(\mathbf{k}(\mathbf{x},\mathbf{x})+\lambda_{0}n\mathbf{I })^{-1}M_{z}(\mathbf{x})-M_{z}(x_{0})\]
as desired.
Proof of Theorem B.7.: Recall from Section B.2 that we use the equivalent notation:
\[IF(z;T,P)=T^{\prime}_{P}(\delta_{z}-P).\]
First, we prove the following Claim:
\[\sup_{z\in X\times\mathcal{Y}}\|T^{\prime}_{\pi}(\delta_{z}-\pi)-T^{\prime}_{ \pi_{n}}(\delta_{z}-\pi_{n})\|_{\mathcal{H}}\to 0,\quad a.s. \tag{23}\]
From Lemma A.6. and Lemma A.7. in [47], we have
\[T^{\prime}_{P}(Q)=-S_{P}^{-1}(W_{P}(Q)).\]
In this equation,
\[W_{P}:\text{lin}(B_{S})\to\mathcal{H},\quad Q\mapsto\mathbb{E}_{Q}[\mathcal{ L}^{\prime}(Y,g_{P,\lambda_{0}}(X))\Phi(X)]\]
is a continuous linear operator on \(\text{lin}(B_{S})\) where the space \(\text{lin}(B_{S})\) is defined in [47]. Moreover, the operator norm of \(W_{P}\) satisfies \(\|W_{P}\|\leq 1\). In addition,
\[S_{P}:\mathcal{H}\to\mathcal{H},\quad f\mapsto 2\lambda_{0}f+\mathbb{E}_{P}[ \mathcal{L}^{\prime\prime}(Y,g_{P,\lambda_{0}}(X))f(X)\Phi(X)]\]
is an invertible continuous linear operator on \(\mathcal{H}\) where \(\Phi\) is the feature map of \(\mathcal{H}\). This implies that \(S_{P}^{-1}:\mathcal{H}\to\mathcal{H}\) is also a continuous linear operator on \(\mathcal{H}\).
Next, we apply Lemma A.7 in [47] by considering the following two sequences: The first sequence is \(\delta_{z}-\pi_{n}\in\text{lin}(B_{S})\) which obviously satisfies
\[\|(\delta_{z}-\pi_{n})-(\delta_{z}-\pi)\|_{\infty}=\|\pi-\pi_{n}\|_{\infty}\to 0,\quad a.s.\]
The second sequence is \(\pi_{n}\in B_{S}\) which obviously satisfies
\[\|\pi-\pi_{n}\|_{\infty}\to 0,\quad a.s.\]
Following the proof of Lemma A.7 in [47], we have that
\[\|T_{\pi}^{\prime}(\delta_{z}-\pi)-T_{\pi_{n}}^{\prime}(\delta_{z}- \pi_{n})\|_{\mathcal{H}}\] \[= \|S_{\pi}^{-1}(W_{\pi}(\delta_{z}-\pi))-S_{\pi_{n}}^{-1}(W_{\pi_{n }}(\delta_{z}-\pi_{n}))\|_{\mathcal{H}}\] \[\leq \|S_{\pi}^{-1}(W_{\pi}(\delta_{z}-\pi))-S_{\pi}^{-1}(W_{\pi_{n}}( \delta_{z}-\pi))\|_{\mathcal{H}}+\|S_{\pi}^{-1}(W_{\pi_{n}}(\delta_{z}-\pi))-S_ {\pi}^{-1}(W_{\pi_{n}}(\delta_{z}-\pi_{n}))\|_{\mathcal{H}}\] \[+\|S_{\pi}^{-1}(W_{\pi_{n}}(\delta_{z}-\pi_{n}))-S_{\pi_{n}}^{-1}( W_{\pi_{n}}(\delta_{z}-\pi_{n}))\|_{\mathcal{H}}\] \[\leq \|S_{\pi}^{-1}\|\|W_{\pi}(\delta_{z}-\pi)-W_{\pi_{n}}(\delta_{z} -\pi)\|_{\mathcal{H}}+\|S_{\pi}^{-1}\|\|W_{\pi_{n}}\|\|(\delta_{z}-\pi)-( \delta_{z}-\pi_{n})\|_{\infty}\] \[+\|S_{\pi}^{-1}-S_{\pi_{n}}^{-1}\|\|W_{\pi_{n}}\|\|\delta_{z}-\pi _{n}\|_{\infty}\] \[\leq \|S_{\pi}^{-1}\|W_{\pi}(\pi)-W_{\pi_{n}}(\pi)\|_{\mathcal{H}}+\|S _{\pi}^{-1}\|\|W_{\pi_{n}}\|\|\pi-\pi_{n}\|_{\infty}\quad\text{(since $W_{\pi_{n}}$ is a linear operator)}\] \[+\|S_{\pi}^{-1}-S_{\pi_{n}}^{-1}\|\|W_{\pi_{n}}\|\left(\|\delta_{z }-\pi\|_{\infty}+\|\pi-\pi_{n}\|_{\infty}\right)\]
Therefore, we only need to study the three terms in the last equation. Note that the first two terms are independent of \(z\), and thus it follows from Step 3 and Step 4 in the proof of Lemma A.7 in [47] that
\[\sup_{z\in\mathcal{X}\times\mathcal{Y}}\|S_{\pi}^{-1}\|\|W_{\pi} (\pi)-W_{\pi_{n}}(\pi)\|_{\mathcal{H}}+\|S_{\pi}^{-1}\|\|W_{\pi_{n}}\|\|\pi- \pi_{n}\|_{\infty}\] \[= \|S_{\pi}^{-1}\|W_{\pi}(\pi)-W_{\pi_{n}}(\pi)\|_{\mathcal{H}}+\| S_{\pi}^{-1}\|\|W_{\pi_{n}}\|\|\pi-\pi_{n}\|_{\infty}\to 0,\quad a.s.\]
To show that the third term also satisfies that
\[\sup_{z\in\mathcal{X}\times\mathcal{Y}}\|S_{\pi}^{-1}-S_{\pi_{n}}^{-1}\|\|W_{ \pi_{n}}\|\left(\|\delta_{z}-\pi\|_{\infty}+\|\pi-\pi_{n}\|_{\infty}\right) \to 0,\quad a.s.,\]
we only need to note the following fact:
1) \(\|S_{\pi}^{-1}-S_{\pi_{n}}^{-1}\|\to 0,\ a.s.\), by Step 2 in the proof of Lemma A.7 in [47]. Moreover, this equation is independent of \(z\).
2) \(\|W_{\pi_{n}}\|\leq 1\) by Step 1 in the proof of Lemma A.7 in [47]. Moreover, this equation is independent of \(z\).
3) \(\|\pi-\pi_{n}\|_{\infty}<+\infty,\ a.s.\), since \(\|\pi-\pi_{n}\|_{\infty}\to 0,\ a.s.\). Moreover, this equation is independent of \(z\).
4) \(\sup_{z\in\mathcal{X}\times\mathcal{Y}}\|\delta_{z}-\pi\|_{\infty}<+\infty\). To see this, we note that by definition
\[\sup_{z\in\mathcal{X}\times\mathcal{Y}}\|\delta_{z}-\pi\|_{\infty}=\sup_{z\in \mathcal{X}\times\mathcal{Y}}\sup_{g\in\mathcal{G}}\left|\int gd(\delta_{z} -\pi)\right|=\sup_{z\in\mathcal{X}\times\mathcal{Y}}\sup_{g\in\mathcal{G}} \left|g(z)-\int gd\pi\right|\leq 2\sup_{g\in\mathcal{G}}\|g\|_{\infty}\]
where the last term is independent of \(z\) and the space \(\mathcal{G}=\mathcal{G}_{1}\cup\mathcal{G}_{2}\cup\{b\}\) is defined in [47]. \(\sup_{g\in\mathcal{G}_{1}}\|g\|_{\infty}\leq 1\) by the definition of \(\mathcal{G}_{1}\). \(\sup_{g\in\{b\}}\|g\|_{\infty}<+\infty\) by our additional assumption on \(b\). Since both \(\mathcal{X}\) and \(\mathcal{Y}\) are bounded and closed, \(\sup_{x\in\mathcal{X}}\sqrt{k(x,x)}\) is bounded above by, say \(\kappa<\infty\). Thus, for every \(h\in\mathcal{H}\) with \(\|h\|_{\mathcal{H}}\leq C_{1}\), we have \(\|h\|_{\infty}\leq C_{1}\kappa\). By definition of \(\mathcal{G}_{2}\), for any \(g\in\mathcal{G}_{2}\), we can write \(g(x,y)=\mathcal{L}^{\prime}(y,f_{0}(x))f_{1}(x)\) with \(\|f_{0}\|_{\mathcal{H}}\leq c_{0}\) and \(\|f_{1}\|_{\mathcal{H}}\leq 1\). Note that \(\|f_{1}\|_{\mathcal{H}}\leq 1\) implies that \(\|f_{1}\|_{\infty}\leq\kappa\) and \(\|f_{0}\|_{\mathcal{H}}\leq c_{0}\) implies that \(\|f_{0}\|_{\infty}\leq c_{0}\kappa\). Thus Assumption B.4 shows that \(\|\mathcal{L}^{\prime}(y,f_{0}(x))\|_{\infty}\leq b^{\prime}_{c_{0}\kappa}(y)\) which is bounded on \(\mathcal{Y}\) by our additional assumption (uniformly for \(f_{0}\)). Hence we conclude that \(\sup_{g\in\mathcal{G}_{2}}\|g\|_{\infty}\leq\kappa\sup_{y\in\mathcal{Y}}b^{\prime}_{c_{0}\kappa}(y)<+\infty\). Summarizing the above discussion, we obtain that
\[\sup_{z\in\mathcal{X}\times\mathcal{Y}}\|\delta_{z}-\pi\|_{\infty}\leq 2\sup_{g\in \mathcal{G}}\|g\|_{\infty}<+\infty\]
Combining the above points 1)-4), we conclude that
\[\sup_{z\in\mathcal{X}\times\mathcal{Y}}\|S_{\pi}^{-1}-S_{\pi_{n}}^{-1}\|\|W_{ \pi_{n}}\|\left(\|\delta_{z}-\pi\|_{\infty}+\|\pi-\pi_{n}\|_{\infty}\right)\to 0, \quad a.s.\]
Hence, we obtain our Claim (23).
Applying \(k_{x_{0}}\) to (23), the reproducing property of \(k\) implies that
\[\sup_{z\in\mathcal{X}\times\mathcal{Y}}\left|T_{\pi}^{\prime}(\delta_{z}-\pi)(x _{0})-T_{\pi_{n}}^{\prime}(\delta_{z}-\pi_{n})(x_{0})\right|\to 0,\quad a.s.\]
Note that since \(\mathcal{X}\) and \(\mathcal{Y}\) are bounded and closed by our assumptions, we have that
\[\left|\int_{z\in\mathcal{X}\times\mathcal{Y}}\left(T^{\prime}_{\pi} (\delta_{z}-\pi)(x_{0})\right)^{2}d\pi(z)-\int_{z\in\mathcal{X}\times\mathcal{Y }}\left(T^{\prime}_{\pi_{n}}(\delta_{z}-\pi_{n})(x_{0})\right)^{2}d\pi(z) \right|\] \[\leq |\pi(\mathcal{X}\times\mathcal{Y})|\times\sup_{z\in\mathcal{X} \times\mathcal{Y}}\left|T^{\prime}_{\pi}(\delta_{z}-\pi)(x_{0})-T^{\prime}_{ \pi_{n}}(\delta_{z}-\pi_{n})(x_{0})\right|\] \[\times\left|\int_{z\in\mathcal{X}\times\mathcal{Y}}\left(T^{ \prime}_{\pi}(\delta_{z}-\pi)(x_{0})+T^{\prime}_{\pi_{n}}(\delta_{z}-\pi_{n}) (x_{0})\right)d\pi(z)\right|\] \[\to 0,\quad a.s.\]
where we use the fact that \(\int_{z\in\mathcal{X}\times\mathcal{Y}}\left(T^{\prime}_{\pi}(\delta_{z}-\pi) (x_{0})\right)^{2}d\pi(z)=\xi^{2}(x_{0})<+\infty\). On the other hand, it follows from the strong law of large numbers that
\[\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}\left(T^{\prime}_{\pi_{n}}(\delta_{z_{i}} -\pi_{n})(x_{0})\right)^{2}-\int_{z\in\mathcal{X}\times\mathcal{Y}}\left(T^{ \prime}_{\pi_{n}}(\delta_{z}-\pi_{n})(x_{0})\right)^{2}d\pi(z)\to 0,\quad a.s.\]
Hence, we conclude that
\[\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}\left(T^{\prime}_{\pi_{n}}(\delta_{z_{i}} -\pi_{n})(x_{0})\right)^{2}-\int_{z\in\mathcal{X}\times\mathcal{Y}}\left(T^{ \prime}_{\pi}(\delta_{z}-\pi)(x_{0})\right)^{2}d\pi(z)\to 0,\quad a.s.\]
In other words,
\[\hat{\xi}^{2}(x_{0})=\frac{1}{n}\sum_{z_{i}\in\mathcal{D}}IF^{2}(z_{i};T,\pi_{n})(x_{0})\to\int_{z\in\mathcal{X}\times\mathcal{Y}}IF^{2}(z;T,\pi)(x_{0})d\pi(z)=\xi^{2}(x_{0}),\quad a.s.\]
For confidence intervals, the proof is straightforward. Theorem B.5 shows that as \(n\to\infty\) we have
\[\frac{g_{\pi_{n},\lambda_{n}}(x_{0})-g_{\pi,\lambda_{0}}(x_{0})}{\xi(x_{0})/ \sqrt{n}}\Rightarrow\mathcal{N}(0,1)\]
By Slutsky's theorem and \(\frac{\hat{\xi}(x_{0})}{\xi(x_{0})}\to 1,\ a.s.,\) we have
\[\frac{g_{\pi_{n},\lambda_{n}}(x_{0})-g_{\pi,\lambda_{0}}(x_{0})}{\hat{\xi}(x_{ 0})/\sqrt{n}}\Rightarrow\mathcal{N}(0,1)\]
This implies that asymptotically as \(n\to\infty\), we have
\[\mathbb{P}\left(-q_{1-\frac{\alpha}{2}}\leq\frac{g_{\pi_{n},\lambda_{n}}(x_{0 })-g_{\pi,\lambda_{0}}(x_{0})}{\hat{\xi}(x_{0})/\sqrt{n}}\leq q_{1-\frac{ \alpha}{2}}\right)\to 1-\alpha\]
where \(q_{\alpha}\) is the \(\alpha\)-quantile of standard Gaussian distribution \(\mathcal{N}(0,1)\). Hence
\[\left[g_{\pi_{n},\lambda_{n}}(x_{0})-\frac{\hat{\xi}(x_{0})}{\sqrt{n}}q_{1- \frac{\alpha}{2}},g_{\pi_{n},\lambda_{n}}(x_{0})+\frac{\hat{\xi}(x_{0})}{ \sqrt{n}}q_{1-\frac{\alpha}{2}}\right]\]
is an asymptotically exact \((1-\alpha)\)-level confidence interval of \(g_{\pi,\lambda_{0}}(x_{0})\).
## Appendix E Experiments: Details and More Results
### Experimental Details
We provide more details about our experimental implementation in Section 4.
Throughout our experiments, we use a two-layer fully-connected neural network as the base predictor based on the NTK specifications in Section C (Proposition C.3). However, we need to resolve the conflicts between the theoretical assumptions therein (e.g., continuous-time gradient flow) and practical implementation (e.g., discrete-time gradient descent), and at the same time, guarantee that the training procedure indeed operates in the NTK regime so that the first-order Taylor expansion (linearized neural network assumption) works well [61, 32, 75, 110]. Therefore, we will use the following specifications in all experiments.
1. The network adopts the NTK parametrization, and its parameters are randomly initialized using He initialization. The ReLU activation function is used in the network (a code sketch of this architecture follows this list).
2. The network has \(32\times n\) hidden neurons in its hidden layer where \(n\) is the size of the entire training data. The network should be sufficiently wide so that the NTK theory holds.
3. The network is trained using the regularized square loss (1) with regularization hyper-parameter \(\lambda_{n}\equiv 0.1^{10}\).
4. The network is trained using the (full) batch gradient descent (by feeding the whole dataset).
5. The learning rate is properly tuned based on the specific dataset, typically around \(0.5\) and between \(0.1\) and \(0.7\).
6. The training epochs are properly tuned based on the specific dataset, typically around 300-2000. The epochs should not be too small since the training needs to converge to a good solution, but it should also not be too large because we need to stipulate that the training procedure indeed operates in the NTK regime (which is an area around the initialization). Note that in practice, we cannot use the continuous-time gradient flow, and the network can never be infinitely wide. Therefore, with the fixed learning rate in gradient descent, we do not greatly increase the number of epochs so that the training procedure will likely find a solution that is not too far from the initialization.
7. We set \(m^{\prime}=4\) in the PNC-enhanced batching approach and \(R=4\) in the PNC-enhanced cheap bootstrap approach.
8. In DropoutUQ, the dropout rate is a crucial hyper-parameter. We find that the dropout rate has a significant impact on the interval width of DropoutUQ: a large dropout rate produces a wide interval since the dropout rate is linked to the variance of the prior Gaussian distribution [40]. Therefore, to make fair comparisons between different approaches, we adjust the dropout rate in DropoutUQ so that it yields an interval width similar to or larger than that of PNC-enhanced batching or PNC-enhanced cheap bootstrap.
9. All experiments are conducted on a single GeForce RTX 2080 Ti GPU.
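As referenced in item 1, the following sketch (ours, not taken from any released code) spells out the two-layer NTK-parameterized ReLU network used as the base predictor; `width` would be \(32\times n\) in our experiments.

```python
import numpy as np

def init_ntk_net(d, width, rng):
    """He / N(0,1) initialization under the NTK parameterization (items 1-2)."""
    return {"W1": rng.standard_normal((width, d)),
            "W2": rng.standard_normal((1, width))}

def ntk_forward(params, x):
    """Forward pass: f(x) = W2 @ (sqrt(c_sigma / width) * relu(W1 @ x)), c_sigma = 2 for ReLU."""
    width = params["W1"].shape[0]
    g = np.sqrt(2.0 / width) * np.maximum(params["W1"] @ x, 0.0)
    return float(params["W2"] @ g)
```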
### Additional Experiments
In this section, we present additional experimental results on more datasets. These experimental results further support our observations and claims in Section 4, demonstrating the robustness and effectiveness of our proposals.
Synthetic Datasets 2: \(X\sim\text{Unif}([0,0.2]^{d})\) and \(Y\sim\sum_{i=1}^{d}\exp(X^{(i)})+(X^{(i)})^{2}+\mathcal{N}(0,0.01^{2})\). The training set \(\mathcal{D}=\{(x_{i},y_{i}):i=1,...,n\}\) is formed by i.i.d. samples of \((X,Y)\) with sample size \(n\). We use \(x_{0}=(0.1,0.1,...,0.1)\) as the fixed test point in the confidence interval task. The rest of the setup is the same as in Section 4. For this dataset, the base network has \(128\times n\) hidden neurons in its hidden layer; more hidden neurons are used here to guarantee that the network is sufficiently wide and the training procedure operates in the NTK regime. The remaining implementation specifications are the same as in Section E.1. The results for constructing confidence intervals are displayed in Table 3, and the results for removing procedural variability are displayed in Table 4.
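For reproducibility, the data-generating process of synthetic dataset 2 can be written as follows (a sketch; the function name is ours):

```python
import numpy as np

def make_synthetic2(n, d, rng):
    """X ~ Unif([0, 0.2]^d); Y = sum_i exp(X_i) + X_i^2 + N(0, 0.01^2)."""
    X = rng.uniform(0.0, 0.2, size=(n, d))
    y = (np.exp(X) + X ** 2).sum(axis=1) + rng.normal(0.0, 0.01, size=n)
    return X, y
```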
## Appendix F Additional Methodology and Results
In semi-supervised learning, the unlabeled data appear in the training dataset and typically constitute the majority of it. Our original PNC algorithm only involves labeled data without utilizing unlabeled data. To handle semi-supervised learning, we propose a variant of the PNC algorithm that is especially useful for leveraging unlabeled data. We term it the Independent Procedural-Noise-Correcting (IPNC) algorithm, whose pseudo-code is given in Algorithm 4. Similarly to our work in Section 3.3, we also develop large-sample asymptotic results for the IPNC predictor in Theorem F.1. This theorem is then used to guide the development of IPNC-enhanced batching and IPNC-enhanced cheap bootstrap.

\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{PNC-enhanced batching} & \multicolumn{3}{c|}{PNC-enhanced cheap bootstrap} & \multicolumn{3}{c}{DropoutUQ} \\ & 95\%CI(CR/TW) & 90\%CI(CR/TW) & MP & 95\%CI(CR/TW) & 90\%CI(CR/TW) & MP & 95\%CI(CR/TW) & 90\%CI(CR/TW) & MP \\ \hline \((d=16)\) & & & & & & & & & \\ \(n=128\) & **0.95**/0.2201 & **0.90**/0.1627 & 17.858 & **0.95**/0.1905 & **0.90**/0.1462 & 17.830 & **0.95**/0.2227 & **0.95**/0.1919 & 17.834 \\ \(n=256\) & 0.925/0.1604 & **0.90**/0.1186 & 17.857 & **0.975**/0.1293 & **0.90**/0.0993 & 17.849 & **0.95**/0.1602 & **0.925**/0.1368 & 17.831 \\ \(n=512\) & **0.975**/0.1080 & **0.90**/0.0798 & 17.846 & **0.95**/0.1077 & **0.90**/0.0827 & 17.843 & 0.800/0.1088 & 0.775/0.0929 & 17.836 \\ \(n=1024\) & **0.95**/0.0855 & 0.875/0.0632 & 17.845 & **0.975**/0.0790 & **0.95**/0.0606 & 17.836 & 0.825/0.0950 & 0.800/0.0794 & 17.832 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Confidence interval construction on synthetic datasets 2 with different data sizes \(n=128,256,512,1024\) and dimension \(d=16\). The CR results that attain the desired confidence level \(95\%/90\%\) are in **bold**.
In IPNC, we use two independent datasets, one with labeled data to train the base network and one with possibly unlabeled data to train the artificial-label-trained network. This independent training scheme guarantees the independence between the two networks in IPNC.
**Input:** Training dataset \(\mathcal{D}_{1,2}=\mathcal{D}_{1}\cup\mathcal{D}_{2}\) with size \(n_{1}+n_{2}\), which is divided into two disjoint (independent) datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) where \(|\mathcal{D}_{1}|=n_{1}\) and \(|\mathcal{D}_{2}|=n_{2}\). \(\mathcal{D}_{1}\) is labeled while \(\mathcal{D}_{2}\) may not be labeled.
**Procedure:**
**1.** Draw \(\theta^{b}\sim P_{\theta^{b}}=\mathcal{N}(0,\mathbf{I}_{p})\) under NTK parameterization. Train a standard base network with data \(\mathcal{D}_{1}\) and the initialization parameters \(\theta^{b}\), which outputs \(\hat{h}_{n_{1},\theta^{b}}(\cdot)\).
**2.** For each \(x_{i}\in\mathcal{D}_{2}\), generate its procedural-shifted label \(\bar{s}(x_{i})=\mathbb{E}_{P_{\theta^{b}}}[s_{\theta^{b}}(x_{i})]\). Train an auxiliary neural network with data \(\mathcal{D}^{\prime}_{2}=\{(x_{i},\bar{s}(x_{i})):x_{i}\in\mathcal{D}_{2}\}\) and the initialization parameters \(\theta^{b}\) (the same one as in Step 1.), which outputs \(\hat{\phi}^{\prime}_{n_{2},\theta^{b}}(\cdot)\). Subtracting \(\bar{s}\), we obtain \(\hat{\phi}_{n_{2},\theta^{b}}=\hat{\phi}^{\prime}_{n_{2},\theta^{b}}-\bar{s}\) in (4).
**Output:** At point \(x\), output \(\hat{h}_{n_{1},\theta^{b}}(x)-\hat{\phi}_{n_{2},\theta^{b}}(x)\).
**Algorithm 4** Independent Procedural-Noise-Correcting (IPNC) Predictor
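A sketch of Algorithm 4 in code, reusing the hypothetical `ntk_predict` helper sketched in Appendix C; the two fits share the same initial function `s_init`, i.e., the same draw \(\theta^{b}\), while `X2` may be unlabeled.

```python
import numpy as np

def ipnc_predict(x0, X1, y1, X2, s_init, s_bar, K, lam):
    """IPNC predictor: base fit on labeled D1, auxiliary fit on (possibly unlabeled) D2."""
    h_hat = ntk_predict(x0, X1, y1, s_init, K, lam)              # Step 1: base network
    y_bar = np.array([s_bar(a) for a in X2])                     # procedural-shifted labels
    phi_prime = ntk_predict(x0, X2, y_bar, s_init, K, lam)       # Step 2: auxiliary network
    return h_hat - (phi_prime - s_bar(x0))                       # h_hat - phi_hat
```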
To derive the results for IPNC algorithm, we start with the following decomposition:
\[\hat{h}_{n,\theta^{b}}(x)-h^{*}(x)=\hat{h}_{n,\theta^{b}}(x)-\hat{h}^{*}_{n}( x)+\hat{h}^{*}_{n}(x)-h^{*}(x).\]
For the first part \(\hat{\phi}_{n,\theta^{b}}(x)=\hat{h}_{n,\theta^{b}}(x)-\hat{h}^{*}_{n}(x)\) in (4), as we discussed in Section 3.1, it is the solution to the following empirical risk minimization problem
\[\hat{\phi}_{n,\theta^{b}}(\cdot)=s_{\theta^{b}}(\cdot)-\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\frac{1}{n}\sum_{i=1}^{n}(\bar{s}(x_{i})-s_{\theta^{b}}(x_{i})-g(x_{i}))^{2}+\lambda_{n}\|g\|_{\tilde{\mathcal{H}}}^{2} \tag{24}\]
Its corresponding population risk minimization problem is as follows:
\[\phi_{\theta^{b}}(\cdot):=s_{\theta^{b}}(\cdot)-\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\mathbb{E}_{\pi}[(\bar{s}(X)-s_{\theta^{b}}(X)-g(X))^{2}]+\lambda_{0}\|g\|_{\tilde{\mathcal{H}}}^{2}. \tag{25}\]
Note that \(\phi_{\theta^{b}}(\cdot)\) is data-free but depends on the random initialization \(\theta^{b}\). Recall that
\[\hat{h}^{*}_{n}(\cdot)=\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\frac{1}{n}\sum_{ i=1}^{n}[(y_{i}-\bar{s}(x_{i})-g(x_{i}))^{2}]+\lambda_{n}\|g\|_{\tilde{\mathcal{H}}}^ {2}\]
and
\[h^{*}(\cdot)=\bar{s}(\cdot)+\min_{g\in\mathcal{H}}\mathbb{E}_{\pi}[(Y-\bar{s}( X)-g(X))^{2}]+\lambda_{0}\|g\|_{\tilde{\mathcal{H}}}^{2}.\]
\begin{table}
\begin{tabular}{c|c c c c} \hline MSE & One base network & PNC predictor & Deep ensemble (5 networks) & Deep ensemble (2 networks) \\ \hline \(d=16,n=128\) & \((1.70\pm 0.37)\times 10^{-2}\) & \(\mathbf{(1.33\pm 0.19)\times 10^{-2}}\) & \((1.41\pm 0.29)\times 10^{-2}\) & \((1.57\pm 0.32)\times 10^{-2}\) \\ \(d=16,n=256\) & \((1.07\pm 0.16)\times 10^{-2}\) & \(\mathbf{(8.51\pm 1.67)\times 10^{-3}}\) & \((8.54\pm 1.71)\times 10^{-3}\) & \((9.75\pm 1.70)\times 10^{-3}\) \\ \(d=16,n=512\) & \((9.83\pm 1.74)\times 10^{-3}\) & \(\mathbf{(6.17\pm 0.93)\times 10^{-3}}\) & \((6.72\pm 0.91)\times 10^{-3}\) & \((7.51\pm 1.03)\times 10^{-3}\) \\ \(d=16,n=1024\) & \((8.49\pm 1.20)\times 10^{-3}\) & \(\mathbf{(4.90\pm 0.36)\times 10^{-3}}\) & \((5.47\pm 0.53)\times 10^{-3}\) & \((6.42\pm 0.74)\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 4: Reducing procedural variability on synthetic datasets 2 with different data sizes \(n=128,256,512,1024\) and dimension \(d=16\). The best results are in **bold**.
Proposition B.2 shows that \(\phi_{\theta^{b}}\) is given by
\[\phi_{\theta^{b}}(\cdot)=s_{\theta^{b}}(\cdot)-\bar{s}(\cdot)+(\mathcal{O}_{K}+\lambda_{0}I)^{-1}\mathcal{O}_{K}(\bar{s}-s_{\theta^{b}}).\]
For the second part \(\hat{h}_{n}^{*}(x)-h^{*}(x)\), we have (4) and (6). Therefore we have
\[\hat{h}_{n,\theta^{b}}(\cdot)-h^{*}(\cdot)-\phi_{\theta^{b}}(\cdot)=\left(\hat{\phi}_{n,\theta^{b}}(\cdot)-\phi_{\theta^{b}}(\cdot)\right)+\left(\hat{h}_{n}^{*}(\cdot)-h^{*}(\cdot)\right),\]
so that the error of the PNC-type predictor splits into the two parts above; in IPNC these parts are estimated from the independent datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), which leads to the following result.
**Theorem F.1** (Large-sample asymptotics of the IPNC predictor).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-s_{\theta^{b}}(X)\) or \(\bar{s}(X)-s_{\theta^{b}}(X)\). Input the training data \(\mathcal{D}_{1,2}\) into Algorithm 4. We have_
\[\sqrt{n_{1}}\left(\hat{h}_{n_{1},\theta^{b}}(x)-h^{*}(x)-\phi_{ \theta^{b}}(x)\right)\Rightarrow\mathcal{N}(0,\beta_{2,\theta^{b}}^{2}(x)), \tag{32}\] \[\sqrt{n_{2}}\left(\hat{\phi}_{n_{2},\theta^{b}}(x)-\phi_{\theta^{ b}}(x)\right)\Rightarrow\mathcal{N}(0,\beta_{3,\theta^{b}}^{2}(x)), \tag{33}\]
_where_
\[\beta_{2,\theta^{b}}^{2}(x)=\int_{z}IF^{2}(z;T_{2},\pi)(x)d\pi(z), \;\beta_{3,\theta^{b}}^{2}(x)=\int_{z}IF^{2}(z;T_{3},\pi)(x)d\pi(z). \tag{34}\]
_Let \(n_{2}=\zeta n_{1}\) where \(\zeta>0\) is a fixed constant. An asymptotically (in the sense of \(n_{1}\rightarrow\infty\)) exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\) is_
\[\left[\hat{h}_{n_{1},\theta^{b}}(x)-\hat{\phi}_{n_{2},\theta^{b}}( x)-\left(\frac{\beta_{2,\theta^{b}}(x)}{\sqrt{n_{1}}}+\frac{\beta_{3,\theta^{b}}(x) }{\sqrt{n_{2}}}\right)q_{1-\frac{\alpha}{2}},\right.\] \[\left.\hat{h}_{n_{1},\theta^{b}}(x)-\hat{\phi}_{n_{2},\theta^{b}}( x)+\left(\frac{\beta_{2,\theta^{b}}(x)}{\sqrt{n_{1}}}+\frac{\beta_{3,\theta^{b}}(x) }{\sqrt{n_{2}}}\right)q_{1-\frac{\alpha}{2}}\right]\]
_where \(q_{\alpha}\) is the \(\alpha\)-quantile of the standard Gaussian distribution._
Compared with Theorem 3.5, the major difference is that \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) are independent, which results in the independence between (32) and (33), and thus we can directly sum these two equations. Based on the above theorem, the IPNC-enhanced batching or IPNC-enhanced cheap bootstrap can be similarly derived, as shown below.
**IPNC-enhanced batching.** See Algorithm 5.
**Input:** Training dataset \(\mathcal{D}_{1,2}=\mathcal{D}_{1}\cup\mathcal{D}_{2}\) with size \(n_{1}+n_{2}\), which is divided into two disjoint (independent) datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) where \(|\mathcal{D}_{1}|=n_{1}\) and \(|\mathcal{D}_{2}|=n_{2}\). \(\mathcal{D}_{1}\) is labeled while \(\mathcal{D}_{2}\) may not be labeled. The number of batches \(m^{\prime}\geq 2\).
**Procedure: 1.** Split \(\mathcal{D}_{1}\) into \(m^{\prime}\) batches \(\mathcal{D}_{1,1},...,\mathcal{D}_{1,m^{\prime}}\). Split \(\mathcal{D}_{2}\) into \(m^{\prime}\) batches \(\mathcal{D}_{2,1},...,\mathcal{D}_{2,m^{\prime}}\).
**2.** For each \(j\in[m^{\prime}]\), input \(\mathcal{D}_{1,j}\cup\mathcal{D}_{2,j}\) in Algorithm 4 to output \(\hat{h}_{n_{1}^{\prime},\theta^{b}}^{j}(x)-\hat{\phi}_{n_{2}^{\prime},\theta^{b}}^{j}(x)\). Note that \(n_{1}^{\prime}=\frac{n_{1}}{m^{\prime}}\) and \(n_{2}^{\prime}=\frac{n_{2}}{m^{\prime}}\).
**3.** Compute \(\psi_{B}(x)=\frac{1}{m^{\prime}}\sum_{j=1}^{m^{\prime}}\left(\hat{h}_{n_{1}^{ \prime},\theta^{b}}^{j}(x)-\hat{\phi}_{n_{2}^{\prime},\theta^{b}}^{j}(x)\right)\),
and \(S_{B}(x)^{2}=\frac{1}{m^{\prime}-1}\sum_{j=1}^{m^{\prime}}\left(\hat{h}_{n_{1}^ {\prime},\theta^{b}}^{j}(x)-\hat{\phi}_{n_{2}^{\prime},\theta^{b}}^{j}(x)- \psi_{B}(x)\right)^{2}\).
**Output:** At point \(x\), output \(\psi_{B}(x)\) and \(S_{B}(x)^{2}\).
**Algorithm 5** IPNC-Enhanced Batching
**Theorem F.2** (Exact coverage of IPNC-enhanced batching confidence interval).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-s_{\theta^{b}}(X)\) or \(\bar{s}(X)-s_{\theta^{b}}(X)\). Let \(n_{2}=\zeta n_{1}\) where \(\zeta>0\) is a fixed constant. Then an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\) is \([\psi_{B}(x)-\frac{S_{B}(x)}{\sqrt{m^{\prime}}}q_{1-\frac{\alpha}{2}},\psi_{B} (x)+\frac{S_{B}(x)}{\sqrt{m^{\prime}}}q_{1-\frac{\alpha}{2}}]\) where \(q_{\alpha}\) is the \(\alpha\)-quantile of the \(t\) distribution \(t_{m^{\prime}-1}\) with degree of freedom \(m^{\prime}-1\)._
The proof is similar to Theorem 3.6.
**IPNC-enhanced cheap bootstrap.** See Algorithm 6.
**Theorem F.3** (Exact coverage of IPNC-enhanced cheap bootstrap confidence interval).: _Suppose that Assumption C.2 holds. Suppose that Assumptions B.3 and B.4 hold when \(Y\) is replaced by \(Y-s_{\theta^{b}}(X)\) or \(\bar{s}(X)-s_{\theta^{b}}(X)\). Let \(n_{2}=\zeta n_{1}\) where \(\zeta>0\) is a fixed constant. Then an asymptotically exact \((1-\alpha)\)-level confidence interval of \(h^{*}(x)\) is \([\psi_{C}(x)-S_{C}(x)q_{1-\frac{\alpha}{2}},\psi_{C}(x)+S_{C}(x)q_{1-\frac{ \alpha}{2}}]\) where \(q_{\alpha}\) is the \(\alpha\)-quantile of the \(t\) distribution \(t_{R}\) with degree of freedom \(R\)._
The proof is similar to Theorem 3.7.
**Input:** Training dataset \(\mathcal{D}_{1,2}=\mathcal{D}_{1}\cup\mathcal{D}_{2}\) with size \(n_{1}+n_{2}\), which is divided into two disjoint (independent) datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) where \(|\mathcal{D}_{1}|=n_{1}\) and \(|\mathcal{D}_{2}|=n_{2}\). \(\mathcal{D}_{1}\) is labeled while \(\mathcal{D}_{2}\) may not be labeled. The number of replications \(R\geq 1\).
**Procedure: 1.** Input \(\mathcal{D}_{1}\cup\mathcal{D}_{2}\) in Algorithm 4 to output \(\hat{h}_{n_{1},\theta^{b}}(x)-\hat{\phi}_{n_{2},\theta^{b}}(x)\).
**2.** For each replication \(b\in[R]\), resample \(\mathcal{D}_{1}\), i.e., independently and uniformly sample with replacement from \(\{(x_{1},y_{1}),...,(x_{n_{1}},y_{n_{1}})\}\)\(n_{1}\) times to obtain \(\mathcal{D}_{1}^{*b}=\{(x_{1}^{*b},y_{1}^{*b}),...,(x_{n_{1}}^{*b},y_{n_{1}}^{* b})\}\). Similarly, resample \(\mathcal{D}_{2}\) without labels to obtain \(\mathcal{D}_{2}^{*b}=\{x_{n_{1}+1}^{*b},...,x_{n_{1}+n_{2}}^{*b}\}\). Input \(\mathcal{D}_{1}^{*b}\cup\mathcal{D}_{2}^{*b}\) in Algorithm 4 to output \(\hat{h}_{n_{1},\theta^{b}}^{*b}(x)-\hat{\phi}_{n_{2},\theta^{b}}^{*b}(x)\).
**3.** Compute \(\psi_{C}(x)=\hat{h}_{n_{1},\theta^{b}}(x)-\hat{\phi}_{n_{2},\theta^{b}}(x)\),
and \(S_{C}(x)^{2}=\frac{1}{R}\sum_{b=1}^{R}\left(\hat{h}_{n_{1},\theta^{b}}^{*b}(x )-\hat{\phi}_{n_{2},\theta^{b}}^{*b}(x)-\psi_{C}(x)\right)^{2}\).
**Output:** At point \(x\), output \(\psi_{C}(x)\) and \(S_{C}(x)^{2}\).
**Algorithm 6** IPNC-enhanced Cheap Bootstrap
2307.06732 | Learning fixed points of recurrent neural networks by reparameterizing the network model | In computational neuroscience, fixed points of recurrent neural networks are commonly used to model neural responses to static or slowly changing stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. A natural approach is to use gradient descent on the Euclidean space of synaptic weights. We show that this approach can lead to poor learning performance due, in part, to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We show that these learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights. | Vicky Zhu, Robert Rosenbaum | 2023-07-13T13:09:11Z | http://arxiv.org/abs/2307.06732v2

# Learning fixed points of recurrent neural networks by reparameterizing the network model
###### Abstract
In computational neuroscience, fixed points of recurrent neural networks are commonly used to model neural responses to static or slowly changing stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. A natural approach is to use gradient descent on the Euclidean space of synaptic weights. We show that this approach can lead to poor learning performance due, in part, to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We show that these learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
## 1 Introduction
Recurrent neural network models (RNNs) are widely used in machine learning and in computational neuroscience. In machine learning, they are typically used to learn dynamical responses to time series inputs. In computational neuroscience, RNNs are sometimes used to model dynamical responses of neurons to dynamical stimuli [1, 2], but are also often used to model stationary, fixed point neural responses to static inputs. For example, many phenomena observed in visual cortical circuits, _e.g._, surround suppression, are widely modeled by stationary states of computational models in which recurrent connections model lateral, intralaminar connectivity [3, 4, 5, 6, 7, 8].
A natural approach to learning fixed points of RNNs is to use direct gradient descent on the recurrent weight matrix after the network has converged toward a fixed point. A direct application of this approach, called "truncated backpropagation through time," can be computationally expensive because it requires the application of backpropagation on a computational graph unrolled over many time steps. Moreover, backpropagation through time is difficult to implement or approximate with biologically plausible models of learning [9].
Alternative approaches use the implicit equation for fixed points to derive the exact gradients of the loss with respect to the weight matrix at the fixed point, or some approximations to this quantity [10, 11, 12, 13, 14]. These approaches can also be computationally expensive and difficult to implement in biologically plausible models because the gradient derived from the implicit equation involves matrix inverses, which either need to be computed directly or approximated using, for example, iterative methods. In this work, we additionally show that gradient descent on the recurrent weight matrix can lead to poor learning performance because the associated loss landscape has singularities and implicit biases that make it poorly conditioned for gradient-based learning.
When mentioning "gradient descent" above, we were implicitly referring to the Euclidean gradient on weights, which is standard practice. However, several authors have argued that the default use of the Euclidean gradient in gradient descent is not necessarily optimal for studying artificial or biological learning. In machine learning applications, non-Euclidean gradients informed by information theory, such as the natural gradient, are superior in some settings [15, 16, 17]. In computational neuroscience, the use of a Euclidean gradient implicitly assumes a specific choice of units in a biological model and, more generally, assumes a specific parameterization of the model [18, 19, 20]. Different units or different parameterizations of a biological model will yield different gradients and ultimately different learning dynamics. Hence, gradient descent using the Euclidean gradient of the loss with respect to synaptic weights under a specific choice of parameterization might not capture learning dynamics or learned representations in biological neuronal networks.
In this work, we derive two new learning rules for fixed points of recurrent neural networks by reparameterizing the network model. The first learning rule can be viewed as steepest descent with respect to a non-Euclidean metric. The second rule approximates the first one, but it is more efficient and it can be interpreted as gradient descent with a non-Euclidean gradient. We demonstrate empirically that these learning rules exhibit more robust and efficient learning dynamics than standard, Euclidean gradient descent. We also find that the parameter updates produced by these rules point in substantially different directions in parameter space than the negative Euclidean gradient. In addition to providing new, robust learning rules for learning fixed points in recurrent networks, our results question the common, implicit assumption in computational neuroscience that learning should follow the negative Euclidean gradient of synaptic weights.
Code to apply the proposed learning rules and produce all figures in the manuscript can be found at
[https://github.com/RobertRosenbaum/LearningFixedPointsInRNNs](https://github.com/RobertRosenbaum/LearningFixedPointsInRNNs)
## 2 Background and theory
### Model description
We consider a recurrent neural network (RNN) model of the form [21, 22, 1, 2]
\[\tau\frac{d\mathbf{r}}{dt}=-\mathbf{r}+f(W\mathbf{r}+\mathbf{x}) \tag{1}\]
where \(\mathbf{r}(t)\in\mathbb{R}^{N}\) is a vector of model firing rates, \(\tau>0\) is a time constant, \(W\in\mathbb{R}^{N\times N}\) is a recurrent connectivity matrix, \(\mathbf{x}\in\mathbb{R}^{N}\) models external input to the network, and \(f:\mathbb{R}\rightarrow\mathbb{R}\) is a non-negative, non-decreasing activation function or "f-I curve", which is applied pointwise. For a time-constant input, \(\mathbf{x}(t)=\mathbf{x}\), fixed point firing rates satisfy
\[\mathbf{r}=f(W\mathbf{r}+\mathbf{x}). \tag{2}\]
The stability of fixed point firing rates from Eq. (1) is determined by the eigenvalues of the Jacobian matrix,
\[\mathcal{J}=\frac{1}{\tau}[-I+GW] \tag{3}\]
where \(G=\text{diag}(f^{\prime}(\mathbf{z}))\) is a diagonal matrix with entries
\[G_{jj}=f^{\prime}(\mathbf{z}_{j})\]
and \(\mathbf{z}=[W\mathbf{r}+\mathbf{x}]\) is the vector of neural inputs or pre-activations evaluated at their fixed points. Specifically, a fixed point is hyperbolically stable if all eigenvalues of \(\mathcal{J}\) have negative real part.
We can alternatively consider a recurrent neural network model in discrete time of the form
\[\mathbf{r}(n+1)=f(W\mathbf{r}(n)+\mathbf{x}(n)). \tag{4}\]
Eq. (1) is more common in computational neuroscience while Eq. (4) is more common in machine learning, but they are closely related. Eq. (4) has the same fixed points as Eq. (2), but hyperbolic stability is obtained when eigenvalues of \(GW\) have magnitude less than \(1\). Hence, if a fixed point is stable for Eq. (4), it is also stable for Eq. (1), but the converse is not true. In this work, we focus on the continuous system in Eq. (1), but our approach and learning rules can also be applied to the discrete system in Eq. (4).
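For concreteness, the following is a minimal NumPy sketch (not from the paper's repository) of how one can obtain a fixed point of Eq. (1) by forward Euler iteration and test its hyperbolic stability through the Jacobian in Eq. (3); the \(\tanh\) activation and all constants are illustrative choices.

```python
import numpy as np

def fixed_point(W, x, f=np.tanh, tau=1.0, dt=0.1, steps=5000):
    """Forward Euler integration of tau dr/dt = -r + f(W r + x) from r = 0."""
    r = np.zeros(W.shape[0])
    for _ in range(steps):
        r = r + (dt / tau) * (-r + f(W @ r + x))
    return r

def is_stable(W, r, x, fprime=lambda z: 1.0 - np.tanh(z) ** 2, tau=1.0):
    """Hyperbolic stability: all eigenvalues of J = (-I + G W)/tau have negative real part."""
    G = np.diag(fprime(W @ r + x))
    J = (-np.eye(W.shape[0]) + G @ W) / tau
    return bool(np.max(np.linalg.eigvals(J).real) < 0)

# Usage: random weights with spectral radius near 0.5 yield a stable fixed point.
rng = np.random.default_rng(0)
N = 50
W = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)
x = 0.1 * rng.standard_normal(N)
r = fixed_point(W, x)
print(np.max(np.abs(r - np.tanh(W @ r + x))), is_stable(W, r, x))
```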
In machine learning applications, RNNs are often used to learn mappings from input time series, \(\mathbf{x}(t)\), to output time series, \(\mathbf{r}(t)\), and they are often trained using backpropagation through time. In computational neuroscience, RNNs of the form in Eq. (1) are often studied for their fixed point properties, for example to study orientation selectivity and surround suppression among other phenomena [3, 4, 5, 6, 7, 8], but the weights in these studies are often chosen by hand, not learned. As a combination of these perspectives, we are interested in _learning_ mappings from static inputs, \(\mathbf{x}(t)=\mathbf{x}\), to their associated fixed points, \(\mathbf{r}\), given by Eq. (2).
Specifically, consider a supervised learning task with a cost function of the form
\[J(W)=\frac{1}{m}\sum_{i=1}^{m}L(\mathbf{r}^{i},\mathbf{y}^{i})\]
where \(\mathbf{x}^{i}\) is an input, \(\mathbf{y}^{i}\) is a label, \(L\) is a loss function, and \(\mathbf{r}^{i}=f(W\mathbf{r}^{i}+\mathbf{x}^{i})\) is the fixed point that the network converges to under input \(\mathbf{x}^{i}\). This learning task presents unique challenges because fixed points are defined _implicitly_ by Eq. (2) instead of explicitly as a function of \(\mathbf{x}^{i}\), and also because we only wish to learn _stable_ fixed points. The data set \(\{(\mathbf{x}^{i},\mathbf{y}^{i})\}_{i}\) can be the entire data set in the case of full-batch learning, or a mini-batch in the case of stochastic learning. Updates to \(W\) during learning can be written as
\[W\gets W+\Delta W\]
where
\[\Delta W=\frac{1}{m}\sum_{i=1}^{m}\Delta W^{i}\]
Here, \(\Delta W^{i}\) is an update rule that can depend on \(\mathbf{x}^{i}\), \(\mathbf{y}^{i}\), \(W\), and \(\mathbf{r}^{i}\). Below, we derive and compare three different update rules, \(\Delta W^{i}_{1}\), \(\Delta W^{i}_{2}\), and \(\Delta W^{i}_{3}\), for minimizing \(J\).
### Gradient descent on the recurrent weight matrix.
The first learning rule we consider is direct gradient descent of the loss surface with respect to \(W\) using the Euclidean gradient,
\[\Delta W^{i}_{1}=-\eta_{W}\nabla_{W}L(\mathbf{r}^{i},\mathbf{y}^{i}) \tag{5}\]
where \(\eta_{W}>0\) is a learning rate and \(\nabla_{W}\) refers to the standard, Euclidean gradient with respect to \(W\). If the fixed point, \(\mathbf{r}^{i}\), is hyperbolically stable, then the Jacobian matrix from Eq. (3) has eigenvalues with negative real part, so \(I-G^{i}W=-\tau\mathcal{J}\) is invertible and we have (see Appendix A.1)
\[\Delta W^{i}_{1}=-\eta_{W}G^{i}\left[I-G^{i}W\right]^{-T}\left(\nabla_{\mathbf{r}^ {i}}L\right)\left(\mathbf{r}^{i}\right)^{T}. \tag{6}\]
where \(G^{i}=\text{diag}(f^{\prime}(\mathbf{z}^{i}))\) is evaluated at the fixed point and \(U^{-T}\) denotes the inverse transpose of a matrix, \(U\). If \(G^{i}_{jj}\neq 0\) for all \(j\), then \(G^{i}\) is invertible so Eq. (6) can be simplified to get
\[\Delta W^{i}_{1}=-\eta_{W}\left[\left[G^{i}\right]^{-1}-W\right]^{-T}\left( \nabla_{\mathbf{r}^{i}}L\right)\left(\mathbf{r}^{i}\right)^{T} \tag{7}\]
Evaluating Eqs. (6) and (7) directly is computationally expensive because they require the calculation of a matrix inverse. Truncated backpropagation through time and other methods provide alternative approaches to approximating \(\Delta W_{1}\)[10, 11, 12, 13, 14], but note that truncated backpropagation through time requires the storage of a large computational graph, making it memory inefficient. Moreover, we show in examples below that using \(\Delta W_{1}\) to update weights can lead to poor learning performance. We next propose an alternative update rule based on a nonlinear reparameterization of the model.
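To make the cost of Eq. (6) concrete, here is a hedged NumPy sketch of a single update \(\Delta W_{1}^{i}\); the explicit inverse transpose is replaced by one linear solve, which is still cubic in \(N\), and `grad_r_L` stands for whatever \(\nabla_{\mathbf{r}^{i}}L\) is under the chosen loss. The function name and learning rate are illustrative.

```python
import numpy as np

def delta_W1(W, r, grad_r_L, G_diag, eta_W=1e-2):
    """One Euclidean-gradient update, Eq. (6): -eta G [I - G W]^{-T} (grad_r L) r^T."""
    N = W.shape[0]
    M = np.eye(N) - G_diag[:, None] * W        # I - G W (row scaling implements G W)
    v = np.linalg.solve(M.T, grad_r_L)         # [I - G W]^{-T} grad_r_L via one solve
    return -eta_W * np.outer(G_diag * v, r)    # -eta * G v r^T
```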
### A new learning rule from reparameterizing the RNN
To motivate the reparameterized model, first consider the special case of a linear network defined by
\[f(z)=z\]
In this case, \(G=I\) is the identity matrix and Eq. (2) for the fixed point can be written as
\[\mathbf{r}=[I-W]^{-1}\mathbf{x}.\]
This is a linear model in the sense that \(\mathbf{r}\) is a linear function of \(\mathbf{x}\), but the nonlinear dependence of the cost on \(W\) (especially a nonlinearity involving matrix inverses) produces complicated and computationally expensive update from Eq. (6).
Instead of performing gradient descent with respect to \(W\), we propose instead to first apply a nonlinear change of coordinates to obtain new parameters,
\[A=F(W):=[I-W]^{-1}. \tag{8}\]
If we parameterize the model in terms of \(A\) instead of \(W\), then fixed points satisfy the standard linear model
\[\mathbf{r}=A\mathbf{x} \tag{9}\]
which is linear in the input, \(\mathbf{x}\), _and_ the parameters, \(A\). Gradient descent of the loss with respect to \(A\) gives the standard update rule for a linear, single-layer neural network
\[\begin{split}\Delta A^{i}&=-\eta_{A}\nabla_{A}L( \boldsymbol{r}^{i},\boldsymbol{y}^{i})\\ &=-\eta_{A}\left(\nabla_{\boldsymbol{r}^{i}}L\right)\left( \boldsymbol{x}^{i}\right)^{T}\\ &=-\eta_{A}\left(\nabla_{\boldsymbol{r}^{i}}L\right)\left( \boldsymbol{r}^{i}\right)^{T}A^{-T}\end{split} \tag{10}\]
where we distinguish between the learning rate, \(\eta_{A}\), used for the reparameterized model and the learning rate, \(\eta_{W}\), used for the original parameterization. Eq. (10) gives a gradient-based update to the new parameter, \(A\), but our original RNN model is parameterized by \(W\). To update our original parameters, we need to change the \(\Delta A\) from Eq. (10) back to \(W\) coordinates. To do this, note that we want to find a value for \(\Delta W\) that satisfies \(A+\Delta A=F(W+\Delta W)\) whenever \(A=F(W)\) and \(\Delta A\) comes from Eq. (10). In other words, the update to \(W\) is given by
\[\begin{split}\Delta W_{2}^{i}&=F^{-1}(F(W)+\Delta A ^{i}))-W\\ &=-\left[[I-W]^{-1}-\eta_{A}\left(\nabla_{\boldsymbol{r}^{i}}L \right)\left(\boldsymbol{r}^{i}\right)^{T}[I-W]^{T}\right]^{-1}+I-W\end{split} \tag{11}\]
where \(F^{-1}(A)=I-A^{-1}\) is the inverse of \(F(W)\).
To summarize this approach, if Eq. (11) is used to update parameters, \(W\), under the linear fixed point model, \(\mathbf{r}=f(W\mathbf{r}+\mathbf{x})\) with \(f(z)=z\), then the learning dynamics will be identical to standard linear regression of parameters, \(A\), on the model \(\mathbf{r}=A\mathbf{x}\).
Since gradient descent with respect to \(A\) in Eq. (10) represents steepest descent of the loss surface in the new parameter space of \(A\) and since Eq. (11) gives the same updates in the original
parameter space of \(W\), the learning rule in Eq. (11) corresponds to steepest descent of the cost, \(J(W)\), using a non-Euclidean metric defined by
\[d(W_{1},W_{2})=\|F(W_{1})-F(W_{2})\| \tag{12}\]
where \(\|B\|=\sqrt{\text{Tr}(BB^{T})}\) is the Euclidean or Frobenius norm on matrices. Note that \(d(\cdot,\cdot)\) is a metric when restricted to the space of all matrices, \(W\), for which \(I-W\) is invertible. Hence, if we restrict to \(W\) that yield hyperbolically stable fixed points, Eq. (11) corresponds to steepest descent with respect to a non-Euclidean metric. However, the metric \(d\) is not necessarily generated by an inner product, so Eq. (11) cannot be called _gradient_ descent since the notion of a gradient requires a metric induced by an inner product. In Section 2.4, we show that an approximation to \(\Delta W_{2}^{i}\) produces gradient descent with a non-Euclidean gradient. Moreover, in Section 3, we present examples showing that \(\Delta W_{2}\) is better suited to learning fixed points than the standard approach to gradient descent represented by \(\Delta W_{1}\). But first, we need to generalize the derivation of \(\Delta W_{2}\) to arbitrary activation functions.
Eq. (11) was derived for the specific case \(f(z)=z\), but we can extend it to a model with arbitrary \(f(z)\). To do so, we first linearize Eq. (2) to obtain a linearized fixed point equation,
\[\mathbf{r}=G[W\mathbf{r}+\mathbf{x}] \tag{13}\]
which has a closed form solution given by
\[\mathbf{r}=[I-GW]^{-1}G\mathbf{x}. \tag{14}\]
Note, again, that \(I-GW\) is invertible whenever \(\mathbf{r}\) is a hyperbolically stable fixed point.
Given Eq. (14), a natural choice of new parameters would be
\[A=[I-GW]^{-1}G, \tag{15}\]
because it would again produce a (linearized) model of the form \(\mathbf{r}=A\mathbf{x}\). Note that under the linear model \(f(z)=z\), we have \(G=I\), and recover the parameterization in Eq. (8), so Eq. (15) is a generalization of Eq. (8). However, the update rule to \(W\) derived from gradient descent on \(A\) from the parameterization in Eq. (15) is susceptible to blowup or singularities when some values of \(G_{jj}=f^{\prime}(\mathbf{z}_{j})\) become small in magnitude or zero. To see why this is the case, suppose \(G_{jj}=\mathcal{O}(\epsilon)\) is small for some \(j\) and consider an update to \(W\) of the form \(W=W+\Delta W\). Then the resulting update to \(\mathbf{r}_{j}\) is, to linear order in \(\epsilon\),
\[\Delta\mathbf{r}_{j} =\sum_{k}G_{jj}\Delta W_{jk}r_{k}\] \[=\mathcal{O}(\epsilon\Delta W).\]
On the other hand, an update of the form \(A=A+\Delta A\) gives
\[\Delta\mathbf{r}_{j} =\sum_{k}\Delta A_{jk}\mathbf{r}_{k}\] \[=\mathcal{O}(\Delta A).\]
Hence, if we want \(\Delta W\) to produce the same change, \(\Delta\mathbf{r}\), produced by \(\Delta A\), then we must have \(\Delta W\sim\mathcal{O}(\Delta A/\epsilon)\). This will cause large changes to \(W\) in response to inputs for which \(G\) has small elements at the fixed point, ultimately undercutting the model's performance (see Appendix A.2 for more details). In the extreme case that \(G_{jj}=0\) for some \(j\), updates to \(W\) do not impact \(\mathbf{r}\) (_i.e._, \(\Delta\mathbf{r}_{j}=0\) for any \(\Delta W\) under the linear approximation \(\mathbf{r}=G[W\mathbf{r}+\mathbf{x}]\)), so we cannot derive a \(\Delta W\) to match a given \(\Delta A\), _i.e._, the reparameterization in Eq. (15) is ill-posed.
To circumvent these problems, we instead take the parameterization
\[A=F(W):=[G-GWG]^{-1} \tag{16}\]
in place of Eq. (15). Under the linearized fixed point equation in Eq. (13), we then obtain the linear model
\[\mathbf{r}=GAG\mathbf{x}\]
which generalizes Eq. (9). This equation is linear in \(\mathbf{x}\) and in the new parameters, \(A\). Hence, learning \(A\) is again a linear regression problem, albeit with the extra \(G\) terms. These extra \(G\) terms prevent singularities and blowup when \(G_{jj}\) terms become small or zero because \(\Delta r_{j}=\mathcal{O}(\epsilon\Delta A)\) is small whenever we make an update of the form \(A=A+\Delta A\) with \(G_{jj}=\mathcal{O}(\epsilon)\) small. Under the simple linear model \(f(z)=z\), we have \(G=I\), and recover the parameterization in Eq. (8), so that Eq. (16) (like Eq. (15)) is a generalization of Eq. (8).
Note that each input (_i.e._, each \(i\)) will potentially have a different gain matrix, \(G^{i}=\text{diag}(f^{\prime}(\mathbf{z}^{\mathbf{i}}))\), so each sample will have a potentially different value of \(A^{i}=[G^{i}-G^{i}WG^{i}]^{-1}\) as well. The gradient-based update of the loss, \(L(\mathbf{r}^{i},\mathbf{y}^{i})\), with respect to \(A^{i}\) for each sample becomes
\[\Delta A^{i} =-\eta_{A}\nabla_{A^{i}}L(\mathbf{r}^{i},\mathbf{y}^{i})\] \[=-\eta_{A}G^{i}\left(\nabla_{\mathbf{r}^{i}}L\right)(\mathbf{r}^{i})^{T}[ G^{i}]^{-1}A^{-T}\]
Using the same approach used to derive Eq. (11) above, we can again derive an update to \(W\) given by
\[\Delta W_{2}^{i} =F^{-1}(F(W)+\Delta A^{i})-W \tag{17}\] \[=-\left[\left[I-G^{i}W\right]^{-1}G^{i}-\eta_{A}\left[G^{i} \right]^{2}\left(\nabla_{\mathbf{r}^{i}}L\right)(\mathbf{r}^{i})^{T}\left[I-G^{i}W \right]^{T}G^{i}\right]^{-1}\] \[\quad+\left[[G^{i}]^{-1}-W\right].\]
This update can only be evaluated directly in the situation where \(G_{jj}^{i}\neq 0\) for all \(j\) so that the inverse of the gain matrix, \(G\), exists. However, note that \([\Delta W_{2}^{i}]_{jk}\to 0\) as \(G_{jj}^{i}\to 0\), as expected, so in situations where \(G_{jj}^{i}=0\), it is consistent to take \([\Delta W_{2}^{i}]_{jk}=0\). Note also that Eq. (17) is equivalent to Eq. (11) whenever \(G=I\), as expected, since Eq. (17) generalizes Eq. (11) to the case of arbitrary \(f\).
### A simpler learning rule from linearizing the reparameterized rule
The reparameterized rule in Eq. (17) is a rather complicated learning rule, and the matrix inverses can be computationally expensive to compute or approximate. If we assume that \(\eta_{A}>0\) is small, then we can approximate Eq. (17) by applying a Taylor expansion to linear order in \(\eta_{A}\). This gives the linearized reparameterized rule (see Appendix A.3 for details),
\[\Delta W_{3}^{i}=-\eta_{A}\left[I-WG^{i}\right]G^{i}\left(\nabla_{\mathbf{r}^{i}} L\right)(\mathbf{r}^{i})^{T}\left[I-G^{i}W\right]^{T}\left[I-G^{i}W\right] \tag{18}\]
In contrast to Eqs. (6) and (17) for \(\Delta W_{1}^{i}\) and \(\Delta W_{2}^{i}\), Eq. (18) for \(\Delta W_{3}^{i}\) does not require the computation of matrix inverses. Like \(\Delta W_{1}^{i}\) and \(\Delta W_{2}^{i}\), \(\Delta W_{3}^{i}\) satisfies \(\Delta W_{jk}\to 0\) whenever \(G_{jj}\to 0\), but unlike Eq. (17) for \(\Delta W_{2}^{i}\), Eq. (18) for \(\Delta W_{3}^{i}\) can be evaluated directly when \(G_{jj}=0\) for some \(j\).
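A minimal NumPy sketch of Eq. (18), illustrating that \(\Delta W_{3}^{i}\) needs only matrix products; the diagonal gain is passed as a vector so that multiplication by \(G^{i}\) reduces to row and column scalings. The function name and learning rate are illustrative.

```python
import numpy as np

def delta_W3(W, r, grad_r_L, G_diag, eta_A=1e-2):
    """Linearized reparameterized update, Eq. (18); no matrix inverses needed."""
    N = W.shape[0]
    IWG = np.eye(N) - W * G_diag[None, :]      # I - W G (column scaling)
    IGW = np.eye(N) - G_diag[:, None] * W      # I - G W (row scaling)
    outer = np.outer(G_diag * grad_r_L, r)     # G (grad_r L) r^T
    return -eta_A * IWG @ outer @ IGW.T @ IGW
```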
Notably, \(\Delta W_{3}^{i}\) can be interpreted as gradient descent of the loss function with a non-Euclidean gradient. To see why this is the case, first note that \(\Delta W_{3}^{i}\) is related to \(\Delta W_{1}^{i}\) according to
\[\Delta W_{3}^{i}=B^{i}\Delta W_{1}^{i}C^{i}, \tag{19}\]
where
\[B^{i}=[I-WG^{i}][I-WG^{i}]^{T}\]
and
\[C^{i}=[I-G^{i}W]^{T}[I-G^{i}W].\]
Here and for the remainder of this section, we take \(\eta_{A}=\eta_{W}=\eta\) to highlight the relationship between the two update rules, but constant scalar coefficients do not affect these results.
Using Eq. (19), we may conclude that \(\Delta W^{i}_{3}\) is equivalent to gradient descent of the loss with respect to \(W\) using a non-Euclidean gradient. To explain this statement in more detail, note that the gradient of \(L(\mathbf{r}^{i},\mathbf{y}^{i})\) with respect to \(W\) depends on the choice of metric or geometry [18]. Given an inner product, \(\langle\cdot,\cdot\rangle_{a}\), on \(\mathbb{R}^{N\times N}\) the gradient of a function, \(F:\mathbb{R}^{N\times N}\rightarrow\mathbb{R}\), on the geometry imposed by \(\langle\cdot,\cdot\rangle_{a}\) evaluated at \(W\in\mathbb{R}^{N\times N}\) is the unique matrix \(\nabla^{a}_{W}F\in\mathbb{R}^{N\times N}\) such that for every \(U\in\mathbb{R}^{N\times N}\)[23, 18],
\[\langle\nabla^{a}_{W}F,U\rangle_{a}=\lim_{\epsilon\to 0}\frac{F(W+ \epsilon U)-F(W)}{\epsilon}.\]
The standard Euclidean gradient, \(\nabla=\nabla^{E}\), on matrices is given by taking the geometry produced by the Euclidean or Frobenius inner product,
\[\langle U,V\rangle_{E}=\sum_{jk}U_{jk}V_{jk}=\text{Tr}(UV^{T}).\]
Recall that \(\Delta W^{i}_{1}\) is defined by the Euclidean gradient,
\[\Delta W^{i}_{1}=-\eta\nabla^{E}_{W}L(\mathbf{r}^{i},\mathbf{y}^{i})\]
where \(L(\mathbf{r}^{i},\mathbf{y}^{i})\) is interpreted as a function of \(W\). We claim that
\[\Delta W^{i}_{3}=-\eta\nabla^{B}_{W}L(\mathbf{r}^{i},\mathbf{y}^{i}) \tag{20}\]
where \(\nabla^{B}_{W}\) is the gradient under the geometry defined by the inner product,
\[\langle U,V\rangle_{B} =\text{Tr}(B^{-1}UC^{-1}V^{T})\] \[=\langle B^{-1}U,VC^{-1}\rangle_{E}.\]
Note that we can use the cyclic property of the trace operator to write
\[\langle U,V\rangle_{B} =\text{Tr}(B^{-1}UC^{-1}V^{T})\] \[=\text{Tr}\left([I-WG]^{-T}[I-WG]^{-1}U[I-GW]^{-1}[I-GW]^{-T}V^{T}\right)\] \[=\text{Tr}\left([I-WG]^{-1}U[I-GW]^{-1}[I-GW]^{-T}V^{T}[I-WG]^{-T}\right)\] \[=\langle\mathcal{L}U,\mathcal{L}V\rangle_{E}\]
where \(\mathcal{L}:\mathbb{R}^{N\times N}\rightarrow\mathbb{R}^{N\times N}\) is a linear operator on \(N\times N\) matrices defined by
\[\mathcal{L}(U)=[I-WG]^{-1}U[I-GW]^{-1}.\]
Hence, \(\langle\cdot,\cdot\rangle_{B}\) can be viewed as a Euclidean inner product on linearly transformed coordinates. This confirms that \(\langle\cdot,\cdot\rangle_{B}\) defines an inner product on square matrices whenever \([I-WG]\) and \([I-GW]\) are non-singular. For notational convenience here and below, we do not write the explicit dependence of \(B\), \(C\), or \(\mathcal{L}\) on \(i\), but they _do_ depend on \(i\) through \(G^{i}\). In other words, there are distinct matrices, \(B\) and \(C\), and therefore distinct inner products, \(\langle\cdot,\cdot\rangle_{B}\), at each gradient descent iteration. Given Eq. (19), we can prove Eq. (20) by showing that
\[\nabla^{B}_{W}L=B\left[\nabla^{E}_{W}L\right]C. \tag{21}\]
To show Eq. (21), first define the \(N\times N\) standard basis matrices \(\mathbf{1}^{jk}\in\mathbb{R}^{N\times N}\) entrywise by
\[\mathbf{1}_{j^{\prime}k^{\prime}}^{jk}=\begin{cases}1&j=j^{\prime}\text{ and }k=k^{\prime}\\ 0&\text{otherwise}\end{cases}.\]
for \(j,k=1,\ldots,N\). Now compute the inner product of the gradient with \(\mathbf{1}^{jk}\),
\[\begin{split}\left\langle\left[\nabla^{B}L\right],\mathbf{1}^{ jk}\right\rangle_{B}&=\left\langle B^{-1}\left[\nabla^{B}L\right],\mathbf{1}^{ jk}C^{-1}\right\rangle_{E}\\ &=\text{Tr}\left(B^{-1}\left[\nabla^{B}L\right]C^{-1}\left[ \mathbf{1}^{jk}\right]^{T}\right)\\ &=\sum_{n=1}^{N}\left[B^{-1}\left[\nabla^{B}L\right]C^{-1}\mathbf{1}^ {kj}\right]_{n,n}\\ &=\sum_{n,m=1}^{N}\left[B^{-1}\left[\nabla^{B}L\right]C^{-1} \right]_{n,m}[\mathbf{1}^{kj}]_{m,n}\\ &=\left[B^{-1}\left[\nabla^{B}L\right]C^{-1}\right]_{jk}\end{split} \tag{22}\]
where the last line follows from the fact that \(\mathbf{1}_{n,m}^{kj}=1\) when \(n=k\) and \(m=j\), and it is equal to zero for all other \(j,k\). But we also have, from the definition of a gradient,
\[\left\langle\left[\nabla^{B}L\right],\mathbf{1}^{jk}\right\rangle_{B}=\lim_{\epsilon\to 0}\frac{L(W+\epsilon\mathbf{1}^{jk})-L(W)}{\epsilon}=\frac{\partial L}{\partial W_{jk}}=\left[\nabla^{E}L\right]_{jk}. \tag{23}\]
Since Eqs. (22) and (23) apply for all indices \(j,k\), we may conclude that
\[B^{-1}\left[\nabla^{B}L\right]C^{-1}=\left[\nabla^{E}L\right]\]
and therefore that
\[\left[\nabla^{B}L\right]=B\left[\nabla^{E}L\right]C,\]
which concludes our proof.
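As a numerical sanity check of the identity \(\langle U,V\rangle_{B}=\langle\mathcal{L}U,\mathcal{L}V\rangle_{E}\) used above, the following sketch compares the two sides for random \(W\), \(G\), \(U\), and \(V\); all sizes and distributions are illustrative.

```python
import numpy as np

# Check <U,V>_B = <LU, LV>_E for random W, G, U, V.
rng = np.random.default_rng(1)
N = 6
I = np.eye(N)
W = rng.standard_normal((N, N)) / np.sqrt(N)
G = np.diag(rng.uniform(0.1, 1.0, size=N))

B = (I - W @ G) @ (I - W @ G).T
C = (I - G @ W).T @ (I - G @ W)
calL = lambda U: np.linalg.solve(I - W @ G, U) @ np.linalg.inv(I - G @ W)

U = rng.standard_normal((N, N))
V = rng.standard_normal((N, N))
lhs = np.trace(np.linalg.solve(B, U) @ np.linalg.solve(C, V.T))  # Tr(B^{-1} U C^{-1} V^T)
rhs = np.trace(calL(U) @ calL(V).T)
print(abs(lhs - rhs))  # zero up to floating point error
```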
In summary, if \(W\) is updated according to \(\Delta W_{3}^{i}\) from Eq. (18), then this is equivalent to performing gradient descent on the loss with respect to the weight matrix under the geometry defined by the new inner product, \(\langle U,V\rangle_{B}\). Below, we present examples showing that this geometry is better suited to learning \(W\) than gradient descent with respect to the standard Euclidean geometry. Specifically, \(\Delta W_{3}^{i}\) learns more robustly than \(\Delta W_{1}^{i}\).
## 3 Experiments and results
We next evaluate and interpret each of the learning rules derived above on two different learning tasks.
### Learning fixed points in a linear model.
For demonstrative purposes, we first consider an example of linear regression with mean-squared loss. Specifically, we consider \(f(z)=z\) with
\[L(\mathbf{r},\mathbf{y})=\|\mathbf{r}-\mathbf{y}\|^{2}.\]
where \(\|\cdot\|\) is the Euclidean norm on \(\mathbb{R}^{N}\). Note that \(G=I\) is the identity in this case. We define the \(N\times m\) matrices, \(X=\begin{bmatrix}\mathbf{x}^{1}&\mathbf{x}^{2}&\ldots\mathbf{x}^{m}\end{bmatrix}\), \(Y=\begin{bmatrix}\mathbf{y}^{1}&\mathbf{y}^{2}&\ldots\mathbf{y}^{m}\end{bmatrix}\), and \(R=\begin{bmatrix}\mathbf{r}^{1}&\mathbf{r}^{2}&\ldots\mathbf{r}^{m}\end{bmatrix}= [I-W]^{-1}X\). The cost function can be written as
\[J(W)=\frac{1}{m}\left\|[I-W]^{-1}X-Y\right\|^{2}. \tag{24}\]
It is useful to also write the cost in terms of the parameters \(A=[I-W]^{-1}\) to get a standard quadratic cost function,
\[J_{A}(A)=\frac{1}{m}\left\|AX-Y\right\|^{2}. \tag{25}\]
For this problem, minimizers of \(J(W)\) and \(J_{A}(A)\) can be found explicitly. Before continuing to empirical examples, we derive and discuss these explicit minimizers.
#### 3.1.1 Computing explicit minimizers in a linear model.
In the under-parameterized case (\(N\leq m\) when all matrices full rank), \(J_{A}(A)\) has a unique minimizer defined by
\[A^{*}=YX^{+}\]
where \(X^{+}=X^{T}(XX^{T})^{-1}\) is the Moore-Penrose pseudo-inverse of \(X\) when \(N\leq m\). Therefore, \(J(W)\) has a unique minimizer at
\[W^{*}=I-[A^{*}]^{-1}=I-XX^{T}(YX^{T})^{-1}\]
under the assumption that \(A^{*}\) is invertible.
The over-parameterized case (\(N>m\) when matrices are full rank) is more relevant and interesting. In this case, there are infinitely many choices of \(W\) and \(A\) for which \(J(W)=0\) and \(J_{A}(A)=0\). The problem of choosing a solution to \(J_{A}(A)=0\) is a standard least squares problem and a common approach is to take
\[A^{*}=YX^{+}\]
where \(X^{+}=(X^{T}X)^{-1}X^{T}\) is the Moore-Penrose pseudo-inverse of \(X\) when \(N>m\). It is tempting to use this value of \(A^{*}\) and then take \(W^{*}=I-[A^{*}]^{-1}\). However, note that \(A^{*}\) is the solution to \(AX=Y\) that minimizes the Frobenius norm of \(A\), _i.e._,
\[A^{*}=\operatorname*{argmin}_{A}\left\|A\right\|\text{ s.t. }\ AX=Y.\]
Therefore, \(W^{*}=I-[A^{*}]^{-1}\) represents a solution, \(W\), that minimizes the norm of \(A=[I-W]^{-1}\). Since the Jacobian matrix is given by \(\mathcal{J}=(-I+W)/\tau=-A^{-1}/\tau\), stability is promoted by \(W\) having a small spectral radius (all eigenvalues of \(W\) must have real part less than \(1\) for stability). Hence, \(W^{*}=I-[A^{*}]^{-1}\) is a poor choice for \(W^{*}\). Minimizing the Frobenius norm of \(A\) will tend to push the eigenvalues of \(A\) toward zero, which can lead to large eigenvalues of \(W=I-A^{-1}\) and \(\mathcal{J}=-A^{-1}/\tau\), promoting unstable fixed points. Instead, to find a good optimizer, \(W^{*}\), we can find solutions that minimize the norm of \(W\) instead of \(A\). To this end, we can solve
\[W^{*}=\operatorname*{argmin}_{W}\left\|W\right\|\text{ s.t. }\ [I-W]^{-1}X=Y.\]
To solve this problem, we re-write it in a more standard form
\[W^{*}=\operatorname*{argmin}_{W}\left\|W\right\|\text{ s.t. }\ WY=Y-X.\]
This problem has the solution
\[W^{*}=[Y-X]Y^{+} \tag{26}\]
where \(Y^{+}=(Y^{T}Y)^{-1}Y^{T}\) is the Moore-Penrose pseudo-inverse of \(Y\) when \(N>m\). This is the solution with minimal Frobenius norm on \(W\) and is therefore more likely than \(I-[A^{*}]^{-1}\) to have a small spectral radius and therefore more likely to give stable fixed points. Hence, Eq. (26) provides a good optimizer in the over-parameterized case (\(N>m\)).
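A short numerical illustration of Eq. (26): the sketch below builds \(W^{*}\) with `np.linalg.pinv` (which agrees with \((Y^{T}Y)^{-1}Y^{T}\) for full-column-rank \(Y\)), verifies the fixed-point constraint, and reports the spectral radius. The data here are random stand-ins, not the simulated data used in the experiments below.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 100, 20                                   # over-parameterized: N > m
X = 0.1 * rng.standard_normal((N, m))
Y = 0.1 * rng.standard_normal((N, m))

# Minimal-Frobenius-norm solution of W Y = Y - X, Eq. (26).
W_star = (Y - X) @ np.linalg.pinv(Y)

# The fixed-point constraint [I - W*]^{-1} X = Y holds to machine precision.
R = np.linalg.solve(np.eye(N) - W_star, X)
print(np.max(np.abs(R - Y)))
print(np.max(np.abs(np.linalg.eigvals(W_star))))  # spectral radius of W*
```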
#### 3.1.2 Visualizing the loss landscape of a linear model.
For empirical examples, we first generated inputs, \(\mathbf{x}^{i}\), independently from a Gaussian distribution and generated targets \(\mathbf{y}^{i}\) using a ground truth weight matrix, \(\hat{W}\), and adding noise. Specifically, we define
\[\begin{split} X&\sim\sigma_{x}Z_{N\times m}\\ Y&\sim\left[I-\hat{W}\right]^{-1}X+\sigma_{y}Z_{N \times m}\end{split} \tag{27}\]
where \(\sigma_{x}=0.1\) controls the magnitude of the inputs, \(\sigma_{y}=0.01\), and each \(Z_{N\times m}\) represents an \(N\times m\) matrix of independent, standard, Gaussian random variables. The ground truth weight matrix is generated by
\[\hat{W}\sim\frac{\sigma_{w}}{\sqrt{N}}Z_{N\times N}.\]
Following Girko's circular law, the eigenvalues of \(\hat{W}\) lie approximately within a circle of radius \(\sigma_{w}\) with high probability [24]. Hence, we take \(\sigma_{w}=0.5<1\) to control the spectral radius of the circle to be less than \(1\), so that all eigenvalues of the Jacobian matrix, \(\mathcal{J}=(-I+\hat{W})/\tau\), are negative and fixed point firing rates are stable under the ground truth parameters, \(\hat{W}\).
The cost landscape, \(J(W)\), cannot easily be visualized as a function of \(W\) for \(N>1\) because \(W\) has \(N^{2}\) dimensions, so even \(N=2\) would be difficult to visualize. To help visualize \(J(W)\), we first plotted it on a random line segment passing through \(W^{*}\) in \(\mathbb{R}^{N\times N}\). Specifically, we defined the parameterized function
\[W(t)=W^{*}+\frac{ct}{\sqrt{N}}Z_{N\times N} \tag{28}\]
where \(c=2.5\) scales the maximum magnitude of the perturbation and \(t\) was varied from \(-1\) to \(1\) to create the visualization of \(J(W(t))\) (Figure 1A). This corresponds to plotting \(J(W)\) along a one-dimensional slice of the space \(\mathbb{R}^{N\times N}\) on which \(W\) lives. Note that the true minimizer, \(W=W^{*}\), is sampled when \(t=0\). The values of \(W\) sampled by the process can produce stable or unstable fixed points. Making the approximation \(W^{*}\approx\hat{W}\), we have that \(\rho(W(t))\approx\sqrt{\sigma_{w}^{2}+c^{2}t^{2}}\) and therefore an approximate stability condition is given by \(|t|<\sqrt{1-\sigma_{w}^{2}}/c\approx 0.346\).
Figure 1A shows the resulting cost curve for five random values of \(Z\) with the blue dashed lines marking the approximate stability boundary. The cost is relatively well behaved within the boundary, but poorly conditioned outside of the boundary because of the singularities produced by the matrix inverses in Eq. (24). Specifically, outside of the stability region, the spectral radius of \(W\) is larger than \(1\) so some eigenvalues are near \(1\) in magnitude. As a result, the \([I-W]^{-1}\) in Eq. (24) can lead to very large values of \(J(W)\).
To further visualize the loss landscape, we repeated the procedure above in two dimensions by sampling values of \(W\) from a random _plane_ passing through \(W^{*}\). Specifically, we defined the parameterized function
\[W(t_{1},t_{2})=W^{*}+\frac{c}{\sqrt{N}}(t_{1}Z_{1}+t_{2}Z_{2}) \tag{29}\]
where \(Z_{1},Z_{2}\sim N_{N\times N}(0,1)\), \(t_{1}\) and \(t_{2}\) were each varied from \(-1\) to \(1\) to create the visualization of \(J(W(t_{1},t_{2}))\), and \(c=2.5\) scales the perturbation (Figure 1B). Note that \(W(t_{1},t_{2})=W^{*}\) when \(t_{1}=t_{2}=0\), so the center of the square corresponds to the minimum cost, \(J=0\). The approximate stability condition becomes \(\sqrt{t_{1}^{2}+t_{2}^{2}}<\sqrt{1-\sigma_{w}^{2}}/c\approx 0.346\), so the approximate stability boundary is a circle (Figure 1B, dashed blue curve). Singularities create intricate ridges of large cost outside of the stability boundary (Figure 1B).
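The one-dimensional slices in Figure 1A can be reproduced in a few lines; the sketch below evaluates \(J(W(t))\) from Eqs. (24) and (28) along one random direction (plotting omitted, constants illustrative).

```python
import numpy as np

def cost(W, X, Y):
    """J(W) from Eq. (24) for the linear model."""
    N, m = X.shape
    R = np.linalg.solve(np.eye(N) - W, X)
    return np.sum((R - Y) ** 2) / m

rng = np.random.default_rng(3)
N, m, c = 100, 20, 2.5
X = 0.1 * rng.standard_normal((N, m))
Y = 0.1 * rng.standard_normal((N, m))
W_star = (Y - X) @ np.linalg.pinv(Y)

Z = rng.standard_normal((N, N))
ts = np.linspace(-1, 1, 201)
Js = [cost(W_star + (c * t / np.sqrt(N)) * Z, X, Y) for t in ts]
print(Js[100])  # t = 0: the minimum, J(W*) = 0 up to rounding
```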
In summary, Figure 1 shows that the cost landscape, \(J(W)\), is extremely poorly conditioned outside of the stability region, _i.e._, when \(W\) has a spectral radius larger than \(1\). Note, however, that the effective cost landscape, \(J_{A}(A)\), of the reparameterized model is a simple quadratic
landscape, given by Eq. (25). Gradient-based learning according to \(\Delta W_{1}\) must traverse the poorly conditioned landscape from Figure 1. But the learning dynamics of the reparameterized rule, \(\Delta W_{2}\), are equivalent to those produced by \(A\) traversing a comparatively well-behaved quadratic landscape. We show in empirical examples below that this difference helps \(\Delta W_{2}\) and its linear approximation, \(\Delta W_{3}\), perform more robustly than \(\Delta W_{1}\).
#### 3.1.3 Gradient descent on the recurrent weight matrix in a linear model.
We first perform direct gradient descent on \(J\) with respect to \(W\) using \(\Delta W_{1}\). The gradient-based update rule from Eq. (7) can be written as
\[\begin{split}\Delta W_{1}&=\frac{1}{m}\sum_{i=1}^{ m}\Delta W_{1}^{i}\\ &=\frac{-2\eta_{W}}{m}\left[I-W^{T}\right]^{-1}\left[R-Y\right]R ^{T}.\end{split} \tag{30}\]
Empirical simulations show relatively poor learning performance (Figure 2A). Learning is slow for small learning rates, but larger learning rates fail to converge to good minima. Recall that the true minimum is zero because the model is over-parameterized. We next show that the reparameterized rule, \(\Delta W_{2}\), and its linearized approximation perform much more robustly.
#### 3.1.4 Learning using the reparameterized rule in a linear model.
For this linear example, the reparameterized learning rule from Eqs. (11) and (17) can be written as
\[\Delta W_{2}=\left[I-W\right]-\left(\left[I-W\right]^{-1}-\frac{2\eta_{A}}{m} (R-Y)X^{T}\right)^{-1}. \tag{31}\]
Figure 1: **Visualizing the cost landscape for a linear model.** **A)** The cost function \(J(W(t))\) as a function of \(t\) from Eq. (28). This represents the cost evaluated along five random line segments in \(\mathbb{R}^{N\times N}\), each passing through \(W^{*}\) at \(t=0\). Two blue dashed lines show the stability boundary, \(|t|\approx 0.346\). The vertical axis is cut off at \(J=1000\) to better visualize the curves. Blue and black circles show stable and unstable initial conditions used for learning. **B)** The cost function \(J(W(t_{1},t_{2}))\) from Eq. (29). This represents the cost evaluated on a randomly oriented square with center at \(W^{*}\). The color axis is cut off at \(J=1000\).

Figure 2: **Performance of three different learning rules for a linear regression problem.** **A)** The cost function, \(J(W)\), from Eq. (24) for five different learning rates, \(\eta=\eta_{W}\), using direct gradient descent on the weight matrix, \(\Delta W_{1}^{i}\) from Eq. (30). **B)** Same as A, but for the reparameterized learning rule, \(\Delta W_{2}^{i}\) from Eq. (31) with \(\eta=\eta_{A}\). **C)** Same as B, but for the linearized rule, \(\Delta W_{3}^{i}\) from Eq. (33).

Recall that the learning dynamics produced by Eq. (31) are equivalent to those produced by learning the standard quadratic cost function, \(J_{A}(A)\), along with the standard gradient-based update rule,
\[\Delta A=-\frac{2\eta_{A}}{m}(AX-Y)X^{T}, \tag{32}\]
which is often called the "delta rule."
The behavior of the learning dynamics under Eq. (32) in the overparameterized case is well understood [25, 26, 27, 28]. Specifically, \(A\) tends toward solutions to \(AX=Y\) that minimize the distance, \(\|A-A_{0}\|\), of \(A\) from its initial condition under the Frobenius norm. As a result, \(\Delta W_{2}\) finds solutions, \(W\), to \([I-W]^{-1}X=Y\) that minimize the distance, \(d(W,W_{0})\), of \(W\) from its initial condition under the metric, \(d\), defined in Eq. (12). In addition, since the Jacobian matrix is given by \(\mathcal{J}=-A^{-1}/\tau\), we may conclude that \(\Delta W_{2}\) finds solutions that minimize the distance, \(\left\|\mathcal{J}_{0}^{-1}-\mathcal{J}^{-1}\right\|\), between the inverse Jacobian and its initial condition under the Frobenius norm.
In comparison to the gradient-based update, \(\Delta W_{1}\), from Eq. (30) (Figure 2A), we see that \(\Delta W_{2}\) from Eq. (31) performs much more robustly (Figure 2B). The cost reliably converges toward zero with increasing rates of convergence at larger learning rates.
#### 3.1.5 Learning using the linearized reparameterized rule in a linear model.
The linearized, reparameterized update rule from Eq. (18) for this linear model can be written as
\[\Delta W_{3}=-\frac{2\eta_{A}}{m}\left[I-W\right](R-Y)X^{T}\left[I-W\right]. \tag{33}\]
This approximated learning rule gives a simpler equation that is more efficient to compute, but still shows excellent agreement with the reparameterized rule, \(\Delta W_{2}\) (Figure 2C, compare to B).
Note that Eq. (33) does not require any explicit computation of matrix inverses with the exception of the computation of firing rates \(R=[I-W]^{-1}X\). However, since \(R\) is left-multiplied by \([I-W]\) in Eq. (33), we can get rid of this matrix inverse and write \(\Delta W_{3}\) in the form
\[\Delta W_{3}=-\frac{2\eta_{A}}{m}\bigg{(}XX^{T}\left[I-W\right]-\left[I-W \right]YX^{T}\left[I-W\right]\bigg{)}. \tag{34}\]
Note that the equation \(R=[I-W]^{-1}X\) and Eq. (34) are specific to the linear case, \(f(z)=z\). When using a nonlinear \(f(z)\), fixed point firing rates, \(R\), cannot generally be computed in closed form, but must be approximated by directly simulating Eq. (1) until convergence.
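As a worked example of the inverse-free update, the following sketch trains a linear network with Eq. (34) on synthetic data generated as in Eq. (27); the learning rate and step count are illustrative and chosen small enough for stable learning.

```python
import numpy as np

rng = np.random.default_rng(4)
N, m, eta = 100, 20, 0.05
sigma_x, sigma_y, sigma_w = 0.1, 0.01, 0.5
I = np.eye(N)
W_hat = (sigma_w / np.sqrt(N)) * rng.standard_normal((N, N))
X = sigma_x * rng.standard_normal((N, m))
Y = np.linalg.solve(I - W_hat, X) + sigma_y * rng.standard_normal((N, m))

W = np.zeros((N, N))
for step in range(5000):
    A_inv = I - W
    # Inverse-free update, Eq. (34).
    W -= (2 * eta / m) * (X @ X.T @ A_inv - A_inv @ Y @ X.T @ A_inv)

R = np.linalg.solve(I - W, X)
print(np.sum((R - Y) ** 2) / m)  # cost should fall far below its initial value
```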
#### 3.1.6 Comparing the direction of updates.
To check the similarity between the updates from each learning rule, we calculated the angle between the updates at each iteration, defined by
\[\theta_{\alpha\beta}=\cos^{-1}\left(\frac{\Delta W_{\alpha}\cdot\Delta W_{\beta}}{\sqrt{(\Delta W_{\alpha}\cdot\Delta W_{\alpha})(\Delta W_{\beta}\cdot\Delta W_{\beta})}}\right)\]
for \(\alpha,\beta\in\{1,2,3\}\) where \(A\cdot B=\text{Tr}(A^{T}B)\) is the Frobenius inner product. For sufficiently small learning rates, any update, \(\Delta W\), that decreases the cost must satisfy \(\Delta W\cdot\Delta W_{1}>0\) where \(\Delta W_{1}\) is the gradient-based update [29] since the change in cost can be written as
\[\Delta J =\Delta W\cdot\nabla_{W}J+\mathcal{O}(\eta^{2}) \tag{35}\] \[=-\frac{\eta_{W}}{m}\Delta W\cdot\Delta W_{1}+\mathcal{O}(\eta^{ 2}).\]
Additionally, \(\Delta W_{2}\to\Delta W_{3}\) to leading order in \(\eta_{A}\) as \(\eta_{A}\to 0\). Hence, we should expect that \(\theta_{\alpha\beta}<90^{\circ}\) for all pairs, \(\alpha\) and \(\beta\).
Figure 3 shows the angles, \(\theta_{12}\) and \(\theta_{23}\), during learning. The angles, \(\theta_{13}\), were virtually identical to \(\theta_{23}\), so they are not shown. In each example, we used the same update, \(\Delta W_{\beta}\), to update \(W\) throughout learning. Hence, the two updates, \(\Delta W_{\alpha}\) and \(\Delta W_{\beta}\), were compared starting at the same initial \(W\) at each learning step.
The angle, \(\theta_{12}\), between the gradient-based updates and the reparameterized updates is relatively close to \(90^{\circ}\) (Figure 3A), indicating that they point in different directions, nearly as different as possible under the condition that they both decrease the cost. Unsurprisingly, \(\theta_{23}\) is near zero (Figure 3B), indicating that the reparameterized rule is similar to its linearization.
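For reference, the angle computation is a one-liner under the Frobenius inner product; the `np.clip` guards against arguments landing slightly outside \([-1,1]\) from rounding. This sketch assumes both updates are given as \(N\times N\) arrays.

```python
import numpy as np

def angle_deg(dW_a, dW_b):
    """Angle between two weight updates under the Frobenius inner product, in degrees."""
    num = np.sum(dW_a * dW_b)
    den = np.sqrt(np.sum(dW_a ** 2) * np.sum(dW_b ** 2))
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
```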
### Training fixed points on a nonlinear categorization task.
So far, for demonstrative purposes, we considered only simple examples of linear regression in which closed equations for optima are known. We next consider an example of image categorization using the MNIST hand-written digit benchmark.
The learning goal is to minimize a cross-entropy loss on \(C=10\) classes using one-hot encoded labels. Specifically,
\[L(\mathbf{y}^{i},\mathbf{s}^{i})=-\mathbf{y}^{i}\cdot\log(\mathbf{s}^{i})\]
where \(\mathbf{y}^{i}\) is a "one-hot" encoded label for digit \(i\),
\[\mathbf{s}^{i}_{l}=\frac{e^{\mathbf{z}^{i}_{l}}}{\sum\limits_{k=1}^{C}e^{\mathbf{z}^{i}_{ k}}},\]
is the softmax output, and \(\mathbf{z}^{i}\in\mathbb{R}^{C}\) is a logit computed from a random projection of fixed point rates of a recurrent network. Specifically,
\[\mathbf{z}^{i}=W_{\text{out}}\mathbf{r}^{i}\]
where \(W_{\text{out}}\in\mathbb{R}^{C\times N}\) is a fixed, random readout matrix and \(\mathbf{r}^{i}=f(W\mathbf{r}^{i}+\mathbf{x}^{i})\) is the fixed point from an \(N\times N\) recurrent network with input \(i\). Inputs are flattened \(28\times 28\) MNIST images, \(\mathbf{p}^{i}\in\mathbb{R}^{M}\), where \(M=28\times 28=784\) and we multiply them by a fixed, random read-in matrix to form the input to the network,
\[\mathbf{x}^{i}=W_{\text{in}}\mathbf{p}^{i}\]
where \(W_{\text{in}}\in\mathbb{R}^{N\times M}\) and \(N=300\) is the number of neurons in the network. We did not train \(W_{\text{out}}\) or \(W_{\text{in}}\) because we wanted to focus on the effectiveness of learning the recurrent weight matrix, \(W\). We used a hyperbolic tangent activation function, \(f(z)=\tanh(z)\). To compute \(\mathbf{r}\), we simulated Eq. (1) using a forward Euler scheme for \(500\) time steps with \(\tau=100\,dt\) where \(dt\) is the step size used in the Euler method. The model was trained on \(3\) epochs of the MNIST data set using a batch size of \(512\).

Figure 3: **Angles and correlations between weight updates.** **A)** Angle (\(\theta_{12}\)) between the weight updates for the gradient-based and reparameterized learning rules. **B)** Same as A, but comparing the reparameterized rule with its linearization.
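To make the architecture just described concrete, here is a hedged sketch of the forward pass, using a random vector in place of an MNIST image; the cross-entropy gradient with respect to the rates, \(W_{\text{out}}^{T}(\mathbf{s}-\mathbf{y})\), is exactly the `grad_r_L` that the sketch of Eq. (18) given earlier consumes.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, C = 300, 784, 10
W_in = rng.standard_normal((N, M)) / np.sqrt(M)
W_out = rng.standard_normal((C, N)) / np.sqrt(N)
W = np.zeros((N, N))

def forward(p, W, steps=500):
    """Fixed point of Eq. (1) by forward Euler with tau = 100 dt, then softmax readout."""
    x = W_in @ p
    r = np.zeros(N)
    for _ in range(steps):
        r = r + 0.01 * (-r + np.tanh(W @ r + x))
    z = W_out @ r
    s = np.exp(z - z.max())
    return r, s / s.sum()

p = rng.random(M)              # stand-in for a flattened MNIST image
y = np.eye(C)[3]               # one-hot label
r, s = forward(p, W)
grad_r_L = W_out.T @ (s - y)   # cross-entropy gradient with respect to r
print(float(-y @ np.log(s)))   # loss is typically near log(10) before training
```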
For this learning task, the gradient-based update rule from Eq. (7) can be written as
\[\Delta W_{1}=-\frac{\eta_{W}}{m}\sum_{i=1}^{m}\left[\left[G^{i}\right]^{-1}-W^ {T}\right]^{-1}W_{\text{out}}^{T}\left[\mathbf{s}^{i}-\mathbf{y}^{i}\right]\left[\mathbf{r }^{i}\right]^{T}.\]
We found that this gradient-based learning rule performed poorly (Figure 4). Small learning rates learned slowly, as expected, while larger learning rates produced instabilities that caused the loss and accuracy to jump erratically during learning. Indeed, analysis of the Jacobian matrices showed that fixed points became unstable for the two largest learning rates considered in Figure 4.

Figure 4: **Gradient-based learning on a non-linear classification task.** Results from training the fixed points of a recurrent network to categorize MNIST digits using the gradient-based update rule, \(\Delta W_{1}\). **A,B,C,D)** Training and testing losses and accuracies evaluated at each step over the course of \(3\) epochs.

We next tested the linearized, reparameterized update rule, \(\Delta W_{3}\). We did not include results for \(\Delta W_{2}\) because, as in the linear examples considered above, they are very similar to those for \(\Delta W_{3}\), and they are computationally more expensive to calculate. For this learning task, \(\Delta W_{3}\) can be written as
\[\Delta W_{3}=-\frac{\eta_{A}}{m}\sum_{i=1}^{m}\left[I-WG^{i}\right]G^{i}W_{ \text{out}}^{T}\left[\mathbf{s}^{i}-\mathbf{y}^{i}\right]\left[\mathbf{r}^{i}\right]^{T} \left[I-G^{i}W^{T}\right]\left[I-G^{i}W\right].\]
Using this linearized, reparameterized update rule significantly improved learning performance (Figure 5). Learning performance improved consistently with increasing learning rates and higher accuracy was achieved without instabilities. Analysis of the Jacobian matrices showed that fixed points were stable for all of the learning rates considered in Figure 5. We conclude that the linearized, reparameterized learning rule can improve the learning of fixed points in non-linear recurrent neural network models.
## 4 Discussion
In summary, we have shown that when learning fixed points of recurrent neural network models, the direct application of gradient descent with respect to the recurrent weight matrix under the Euclidean geometry is computationally expensive and not robust. Badly conditioned loss surfaces can cause ineffective learning. Moreover, matrix inverses in the equations for the gradients are expensive to evaluate or approximate.
We derived two alternative learning rules derived from a reparameterization of the recurrent network model. These learning rules perform more robustly than the standard gradient descent approach. Moreover, one of the two learning rules is simpler and more computationally efficient. The learning rules can be interpreted as steepest descent and gradient descent on the recurrent weight matrix under a non-Euclidean metric. Our results support recent calls to re-consider the default use of Euclidean gradients on parameters in machine learning [15, 16, 17] and computational neuroscience [18, 20].
Recently, authors have argued that the use of Euclidean gradients for modeling learning in the brain is justified because any learning rule that takes small steps and reduces the loss must be positively correlated with the negative Euclidean gradient [29]. Put another way, the angle between the parameter updates and the negative Euclidean gradient must be less than \(90^{\circ}\) (see Eq. (35) and surrounding discussion). While this is true of the learning rules that we studied, the angle is very close to \(90^{\circ}\) in practice, indicating only a weak correlation. Hence, our work shows that the Euclidean gradient is not always strongly correlated with effective learning rules.

Figure 5: **A reparameterized learning rule on a non-linear classification task.** Results from training the fixed points of a recurrent network to categorize MNIST digits using the linearized, reparameterized update rule, \(\Delta W_{3}\). **A,B,C,D)** Training and testing losses and accuracies evaluated at each step over the course of \(3\) epochs. Compare to Figure 4.
We focused on a single, fully connected recurrent layer, which limits the ease with which our model can be applied to larger data sets. Partly for this reason, we only considered the relatively simple MNIST data set as a benchmark. Future work could extend our results to multi-layer recurrent networks in which read-in and read-out matrices are trained and in which at least some fully connected layers are replaced by convolutional connectivity. These extensions will allow our approach to be applied to larger and more challenging datasets.
Fixed points of recurrent neural networks are widely used in computational neuroscience to model static neural responses to static stimuli [3, 4, 5, 6, 7, 8] and our results could be useful for these modeling approaches. On the other hand, recurrent neural networks in machine learning are almost exclusively used for time-varying inputs. Our results rely on the assumption of a time-constant input, \(\mathbf{x}(t)=\mathbf{x}\), which limits their direct application to many machine learning problems. Moreover, even in neuroscience, the assumption of a static stimulus is only an approximation. Natural stimuli are dynamical. However, if fixed points are approached faster than the stimulus changes (_i.e._, \(\tau\) is faster than \(\mathbf{x}(t)\)) then the response, \(\mathbf{r}(t)\), is approximated by the fixed point in Eq. (2) and our results provide an approximation. Moreover, a combination of our fixed point learning rules with dynamical learning rules, such as backpropagation through time, could improve learning in situations where some components of the input are static and others are dynamical. Future work should test whether our learning rules can be combined with backpropagation through time to improve performance on tasks with multiple timescales.
## 5 Acknowledgments
This material is based upon work supported by the Air Force Office of Scientific Research (AFOSR) under award number FA9550-21-1-0223 and National Science Foundation (NSF) award numbers DMS-1654268 and DBI-1707400.
Appendix
### Derivation of the Direct Gradient Descent Update, \(\Delta W_{1}\)
Here, we derive Eq. (6) for direct gradient descent on \(W\). To derive Eq. (6), it is sufficient to show that
\[\nabla_{W}L=\left(\mathbf{r}\left[\nabla_{\mathbf{r}}L\right]^{T}\left[I-GW\right]^{-1}G \right)^{T}.\]
Here we are considering a single input, label, and fixed point - \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{r}\) - so we can omit the \(i\) superscripts that appear in Eq. (6). Note that \(\nabla_{W}L(\mathbf{r}(W))\) is a matrix with elements
\[\frac{\partial L}{\partial W_{jk}}=\left[\nabla_{\mathbf{r}}L(\mathbf{r},\mathbf{y})\right] \cdot\frac{\partial\mathbf{r}}{\partial W_{jk}}. \tag{36}\]
To derive \(\frac{\partial\mathbf{r}}{\partial W_{jk}}\), we first derive the change of firing rate, \(\Delta\mathbf{r}\), to linear order in an update \(\Delta W_{jk}\). Consider an initial \(\mathbf{r}_{0}\) satisfying \(\mathbf{r}_{0}=f(W_{0}\mathbf{r}_{0}+\mathbf{x})\) and an update to \(W\) defined by \(W=W_{0}+\Delta W\) for some \(\Delta W\). The new fixed point satisfies \(\mathbf{r}=f(W\mathbf{r}+\mathbf{x})\) and we wish to compute \(\Delta\mathbf{r}=\mathbf{r}-\mathbf{r}_{0}\) to linear order in \(\Delta W\). Define \(\mathbf{z}_{0}=W_{0}\mathbf{r}_{0}+\mathbf{x}\) and \(\mathbf{z}=W\mathbf{r}+\mathbf{x}\). Then
\[\Delta\mathbf{r} =f(\mathbf{z})-f(\mathbf{z}_{0})\] \[=f(\mathbf{z}_{0})+f^{\prime}(\mathbf{z}_{0})(\mathbf{z}-\mathbf{z}_{0})-f(\mathbf{z }_{0})+O(\mathbf{z}-\mathbf{z}_{0})^{2}\] \[=G(\mathbf{z}-\mathbf{z}_{0})+O(\mathbf{z}-\mathbf{z}_{0})^{2}.\]
To linear order in \(\Delta\mathbf{r}\), we have
\[\Delta\mathbf{r} =G(\mathbf{z}-\mathbf{z}_{0})\] \[=G\left((W\mathbf{r}+\mathbf{x})-(W_{0}\mathbf{r}_{0}+\mathbf{x})\right)\] \[=G\left((W_{0}+\Delta W)\mathbf{r}-W_{0}\mathbf{r}_{0}\right)\] \[=G(W_{0}\mathbf{r}+\Delta W\mathbf{r}-W_{0}\mathbf{r}_{0})\] \[=G(W_{0}\Delta\mathbf{r}+\Delta W\mathbf{r})\] \[\Delta\mathbf{r}-GW_{0}\Delta\mathbf{r} =G\Delta W\mathbf{r}\] \[[I-GW_{0}]\Delta\mathbf{r} =G\Delta W\mathbf{r}.\]
As a result, we have that
\[\frac{\partial\mathbf{r}}{\partial W_{jk}}=[I-GW]^{-1}G\mathbf{1}^{jk}\mathbf{r}\]
which is interpreted as a column vector. Here, \(\mathbf{1}^{jk}\) is the matrix with all entries equal to zero except for element \((j,k)\), which is equal to \(1\). Eq. (6) then follows from the following Lemma.
**Lemma 1**.: \[[I-GW]^{-1}G\mathbf{1}^{jk}\mathbf{r}=\mathbf{r}_{k}\left[[I-GW]^{-1}G\right]_{(:,j)}\] (37)
_where \(\mathbf{r}_{k}\) is the \(k\)th element of \(\mathbf{r}\) and \(B_{(:,j)}\) denotes the \(j\)th column of a matrix, \(B\)._
Proof.: For any indices \(j,k\), the matrix \(\mathbf{1}^{jk}\) has a single nonzero entry, so \(\mathbf{1}^{jk}\mathbf{r}=\mathbf{r}_{k}I_{(:,j)}\), where \(I_{(:,j)}\) denotes the \(j\)th column of the identity matrix; for example, \(\mathbf{1}^{11}\mathbf{r}=\mathbf{r}_{1}I_{(:,1)}\), \(\mathbf{1}^{12}\mathbf{r}=\mathbf{r}_{2}I_{(:,1)}\), and \(\mathbf{1}^{21}\mathbf{r}=\mathbf{r}_{1}I_{(:,2)}\). Denoting \(A:=[I-GW]^{-1}G\), it follows that

\[A\mathbf{1}^{jk}\mathbf{r}=\mathbf{r}_{k}A_{(:,j)},\]

which is Eq. (37). Moreover, combining this with Eq. (36) shows that entry \((j,k)\) of \(\nabla_{W}L(\mathbf{r}(W))\) is

\[\frac{\partial L}{\partial W_{jk}}=\left[\nabla_{\mathbf{r}}L(\mathbf{r},\mathbf{y})\right]\cdot\frac{\partial\mathbf{r}}{\partial W_{jk}}=\mathbf{r}_{k}\left[\nabla_{\mathbf{r}}L(\mathbf{r},\mathbf{y})\right]\cdot A_{(:,j)},\]

which is exactly entry \((j,k)\) of \(\left(\mathbf{r}\left[\nabla_{\mathbf{r}}L(\mathbf{r},\mathbf{y})\right]^{T}A\right)^{T}\), so the two matrices agree entrywise.
Combining Eq. (36) with Eq. (37) gives
\[\nabla_{W}L=\left(\mathbf{r}\left[\nabla_{\mathbf{r}}L(\mathbf{r})\right]^{T}\left[I-GW \right]^{-1}G\right)^{T}\]
which can be simplified to get Eq. (6) for \(\Delta W_{1}\).
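The identity can also be checked numerically. The sketch below assumes the linear steady state \(\mathbf{r}(W)=[I-GW]^{-1}G\boldsymbol{x}\), consistent with the definition \(A:=[I-GW]^{-1}G\) above, together with an illustrative quadratic loss; both are assumptions made for the sketch, not part of the derivation.

```python
# A minimal numerical check of the gradient identity above. Assumptions: the
# steady state is r(W) = [I - GW]^{-1} G x (consistent with A := [I - GW]^{-1} G),
# and the loss L(r) = 0.5*||r - t||^2 is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
M = 5
G = np.diag(rng.uniform(0.1, 0.9, M))        # diagonal gain matrix
W = 0.1 * rng.standard_normal((M, M))        # small enough that I - GW is invertible
x = rng.standard_normal(M)
t = rng.standard_normal(M)

def r_of(W):
    return np.linalg.solve(np.eye(M) - G @ W, G @ x)

def loss(W):
    return 0.5 * np.sum((r_of(W) - t) ** 2)

r = r_of(W)
grad_r = r - t                               # nabla_r L for the quadratic loss
A = np.linalg.solve(np.eye(M) - G @ W, G)    # A = [I - GW]^{-1} G
grad_closed = (np.outer(r, grad_r) @ A).T    # (r [nabla_r L]^T A)^T

# Finite-difference gradient for comparison.
eps, grad_fd = 1e-6, np.zeros((M, M))
for j in range(M):
    for k in range(M):
        Wp = W.copy(); Wp[j, k] += eps
        Wm = W.copy(); Wm[j, k] -= eps
        grad_fd[j, k] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.max(np.abs(grad_fd - grad_closed)))  # ~1e-9 if the identity holds
```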
### Analysis of a natural reparameterization and its linear approximation
We now consider the updates given by the reparameterization \(A=\left[G^{-1}-W\right]^{-1}\). Writing \(\boldsymbol{x}:=A^{-1}\boldsymbol{r}\), the direct reparameterized update \(\Delta W_{2}\) in this case is given by

\[\Delta W_{2} =-\left[\left(A-\eta_{A}(\nabla_{\boldsymbol{r}}L)(\boldsymbol{x})^{T}\right)^{-1}-A^{-1}\right]\\ =-\left[\left(\left[G^{-1}-W\right]^{-1}-\eta_{A}(\nabla_{\boldsymbol{r}}L)(\boldsymbol{r})^{T}\left[G^{-1}-W^{T}\right]\right)^{-1}-\left[G^{-1}-W\right]\right].\]
Proof.: Since \(A=\left[G^{-1}-W\right]^{-1}\), we have \(W=G^{-1}-A^{-1}\). Let \(W^{0}\) and \(A^{0}\) denote the values of \(W\) and \(A\) before the update step, and note that \(G\) is held fixed during the update, so \(G=G^{0}\). Then

\[\Delta W =W-W^{0}\\ =G^{-1}-A^{-1}-\left(\left[G^{0}\right]^{-1}-[A^{0}]^{-1}\right)\\ =\left(G^{-1}-\left[G^{0}\right]^{-1}\right)-A^{-1}+[A^{0}]^{-1}\\ =-\left(A^{0}+\Delta A\right)^{-1}+[A^{0}]^{-1}\\ =-\left(A^{0}-\eta_{A}\left(\nabla_{\boldsymbol{r}}L\right)(\boldsymbol{x})^{T}\right)^{-1}+[A^{0}]^{-1}\\ =-\left(A^{0}-\eta_{A}\left(\nabla_{\boldsymbol{r}}L\right)(\boldsymbol{r})^{T}\left[A^{0}\right]^{-T}\right)^{-1}+[A^{0}]^{-1}.\]
To get the expression that has only \(G\) and \(W\), we can substitute \(A=[G^{-1}-W]^{-1}\) and \(A^{-1}=G^{-1}-W\), and use \(G=G^{T}\) and \(G^{-T}=G^{-1}\) since \(G\) is a diagonal matrix. This gives
\[\Delta W_{2} =-\left(A-\eta_{A}\left(\nabla_{\boldsymbol{r}}L\right)( \boldsymbol{r})^{T}\,A^{-T}\right)^{-1}+A^{-1}\] \[=-\left(\left[G^{-1}-W\right]^{-1}-\eta_{A}\left(\nabla_{ \boldsymbol{r}}L\right)(\boldsymbol{r})^{T}\left[G^{-1}-W^{T}\right]\right)^{- 1}+\left[G^{-1}-W\right].\]
Note that as \(G_{jj}\to 0\), \(A_{jj}^{-1}=[G_{jj}]^{-1}-W_{jj}\to\infty\), so this reparameterization is poorly behaved in situations where \(G_{jj}=f^{\prime}(\boldsymbol{z}_{j})\) becomes small or zero, because the second term in the sum diverges while the first term does not.
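The divergence is easy to see numerically; a small sketch with hypothetical values:

```python
# A small numerical illustration (hypothetical values): as a diagonal entry of
# G shrinks, A^{-1} = G^{-1} - W blows up, so the reparameterized update diverges.
import numpy as np

W = np.array([[0.2, 0.1],
              [0.0, 0.3]])
for g in [1e-1, 1e-3, 1e-6]:
    G = np.diag([g, 0.5])
    A_inv = np.linalg.inv(G) - W
    print(g, np.max(np.abs(A_inv)))   # grows like 1/g
```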
We also show that linearizing this parameterization around \(\eta_{A}=0\) still leads to updates that diverge when elements of \(G\) become small. Following the linearization from Section 2.4, the linearized, reparameterized update is given by
\[\Delta W_{3} =-\eta_{A}A^{-1}(\nabla_{\boldsymbol{r}}L)(\boldsymbol{x})^{T}A^{ -1}\] \[=-\eta_{A}\left[G^{-1}-W\right](\nabla_{\boldsymbol{r}}L)( \boldsymbol{r})^{T}\left[G^{-1}-W^{T}\right]\left[G^{-1}-W\right].\]
Proof.: First note that \(\left.\Delta W_{2}\right|_{\eta_{A}=0}=0\), so we have to linear order in \(\eta_{A}\),
\[\Delta W_{2}=\left.\frac{d\Delta W_{2}}{d\eta_{A}}\right|_{\eta_{A}=0}\eta_{A }+\mathcal{O}(\eta_{A}^{2}) \tag{38}\]
Now let
\[V=A+\Delta A=A-\eta_{A}(\nabla_{\boldsymbol{r}}L)\left(A^{-1}\boldsymbol{r} \right)^{T}\]
then \(\Delta W_{2}=A^{-1}-V^{-1}\) so
\[\frac{d\Delta W_{2}}{d\eta_{A}} =\frac{dA^{-1}}{d\eta_{A}}-\frac{dV^{-1}}{d\eta_{A}}\] \[=V^{-1}\frac{dV}{d\eta_{A}}V^{-1}\]
since \(dA^{-1}/d\eta_{A}=0\). Combining this with Eq. (38) and the definition of \(V\) gives the linearized update,
\[\Delta W_{3} =\left.V^{-1}\frac{dV}{d\eta_{A}}V^{-1}\right|_{\eta_{A}=0}\eta_{A}\\ =V^{-1}\left(-\left(\nabla_{\boldsymbol{r}}L\right)\left(A^{-1}\boldsymbol{r}\right)^{T}\right)V^{-1}\Bigg|_{\eta_{A}=0}\eta_{A}\\ =-A^{-1}(\nabla_{\boldsymbol{r}}L)\left(\boldsymbol{r}\right)^{T}A^{-T}A^{-1}\eta_{A}\\ =-\eta_{A}\left[G^{-1}-W\right]\left(\nabla_{\boldsymbol{r}}L\right)\left(\boldsymbol{r}\right)^{T}\left[G^{-1}-W^{T}\right]\left[G^{-1}-W\right].\]
Again, substitute \(A^{-1}=G^{-1}-W\) to get the final expression. Notice that \(\Delta W_{3}=A^{-1}A^{-T}\Delta W_{1}A^{-T}A^{-1}\), so one can let \(B=A^{-1}A^{-T}\) and \(C=A^{-T}A^{-1}\), which are symmetric, and write \(\Delta W_{3}=B\Delta W_{1}C\).
Note, again, that \(\Delta W_{3}\) diverges if elements of \(G\) go to zero. Therefore, the natural reparameterization \(A=[G^{-1}-W]^{-1}\) is not well suited for learning.
### Linearization of the corrected reparameterization
Here, we derive the linearized update, \(\Delta W_{3}\), given in Eq. (18). This update rule is derived by expanding \(\Delta W_{2}\) from Eq. (17) to linear order. Recall that \(\Delta W_{2}\) was derived from the reparameterization \(A=[G-GWG]^{-1}\). Let \(U=[G^{-1}-W]^{-1}\), then we can rewrite Eq. (17) as
\[\Delta W_{2}=-\left[U-\eta_{A}G^{2}\left(\nabla_{\boldsymbol{r}}L\right) \left(\boldsymbol{r}\right)^{T}\left[I-GW\right]^{T}G\right]^{-1}+U^{-1}.\]
Now, denote everything inside of the inverse as \(V\) so
\[V=U-\eta_{A}G^{2}\left(\nabla_{\boldsymbol{r}}L\right)\left(\boldsymbol{r} \right)^{T}\left[I-GW\right]^{T}G.\]
Then Eq. (17) can be further rewritten as
\[\Delta W_{2}=U^{-1}-V^{-1}.\]
Now, following the same approach as Appendix A.2, note that \(\left.\Delta W_{2}\right|_{\eta_{A}=0}=0\), so the linearization of \(\Delta W_{2}\) around \(\eta_{A}=0\) is given by
\[\Delta W_{3} =\left.\frac{d\Delta W_{2}}{d\eta_{A}}\right|_{\eta_{A}=0}\eta_{A}\] \[=\left[\frac{dU^{-1}}{d\eta_{A}}-\left(-V^{-1}\frac{dV}{d\eta_{A} }V^{-1}\right)\right]_{\eta_{A}=0}\eta_{A}\] \[=V^{-1}\left(-G^{2}\left(\nabla_{\boldsymbol{r}}L\right)\left( \boldsymbol{r}\right)^{T}\left[I-GW\right]^{T}G\right)V^{-1}\Bigg{|}_{\eta_{A} =0}\eta_{A}\] \[=U^{-1}\left(-G^{2}\left(\nabla_{\boldsymbol{r}}L\right)\left( \boldsymbol{r}\right)^{T}\left[I-GW\right]^{T}G\right)U^{-1}\eta_{A}\] \[=-\eta_{A}[G^{-1}-W]G^{2}\left(\nabla_{\boldsymbol{r}}L\right) \left(\boldsymbol{r}\right)^{T}\left[I-GW\right]^{T}G[G^{-1}-W]\] \[=-\eta_{A}\left[I-WG\right]G\left(\nabla_{\boldsymbol{r}}L\right) \left(\boldsymbol{r}\right)^{T}\left[I-GW\right]^{T}[I-GW].\] |
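The agreement between \(\Delta W_{2}\) and its linearization \(\Delta W_{3}\) can be checked numerically; the sketch below uses random placeholder values for \(G\), \(W\), \(\boldsymbol{r}\), and \(\nabla_{\boldsymbol{r}}L\), and verifies that the discrepancy shrinks quadratically in \(\eta_{A}\).

```python
# A numerical sanity check that the linearized update Delta_W3 matches the exact
# reparameterized update Delta_W2 to first order in eta_A. G, W, r and g_r
# (standing in for nabla_r L) are random placeholder values.
import numpy as np

rng = np.random.default_rng(1)
M = 4
G = np.diag(rng.uniform(0.2, 0.8, M))
W = 0.1 * rng.standard_normal((M, M))
r = rng.standard_normal(M)
g_r = rng.standard_normal(M)

I = np.eye(M)
U = np.linalg.inv(np.linalg.inv(G) - W)      # U = [G^{-1} - W]^{-1}
outer = np.outer(g_r, r)                     # (nabla_r L) r^T

for eta in [1e-1, 1e-2, 1e-3]:
    V = U - eta * G @ G @ outer @ (I - G @ W).T @ G
    dW2 = -np.linalg.inv(V) + np.linalg.inv(U)
    dW3 = -eta * (I - W @ G) @ G @ outer @ (I - G @ W).T @ (I - G @ W)
    print(eta, np.linalg.norm(dW2 - dW3))    # shrinks like eta**2
```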
2305.05318 | How Informative is the Approximation Error from Tensor Decomposition for
Neural Network Compression? | Tensor decompositions have been successfully applied to compress neural
networks. The compression algorithms using tensor decompositions commonly
minimize the approximation error on the weights. Recent work assumes the
approximation error on the weights is a proxy for the performance of the model
to compress multiple layers and fine-tune the compressed model. Surprisingly,
little research has systematically evaluated which approximation errors can be
used to make choices regarding the layer, tensor decomposition method, and
level of compression. To close this gap, we perform an experimental study to
test if this assumption holds across different layers and types of
decompositions, and what the effect of fine-tuning is. We include the
approximation error on the features resulting from a compressed layer in our
analysis to test if this provides a better proxy, as it explicitly takes the
data into account. We find the approximation error on the weights has a
positive correlation with the performance error, before as well as after
fine-tuning. Basing the approximation error on the features does not improve
the correlation significantly. While scaling the approximation error commonly
is used to account for the different sizes of layers, the average correlation
across layers is smaller than across all choices (i.e. layers, decompositions,
and level of compression) before fine-tuning. When calculating the correlation
across the different decompositions, the average rank correlation is larger
than across all choices. This means multiple decompositions can be considered
for compression and the approximation error can be used to choose between them. | Jetze T. Schuurmans, Kim Batselier, Julian F. P. Kooij | 2023-05-09T10:12:26Z | http://arxiv.org/abs/2305.05318v2 | How informative is the Approximation Error from Tensor Decomposition for Neural Network Compression?
###### Abstract
Tensor decompositions have been successfully applied to compress neural networks. The compression algorithms using tensor decompositions commonly minimize the approximation error on the weights. Recent work assumes the approximation error on the weights is a proxy for the performance of the model to compress multiple layers and fine-tune the compressed model. Surprisingly, little research has systematically evaluated which approximation errors can be used to make choices regarding the layer, tensor decomposition method, and level of compression. To close this gap, we perform an experimental study to test if this assumption holds across different layers and types of decompositions, and what the effect of fine-tuning is. We include the approximation error on the features resulting from a compressed layer in our analysis to test if this provides a better proxy, as it explicitly takes the data into account. We find the approximation error on the weights has a positive correlation with the performance error, before as well as after fine-tuning. Basing the approximation error on the features does not improve the correlation significantly. While scaling the approximation error commonly is used to account for the different sizes of layers, the average correlation across layers is smaller than across all choices (i.e. layers, decompositions, and level of compression) before fine-tuning. When calculating the correlation across the different decompositions, the average rank correlation is larger than across all choices. This means multiple decompositions can be considered for compression and the approximation error can be used to choose between them.
## 1 Introduction
Tensor Decompositions (TD) have shown potential for compressing pre-trained models, such as convolutional neural networks, by replacing the optimized weight tensor with a low-rank multi-linear approximation with fewer parameters (Jaderberg et al., 2014; Lebedev et al., 2015; Kim et al., 2016; Garipov et al., 2016; Kossaifi et al., 2019). Common compression procedures (Lebedev et al., 2015; Garipov et al., 2016; Hawkins et al., 2021) work by iteratively applying TD on a selected weight tensor, where each time several _decomposition choices_ have to be made regarding (i) the layer to compress, (ii) the type of decomposition, and (iii) the compression level. Selecting the best hyperparameters for these choices at a given iteration requires costly re-evaluation of the full model for each option. Recently, Liebenwein et al. (2021) suggested comparing the _approximation errors_ on the decomposed weights as a more efficient alternative, though they only considered matrix decompositions for which analytical bounds on the resulting performance exist. These bounds rely on the Eckart-Young-Mirsky theorem. For TD, no equivalent theorem is possible (Vannieuwenhoven et al., 2014). While theoretical bounds are not available for more general TD methods, the same concept could still be practical when considering TDs too. We summarize this as the following general assumption:
**Assumption 1**.: _A lower TD approximation error on a model's weight tensor indicates better overall model performance after compression._
While this assumption appears intuitive and reasonable, we observe several gaps in the existing literature: First, most existing TD compression literature only focuses on a few decomposition choices, e.g. fixing the TD method (Lebedev et al., 2015; Kim et al., 2016). Although various error measures and decomposition choices have been studied in separation, no prior work systematically compares different decomposition errors across multiple decomposition choices. Second, different decomposition errors with different properties have been used throughout the literature (Jaderberg et al., 2014), and it is unclear if some error measure should be preferred. Third, a benefit of TD is that no training data is needed for compression, though if labeled data is available, more recent methods combine TD with a subsequent fine-tuning step. Is the approximation error equally valid for the model performance with and without fine-tuning?
Overall, to the best of the authors' knowledge, no prior work investigates if and which decomposition choices for TD network compression can be made using specific approximation errors. This paper studies empirically to what extent a single decomposition error correlates with the compressed model's performance across varied decomposition choices, identifying how existing procedures could be improved, and providing support for specific practices. Our contributions are as follows:
* A first empirical study is proposed on the correlation between the approximation error on the model weights that result from compression with TD, and the performance of the compressed model\({}^{1}\). Studied decomposition choices include the layer, multiple decomposition methods (CP, Tucker, and Tensor Train), and the level of compression. Measurements are made using several models and datasets. We show that the error is indicative of model performance, even when comparing multiple TD methods, though useful correlation only occurs at the higher compression levels. Footnote 1: The code for our experiments is available at [https://github.com/JSchuurmans/tddl](https://github.com/JSchuurmans/tddl).
* Different formulations of the approximation error measure are compared, including measuring the error on the features, as motivated by the works of Jaderberg et al. (2014) and Denil et al. (2014), which consider the data distribution. We further study how using labeled training data for additional fine-tuning affects the correlation.
## 2 Related work
There is currently no systematic study on how well the approximation error relates to a compressed neural network's performance across multiple choices of network layers, TD methods, and compression levels. We here review the most similar and related studies where we distinguish works with theoretical versus empirical validation, different approximation error measures, and the role of fine-tuning after compression.
The relationship between the approximation error on the weights and the performance of the model was studied by theoretical analysis for matrix decompositions. Liebenwein et al. (2021) derive bounds on the model performance for SVD-based compression on the convolutional layers, and thus motivate that the SVD approximation error is a good proxy for the compressed model performance. Arora et al. (2018) derive bounds on the generalization error for convolutional layers based on a compression error from their matrix projection algorithm. Baykal et al. (2019) show how the amount of sparsity introduced in a model's layers relates to its generalization performance. While these works show that some theoretical bounds can be found for specific compression methods, such bounds are not available for TDs in general. Other works, therefore, study the relationship for TD empirically. For instance, Lebedev et al. (2015) show how CP decomposition rank affects the approximation error, and the resulting accuracy drop as the rank is decreased. Hawkins et al. (2022) observe that, for networks with repeated layer blocks, the approximation error depends on the convolutional layers within the block.
When considering the model's final task performance, the approximation error on the weights might not be the most relevant measure. To consider the effect on the actual data distribution, Jaderberg et al. (2014) instead propose to compute an error on the approximated output features of a layer after its weights have been compressed. They found that compressing weights by minimizing the error on features, rather than the error on the weights, results in a smaller loss in classification accuracy. However, Jaderberg et al. (2014) do not fine-tune the decomposed model, and only use a toy model with few layers. Denil et al. (2014) try to capture the information from the data via the empirical
covariance matrix. Although this method eliminates the need for multiple passes over the data during the compression step, it is limited to a two-dimensional case. Eo et al. (2021) forego looking at a compression error altogether, and selects the rank based on the accuracy on the validation set. This requires the labels to be present and a forward pass through the whole network, even if the compressed layer is near the input.
Several norms have been used in the literature to quantify the approximation error, which is the difference between the pretrained weights and the compressed weights. The works that decompose pretrained layers (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015; Novikov et al., 2015; Kim et al., 2016) explicitly minimize the Frobenius norm. Hawkins et al. (2021); Liebenwein et al. (2021) calculate the relative Frobenius norm, i.e., the norm of the error proportional to the norm of the pretrained weights, to compare the error for layers of different sizes. Still, it remains unclear which error measure is most informative for the compressed model's final performance.
In practice, when training data is still available for compression, fine-tuning for the target task after compressing weights could recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). Adding fine-tuning results in a three-step process: pretrain, compress and fine-tune. Optimization thus alternates between minimizing the error of respectively the features, the weights, and finally the features again. While Denton et al. (2014); Kim et al. (2016) compare compressed model performance before and after fine-tuning, they do not investigate how the fine-tuned network performance relates to the weight compression error. Lebedev et al. (2015) does study the compression error for CP decomposition, but only reports performance with fine-tuning.
## 3 Methodology
We consider the task of compressing a pretrained neural network with TD. While TD is a general technique that could be applied to many types of layers, we focus on convolutional layers due to their ubiquity and their suitability for comparing different types of higher-order decompositions, since the layer weights are four-dimensional tensors.
Generally, a compression procedure will iteratively apply TD to the weights of selected layers, making several choices on how and what weight tensor to decompose, while ideally maintaining as much of the original network's performance as possible. In its original uncompressed form, the full-rank weight tensor \(\textbf{W}\in\mathbb{R}^{C\times H\times W\times T}\) of a layer represents a local optimum in the network's parameter space with respect to the training data and loss, where \(C\) is the number of input channels, \(H\) and \(W\) are the height and width of the convolutional kernel, and \(T\) is the number of output channels. When a TD is applied to the weights of a specific layer, this results in a factorized structure \(\widehat{\textbf{W}}\) composed of multiple smaller tensor multiplications, which replaces the original weights in the network. Each time TD is applied, several decomposition choices need to be evaluated:
1. _Layer:_ The layer \(l\) from the set of network layers \(\mathbb{L}=\{1,2,\cdots,L\}\) to decompose.
2. _Method:_ The type of TD method \(m\in\mathbb{M}=\{\text{CP},\text{Tucker},\text{Tensor Train}\}\). The decomposition determines the factorized structure of \(\widehat{\textbf{W}}\).
3. _Compression:_ The compression level \(c\in\mathbb{C}\) for the selected layer. Here \(\mathbb{C}\subset\ (0,1]\) is some finite set of testable levels, and \(c=0.75\) means the number of parameters is reduced by 75% and the factorized layer contains only 25% of the parameters. A given compression level is achieved by decomposing the tensor to some rank \(\mathcal{R}\), depending on the selected TD method (see Section 3.3).
We will refer to \(\mathbb{H}=\mathbb{L}\times\mathbb{M}\times\mathbb{C}\) as the set with possible hyperparameter values for \((l,m,c)\) to consider. Note that compression procedures in the literature might only consider a subset of these choices. For example, a procedure might fix the layer for a given iteration or only consider a single TD method. In practice, it is computationally infeasible to evaluate the compressed network's performance for every possible hyperparameter choice at every compression iteration, especially when optimizing for performance after fine-tuning. Instead, automated compression procedures will efficiently compare an approximation error \(a_{i}=e(\widehat{\textbf{W}}_{i},\textbf{W})\) between the original and decomposed weights using a particular choice of hyperparameters \(h_{i}\in\mathbb{H}\). In doing so one relies on Assumption 1 that a lower approximation error is indicative of better compressed performance \(p_{i}\). If annotated training data is available, additional network fine-tuning on the decomposed structure could result in improved performance \(p_{i}^{*}>p_{i}\).
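For concreteness, the hypothesis set can be enumerated as a simple Cartesian product; in the sketch below the layer indices are illustrative placeholders.

```python
# A sketch of enumerating the hypothesis set H = L x M x C; the layer indices
# are illustrative placeholders.
from itertools import product

layers = [1, 2, 3, 4, 5]                       # L
methods = ["cp", "tucker", "tt"]               # M
compressions = [0.10, 0.25, 0.50, 0.75, 0.90]  # C

H = list(product(layers, methods, compressions))
print(len(H))  # |L| * |M| * |C| = 75 candidate decomposition choices
```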
In this work, we propose to focus on a single iteration and investigate Assumption 1 in isolation of any specific compression procedure. Our aim is thus to assess how well computing an approximation error \(e\) can predict the optimal compression choice from some hypothesis set \(\mathbb{H}\), i.e. the choice that results in the lowest compressed network performance error \(p\), or even performance after fine-tuning \(p^{\star}\). We will study the correlation between approximation error and model performance empirically in our experiments, using the procedure and correlation metric explained in Section 3.1. Details on the different approximation errors that we will explore are covered in Section 3.2. Finally, the considered TD methods are explained in Section 3.3.
### Empirical evaluation of error-performance correlation
Our proposed empirical evaluation procedure will evaluate a large set of hyperparameters \(\mathbb{H}=\{h_{1},h_{2},\cdots\}\) on multiple convolutional neural networks and datasets (see Section 4) for different options of approximation error metric (see Section 3.2). For a given model, dataset, and approximation error metric \(e\), the procedure evaluates for each set of hyperparameter choices \(h_{i}\in\mathbb{H}\) the approximation error \(a_{i}=e(\widehat{\mathbf{W}}_{i},\mathbf{W})\), the model performance error \(p_{i}\) on the validation split, and the model performance error \(p_{i}^{*}\) after additional fine-tuning on the training data. We thus obtain sets of measurements \(\mathbb{A}=\{a_{1},a_{2},\cdots\}\), \(\mathbb{P}=\{p_{1},p_{2},\cdots\}\) and \(\mathbb{P}^{*}=\{p_{1}^{*},p_{2}^{*},\cdots\}\) for \(\mathbb{H}\).
When comparing two sets of hyperparameters \(h_{i}\in\mathbb{H}\) and \(h_{j}\in\mathbb{H}\), we want to establish if the set with the smaller approximation error results in a smaller performance error of the model. In other words, the concordance of pairs of measurements needs to be established. Concordant pairs have a larger (smaller) performance error when the approximation error is larger (smaller) between two sets of hyperparameter choices, i.e. \(i\) and \(j\) are concordant if \(a_{i}>a_{j}\) and \(p_{i}>p_{j}\) or if \(a_{i}<a_{j}\) and \(p_{i}<p_{j}\), and discordant otherwise. Kendall's \(\tau\) is a measure of the rank correlation (Kendall, 1938), or ordinal association, between two ordered sets, in our case between the approximation errors \(\mathbb{A}\) and the model performances \(\mathbb{P}\) (or \(\mathbb{P}^{*}\)). To avoid confusion with the concept of tensor rank, we will refer to Kendall's \(\tau\) simply as _correlation_. For this correlation measure, the difference between the number of concordant pairs (\(k\)) and discordant pairs (\(d\)) is scaled with the binomial coefficient \(m(m-1)/2\) to account for the different ways two measurements can be sampled from a total of \(m\) measurements:
\[\tau=2(k-d)/(m(m-1)). \tag{1}\]
Kendall's \(\tau\) can be interpreted as follows: \(\tau=1\) indicates a perfect positive rank correlation, \(\tau=0\) no correlation, and \(\tau=-1\) a perfect negative correlation. For a set of hyperparameters \(\mathbb{H}\), a useful approximation error \(e\) would thus result in a \(\tau\) close to \(\pm 1\), indicating it is predictive of the model's performance. Note that Kendall's \(\tau\) does not depend on assumptions about the underlying distribution, whereas Pearson correlation assumes a linear relationship between the two measurements. Kendall's \(\tau\) is used over Spearman's \(\rho\) because the interpretation of concordant and discordant pairs for Kendall's \(\tau\) is closely related to our use case of choosing between hyperparameter sets.
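In practice, the correlation can be computed directly from the paired measurements; a sketch with illustrative values, where scipy's `kendalltau` implements a tie-corrected variant of Eq. (1):

```python
# A sketch of computing Kendall's tau between approximation errors A and
# performance errors P; the values are illustrative, and scipy's kendalltau
# implements a tie-corrected variant of Eq. (1).
from scipy.stats import kendalltau

approx_errors = [0.12, 0.35, 0.08, 0.50, 0.27]  # a_i for hypotheses h_i
perf_errors   = [0.09, 0.20, 0.07, 0.31, 0.18]  # p_i for the same hypotheses

tau, p_value = kendalltau(approx_errors, perf_errors)
print(f"tau = {tau:.2f}")  # close to +1 when the error predicts performance
```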
### Approximation errors
We now discuss various measures \(e(\widehat{\mathbf{W}},\mathbf{W})\) to quantify the approximation error. The basis is to compute some norm of the difference between these tensors; in this work, we use the Frobenius norm, as is common in the literature (Lebedev et al., 2015; Hawkins et al., 2022). We shall consider three options to scale the norm, which could help make the error more robust when comparing hypotheses with different layers. Additionally, we can consider two options to compute the error on, namely either directly the weights or on the features. In total, we shall thus explore six different approximation errors in this work. An overview is presented in Table 1.
**Normalization.** The norm of the difference between the weights is referred to as the **absolute** norm and is used in the objective function when decomposing the pretrained weights. The **relative** norm is used in the TD literature to compare errors between different layers (Lebedev et al., 2015; Hawkins et al., 2022), as it is invariant to the size of the weights. Alternatively, the norm of the difference can be **scaled** to account for the number of parameters, while keeping the distance from the weights.
**Target tensor.** The most common option is to compute the approximation error on the decomposed layer's _weights_, \(\mathbf{W}\). However, Jaderberg et al. (2014) achieved promising results basing the decomposition on the approximation error of the features. Errors in some elements of the weight tensor
might be more permissible if they do not affect the resulting feature space. We therefore also consider the expected error on the _features_ \(\mathbf{F}=\mathbf{X}\cdot\mathbf{W}\), which is the output tensor obtained by convolving the input data \(\mathbf{X}\) with the weights \(\mathbf{W}\). Likewise, approximated weights \(\widehat{\mathbf{W}}\) result in approximated features \(\widetilde{\mathbf{F}}\). In practice, computing the feature-based error requires input data and is computationally more demanding than computing the weight approximation error. However, it is potentially more representative of an approximation's effect on the output, and, unlike a later fine-tuning step, it can be used even if only unlabeled data is available.
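The six error measures (summarized in Table 1 below) are straightforward to compute; a sketch for one convolutional layer, where the decomposed weights and the input batch are placeholders:

```python
# A sketch of the six approximation errors for a single conv layer. W_hat is a
# placeholder for the decomposed weights, and X is a batch of inputs used to
# approximate the expectation over the data (per-sample norms averaged here).
import torch
import torch.nn.functional as functional

W = torch.randn(64, 32, 3, 3)              # conv weights
W_hat = W + 0.01 * torch.randn_like(W)     # placeholder decomposed weights
X = torch.randn(16, 32, 28, 28)            # input batch

dW = torch.norm(W - W_hat)
weight_abs = dW
weight_rel = dW / torch.norm(W)
weight_scaled = dW / W.numel()

F = functional.conv2d(X, W)
F_hat = functional.conv2d(X, W_hat)
dF = torch.norm((F - F_hat).flatten(1), dim=1)            # per-sample norms
feat_abs = dF.mean()
feat_rel = (dF / torch.norm(F.flatten(1), dim=1)).mean()
feat_scaled = (dF / F[0].numel()).mean()
```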
### Tensor Decomposition methods
Our experiments shall consider three popular decomposition methods for convolutional layers, namely CP (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015), Tucker (Kim et al., 2016), and Tensor Train (TT) (Garipov et al., 2016). During the decomposition step, the decomposed weights are found by minimizing the approximation error between the pretrained weights and the estimated decomposition: \(\arg\min_{\widehat{\mathbf{W}}}||\mathbf{W}-\widehat{\mathbf{W}}||\). For CP this is done with ALS (Carroll and Chang, 1970; Harshman, 1972), for Tucker with HOSVD (De Lathauwer et al., 2000), and for Tensor Train with TT-SVD (Oseledets, 2011). The ALS algorithm requires a random initialization. We sample from a uniform distribution [0,1) using Tensorly (Kossaifi et al., 2019b). The desired compression level is achieved by finding the corresponding rank, using the package Tensorly-Torch (Kossaifi et al., 2019b). The ranks used for CP, Tucker, and TT are given in Appendix A.1. For completeness, we list all considered decompositions for a 4-way tensor \(\mathbf{W}\).
**CP decomposition.** A rank-\(R\) CP decomposition (Hitchcock, 1927) sums \(R\) rank-one tensors:
\[\widetilde{\mathbf{W}}_{c,y,x,t}^{\text{CP}}=\sum_{r=1}^{R}\mathbf{C}_{c,r}\mathbf{Y}_ {y,r}\mathbf{X}_{x,r}\mathbf{T}_{t,r}. \tag{2}\]
**Tucker decomposition.** A Tucker decomposition (Tucker, 1966) is distinguished from CP by the Tucker core \(\mathbf{G}\in\mathbb{R}^{R_{1}\times R_{2}\times R_{3}\times R_{4}}\). The Tucker rank is defined as the four-tuple \((R_{1},R_{2},R_{3},R_{4})\). Since the width and height dimensions of the convolutional weights are small, it is computationally more efficient to contract these modes with the Tucker core \(\mathbf{G}\) and form a new core \(\mathbf{H}=\mathbf{G}\times_{2}\mathbf{Y}\times_{3}\mathbf{X}\), where \(\times_{n}\) is the \(n\)-mode product (Appendix A.2):
\[\widetilde{\mathbf{W}}_{c,y,x,t}^{\text{Tucker}}=\sum_{r_{1}=1}^{R_{1}}\sum_{ r_{2}=1}^{R_{2}}\sum_{r_{3}=1}^{R_{3}}\sum_{r_{4}=1}^{R_{4}}\mathbf{G}_{r_{1},r_{2},r_{3}, r_{4}}\mathbf{C}_{c,r_{1}}\mathbf{Y}_{y,r_{2}}\mathbf{X}_{x,r_{3}}\mathbf{T}_{t,r_{4}}=\sum_{r_{1} =1}^{R_{1}}\sum_{r_{4}=1}^{R_{4}}\mathbf{H}_{r_{1},y,x,r_{4}}\mathbf{C}_{c,r_{1}} \mathbf{T}_{t,r_{4}}. \tag{3}\]
**Tensor Train decomposition.** Another alternative is the Tensor Train decomposition (Oseledets, 2011), which decomposes a given tensor as a linear chain of 2-way and 3-way tensors, where the first and last tensors are 2-way. The TT-rank in our four-dimensional case is the 3-tuple \((R_{1},R_{2},R_{3})\):
\[\widetilde{\mathbf{W}}_{c,y,x,t}^{\text{TT}}=\sum_{r_{1}=1}^{R_{1}}\sum_{r_{2} =1}^{R_{2}}\sum_{r_{3}=1}^{R_{3}}\mathbf{C}_{c,r_{1}}\mathbf{Y}_{r_{1},y,r_{2}}\mathbf{X}_ {r_{2},x,r_{3}}\mathbf{T}_{r_{3},t}. \tag{4}\]
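All three decompositions, and the relative weight error they induce, can be computed with Tensorly; in the sketch below the weight tensor is random and the ranks are illustrative, not the compression-matched ranks of Appendix A.1.

```python
# A sketch of fitting the three decompositions above with Tensorly and
# reporting the relative weight error; the tensor and ranks are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker, tensor_train

W = tl.tensor(np.random.randn(32, 3, 3, 64))     # (C, H, W, T) conv weights

cp_fac = parafac(W, rank=16)                     # Eq. (2), fitted with ALS
tk_fac = tucker(W, rank=[8, 3, 3, 16])           # Eq. (3), SVD-based fitting
tt_fac = tensor_train(W, rank=[1, 8, 8, 16, 1])  # Eq. (4), via TT-SVD

for name, W_hat in [("CP", tl.cp_to_tensor(cp_fac)),
                    ("Tucker", tl.tucker_to_tensor(tk_fac)),
                    ("TT", tl.tt_to_tensor(tt_fac))]:
    print(name, float(tl.norm(W - W_hat) / tl.norm(W)))  # relative weight error
```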
## 4 Experiments
This section provides the implementation details and discusses the results of our empirical approach.
| | **Absolute** | **Relative** | **Scaled** |
| --- | --- | --- | --- |
| Weight | \(\lVert\mathbf{W}-\widehat{\mathbf{W}}\rVert\) | \(\lVert\mathbf{W}-\widehat{\mathbf{W}}\rVert/\lVert\mathbf{W}\rVert\) | \(\lVert\mathbf{W}-\widehat{\mathbf{W}}\rVert/n_{\mathbf{W}}\) |
| Feature | \(\mathbb{E}_{\mathbf{X}}\left[\lVert\mathbf{F}-\widetilde{\mathbf{F}}\rVert\right]\) | \(\mathbb{E}_{\mathbf{X}}\left[\lVert\mathbf{F}-\widetilde{\mathbf{F}}\rVert/\lVert\mathbf{F}\rVert\right]\) | \(\mathbb{E}_{\mathbf{X}}\left[\lVert\mathbf{F}-\widetilde{\mathbf{F}}\rVert/n_{\mathbf{F}}\right]\) |

Table 1: Overview of the approximation errors \(e(\widehat{\mathbf{W}},\mathbf{W})\) considered in our evaluation. Rows show the target tensor to evaluate the error on, and columns the different options for error normalization. \(n_{\mathbf{W}}\) and \(n_{\mathbf{F}}\) are the number of elements in the weight tensor \(\mathbf{W}\) and feature tensor \(\mathbf{F}\) respectively.
### Experimental setup
**Datasets.** The experiments are run on the datasets CIFAR-10 (Krizhevsky, 2009) and Fashion-MNIST (Xiao et al., 2017). These datasets are common classification benchmarks for testing TD in CNNs (Cheng et al., 2021; Denil et al., 2014; Wu et al., 2020; Garipov et al., 2016; Hawkins et al., 2021). For both datasets, the original training sets are split into sets used for training and validation. The split is made such that equal class distributions are maintained. The details specific to the datasets are as follows: **CIFAR-10** has 10 classes distributed equally across 60,000 images of \(32\times 32\) pixels with 3 color channels. After our validation split, there are 45,000 images in the training set and 5,000 in the validation set. The test set of 10,000 images remains unchanged. **Fashion-MNIST** has 10 classes distributed equally across 70,000 grayscale images of \(28\times 28\) pixels. After our validation split, there are 55,000 images in the training set and 5,000 in the validation set.
**Model architecture and training.** The models used are ResNet-18 (He et al., 2016) and GaripovNet (Garipov et al., 2016). ResNet is a well-performing state-of-the-art convolutional neural network. GaripovNet is a 7-layer convolutional neural network proposed in Garipov et al. (2016) and used by Hawkins et al. (2022) for image classification. These models enable comparison with other works within and beyond TD for deep learning (Gusak et al., 2019; Garipov et al., 2016; Hawkins et al., 2022; Chu & Lee, 2021; Kossaifi et al., 2020). The following hyperparameters are used: **ResNet-18** is trained with batch size 128, for 300 epochs, with the Adam optimizer and a learning rate of \(10^{-3}\). At epochs 100 and 150 the learning rate is multiplied by 0.1. **GaripovNet** is trained with the same settings as the original paper (Garipov et al., 2016). The model is trained with Stochastic Gradient Descent (SGD) with a momentum of 0.9 and a learning rate of 0.1, multiplied by 0.1 at epochs 30, 60, and 90.
The validation set is used for early stopping and the selection of the training hyperparameters, i.e. learning rate, schedule, level of annealing, batch size, and optimizer. Training data is augmented with a random crop (padding with 4 pixels and cropping to the original size) and a random horizontal flip. All images are standardized based on the per-channel training mean and standard deviation over all training samples. Early stopping is applied both when training the baseline and when fine-tuning the decomposed model. The classification error on the test set is used for the performance errors \(\mathbb{P}\). To fine-tune after decomposition and obtain the performance errors \(\mathbb{P}^{*}\), ResNet-18 is optimized for another 25 epochs, and GaripovNet for 10 epochs, using the last learning rate from training.
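For reference, the GaripovNet schedule described above could be set up in PyTorch roughly as follows, where `model`, `num_epochs`, and `train_one_epoch` are assumed to be defined elsewhere.

```python
# A sketch of the GaripovNet training schedule (SGD, momentum 0.9, lr 0.1,
# decayed by 0.1 at epochs 30/60/90). `model`, `num_epochs` and
# `train_one_epoch` are assumed to exist elsewhere.
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60, 90], gamma=0.1)

for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)
    scheduler.step()
```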
**Decomposition choices.** We now explain the considered values for the decomposition choices \(\mathbb{H}\) explained in Section 3. For both neural network models, neither the first nor the last layer will be decomposed, as these layers already contain a relatively small number of parameters. For GaripovNet the other five layers are part of \(\mathbb{L}\). For ResNet-18, \(\mathbb{L}\) contains a selection of eight convolutional layers, details of which can be found in Appendix A.3. The set of TD methods that will be considered is \(\mathbb{M}=\{\text{CP, Tucker, Tensor Train}\}\), which were discussed in Section 3.3. The set of compression levels is \(\mathbb{C}=\{10\%,25\%,50\%,75\%,90\%\}\). Multiple levels of compression are considered as each neural network layer can have different efficiency-performance trade-offs (Lebedev et al., 2015). In the experiments, we evaluate ResNet-18 on CIFAR-10, GaripovNet on CIFAR-10, and GaripovNet on F-MNIST. We exclude ResNet-18 on F-MNIST as the dataset is not sufficiently challenging for this model, and compressing one layer does not lead to a visible impact on the performance due to the model's size and skip-connections.
**Variance.** The process of decomposing and fine-tuning is repeated for five independent runs for each choice of layer, decomposition method, and compression level to assess and report variance in the results. Note that due to the stochasticity of the ALS algorithm, the random initializations can result in different CP decomposition estimates. The variance in correlation shown in the plots without fine-tuning results from the randomness in the CP initialization. Fine-tuning adds additional variance through its use of batched SGD. The observed variance with fine-tuning accounts for both the randomness from CP initialization as well as from fine-tuning, thereby representing all sources of randomness in our methodology. All runs for a given model and dataset are based on the same pretrained weights, so this is not a source of reported variance. In total, this results in 600 measurements for ResNet-18 and 375 measurements for GaripovNet per dataset.
### Experimental results
**Impact of compression levels on correlation.** We start by calculating the correlation across the layers and decomposition methods for multiple runs and calculate the averages grouped by compression levels. This is presented in Figure 1, where the bars are the average correlation \(\tau\) between the Relative Weights error and the classification error. The correlation is only based on the Relative Weights, as this is the most common metric in recent literature (Lebedev et al., 2015; Hawkins et al., 2022; Liebenwein et al., 2021). The correlations are grouped by the different levels of compression and represented by different colors. The error bars are \(\pm 1\) standard deviation, representing the variance from multiple runs.
In Figure 1, it can be seen that the larger the compression, the higher the correlation is. This is a positive result for our use case. In the end, we are interested in making decomposition choices when compressing. The more we compress, the higher the correlation and therefore the more certain we are that basing our choice on the approximation error results in the optimal choice. It can also be noted that a certain level of compression is needed to be able to make choices based on the approximation error. For both models and datasets, the correlation is small when the compression is only 10% and 25%. The variance in the correlation at smaller compression levels is larger than at higher compression levels. When the compression is too small the effect on the performance of the model is too small compared to the observed variance, especially after fine-tuning. In the remainder of the experiments, we therefore focus on compression levels of \(\mathbb{C}=\{50\%,75\%,90\%\}\).
**Comparison of approximation error measures.** Works such as Liebenwein et al. (2021) have used a single approximation error, e.g. Relative Weights, to identify which layer to compress next, implicitly assuming that relative errors between layers are indicative of the relative model performance differences. We here compare the various approximation error measures, testing the correlation with performance over all decomposition choices. In Figure 2, the correlation is calculated based on measurements of all combinations of layer, decomposition method, and compression level once. The correlations are averaged over runs, and the \(\pm 1\) standard deviation is likewise calculated over the runs.
Figure 2 shows that the correlations are generally positive and significantly different from zero. This means that the decomposition choices can (to some extent) be based on the approximation error. There is one exception where the correlation is close to zero, namely for the Absolute Weights measure on ResNet-18. The difference in correlations can be explained by the difference in approximation error between layers; a detailed explanation is provided in Appendix A.4. These results suggest that Absolute-based approximation errors, while they may show high correlations in some cases, are not generally reliable indicators of model performance, and that normalized measures should be used instead.
Figure 1: Correlation \((\tau)\) over layers and decompositions, grouped by the level of compression (color), using the Relative Weights approximation error. Shown in the bars are averages and standard deviation over runs. We observe that a higher compression results in a larger correlation.
Comparing the different approximation error metrics, we observe the highest correlation with the performance for Relative Weights in all tested cases. Interestingly, the magnitude of the correlation for Feature-based measures is similar to or smaller than the correlation for Weight-based measures. Although the findings of Jaderberg et al. (2014) would suggest a stronger correlation for the features, at least before fine-tuning, we do not observe benefits for basing decomposition choices on the approximation error of the features rather than the weights. Possibly pretraining already ensures all the elements in the weight tensor are equally important for the target data distribution, thus a Weight-based error already reliably reflects resulting errors on the output features. Comparing the error bars with and without fine-tuning, the randomness from fine-tuning has a larger impact on the variance in correlation than the randomness from CP initialization. In summary, our results support the use of the Relative Weights approximation error to make decomposition choices.
**Impact of fine-tuning.** Most works use fine-tuning to recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). The right subfigure of Figure 2 shows the mean and standard deviation of five correlations, per model and per dataset after fine-tuning. After fine-tuning, the correlation between the approximation error and the performance error is smaller than before fine-tuning for GaripovNet, as additional training adapts the model and reduces the performance gap between the different choices, but this effect is not observed for ResNet-18 where the correlation was already lower. However, for both models, there is still a clear positive correlation between the approximation error and the performance after fine-tuning. This means that decomposition choices can still be based on the approximation error when intending to perform fine-tuning later, even though different hyperparameters might be optimal without and with fine-tuning.
While the correlation is positive and significantly different from zero, the correlation is only around +0.5 for ResNet-18. We therefore investigate if the correlation is higher when only considering specific decomposition choices next.
**Correlation across Layers vs. Methods.** In the previous experiments, we compared how decomposition choices on both different layers and methods correlated with performance. Here we investigate if the correlation is stronger when only one of these choices is considered. For instance, previous works often only include layers as a decomposition choice, and have not compared across decomposition methods. We compare correlation on _all_ choices for both sets (\(\mathbb{L}\times\mathbb{M}\times\mathbb{C}\)) as before, to _layers_ only (\(\mathbb{L}\times\{m\}\times\{c\}\) with reported results averaged for all \(m\in\mathbb{M}\) and \(c\in\mathbb{C}\)), and to _methods_ only (\(\{l\}\times\mathbb{M}\times\{c\}\) with reported results averaged for all \(l\in\mathbb{L}\) and \(c\in\mathbb{C}\)).
Figure 3 shows that before fine-tuning the approximation error has a lower correlation with the performance of the model when considering layers only compared to all decomposition choices. Not all layers of a neural network have the same efficiency-performance trade-off (Lebedev et al., 2015; Hawkins et al., 2021). Therefore, the correlation is lower when we fix the decomposition method and compression level. It is better to combine layers with compression levels (and decomposition
Figure 2: Correlation \((\tau)\) calculated over layers, decompositions, and compression levels, averaged (and standard deviation) over runs, for different approximation errors (colors). The Relative Weights approximation error provides the highest correlation, both before and after fine-tuning.
methods). However, fine-tuning recovers some of the correlation for layers. Across decomposition methods, the correlation before fine-tuning is comparable to the correlation calculated across all decomposition choices. These results suggest decomposition methods can be compared better than just the layers before fine-tuning, although the former is not an optimization choice considered in previous works. Interestingly, for GaripovNet the correlation across decomposition methods drops significantly after fine-tuning. We find that this is due to difficulties in optimizing the CP decomposed layers, since the gradient flow through CP convolutions is a known problem (Silva & Lim, 2008; Lebedev et al., 2015), whereas ResNet does not suffer from this due to its skip connections. We conclude that (unlike current practice) network compression could consider multiple decomposition methods as their approximation errors can be compared, though most reliably when aiming for compression without fine-tuning.
## 5 Conclusion
We have tested Assumption 1, and find that there is a positive correlation between the relative approximation error on the weights and the resulting performance error of the model for a wide range of TD choices, including layers, methods, and compression levels. We further find that using data to compute the approximation error on the features, rather than simply on the model weights directly, does not improve the correlation. Scaling the approximation error with the norm of the original tensor provides the highest and most stable correlation across all compared models and datasets. Our findings suggest that the Relative Weights approximation error is best suited to select among TD decomposition choices.
While these choices can be made across layers, TD methods, and compression levels, we observe that the correlation before fine-tuning is smaller when comparing between layers for a fixed method, than when comparing across methods (here: CP, Tucker, and Tensor Train) for a fixed layer. Integrating multiple types of decompositions within a network compression technique is therefore a potential direction for future work, although care has to be taken when the use case includes later fine-tuning, as the correlation for selecting across decomposition methods can degrade since back-propagation through certain factorized structures remains challenging.
Our experiments are limited to a set of decomposition choices and network layers commonly found in the TD literature. Future work can extend to other decompositions and other types of neural network layers, e.g. fully connected layers. While the weights are matrices, tensor decomposition has been applied to fully connected layers by reshaping the weight matrix into a higher-order tensor. The choice of reshaping then becomes an additional decomposition choice.
Figure 3: Correlation \((\tau)\) calculated over different decomposition choices (colors) for layers and decomposition methods using the Relative Weights error (_all_ is identical to its results in Fig. 2). The reported correlations are averaged over all not-compared choices and runs. The standard deviations are only calculated over runs to make the three groups comparable. Before fine-tuning, the correlation across layers is lower than across methods, though interestingly this pattern is reversed by fine-tuning.
## Reproducibility
The authors find it important that this work is reproducible. To this end, the following efforts have been made: The datasets and test-validation splits are described in Section 4.1. The datasets are collected from PyTorch Vision. The models and hyperparameters used for training are covered in Section 4.1. The implementations of the baseline models are from the PyTorch Model Zoo. The models are factorized with Tensorly-Torch (Kossaifi et al., 2019), using the CP initialization described in Section 4.1 and the ranks provided in Appendix A.1. The experimental setup is explained in Section 4.1. The calculation of metrics is formulated in Section 3. Finally, the code to reproduce these experiments is available at: [https://github.com/JSchuurmans/tddl](https://github.com/JSchuurmans/tddl).
### Acknowledgments
Described results are made possible in part by TU Delft Cohesion subsidy, TERP Cohesion project 2020.
|
2305.12664 | Deep Quantum Neural Networks are Gaussian Process | The overparameterization of variational quantum circuits, as a model of
Quantum Neural Networks (QNN), not only improves their trainability but also
serves as a method for evaluating the property of a given ansatz by
investigating their kernel behavior in this regime. In this study, we shift our
perspective from the traditional viewpoint of training in parameter space into
function space by employing the Bayesian inference in the Reproducing Kernel
Hilbert Space (RKHS). We observe the influence of initializing parameters using
random Haar distribution results in the QNN behaving similarly to a Gaussian
Process (QNN-GP) at wide width or, empirically, at a deep depth. This outcome
aligns with the behaviors observed in classical neural networks under similar
circumstances with Gaussian initialization. Moreover, we present a framework to
examine the impact of finite width in the closed-form relationship using a $
1/d$ expansion, where $d$ represents the dimension of the circuit's Hilbert
space. The deviation from Gaussian output can be monitored by introducing new
quantum meta-kernels. Furthermore, we elucidate the relationship between GP and
its parameter space equivalent, characterized by the Quantum Neural Tangent
Kernels (QNTK). This study offers a systematic way to study QNN behavior in
over- and under-parameterized scenarios, based on the perturbation method, and
addresses the limitations of tracking the gradient descent methods for
higher-order corrections like dQNTK and ddQNTK. Additionally, this
probabilistic viewpoint lends itself naturally to accommodating noise within
our model. | Ali Rad | 2023-05-22T03:07:43Z | http://arxiv.org/abs/2305.12664v1 | # Deep Quantum Neural Networks are Gaussian Process
###### Abstract
The overparameterization of variational quantum circuits, as a model of Quantum Neural Networks (QNN), not only improves their trainability but also serves as a method for evaluating the property of a given ansatz by investigating their kernel behavior in this regime. In this study, we shift our perspective from the traditional viewpoint of training in parameter space into function space by employing the Bayesian inference in the Reproducing Kernel Hilbert Space (RKHS). We observe the influence of initializing parameters using random Haar distribution results in the QNN behaving similarly to a Gaussian Process (QNN-GP) at wide width or, empirically, at a deep depth. This outcome aligns with the behaviors observed in classical neural networks under similar circumstances with Gaussian initialization. Moreover, we present a framework to examine the impact of finite width in the closed-form relationship using a \(1/d\) expansion, where \(d\) represents the dimension of the circuit's Hilbert space. The deviation from Gaussian output can be monitored by introducing new quantum meta-kernels. Furthermore, we elucidate the relationship between GP and its parameter space equivalent, characterized by the Quantum Neural Tangent Kernels (QNTK). This study offers a systematic way to study QNN behavior in over- and under-parameterized scenarios, based on the perturbation method, and addresses the limitations of tracking the gradient descent methods for higher-order corrections like dQNTK and ddQNTK. Additionally, this probabilistic viewpoint lends itself naturally to accommodating noise within our model.
## I Introduction
The capacity for learning and the trainability of quantum circuits is an intriguing subject that merits further exploration and discovery. How trainable variational quantum circuits and quantum neural networks (QNNs) are poses a significant challenge that needs to be tackled. The standard approach is to train a QNN in _parameter space_: the aim is to find the optimum point in the loss landscape using local gradient-based methods. However, these methods might encounter the Barren Plateau problem [14], where the magnitude of the gradient decreases exponentially as the size of the Hilbert space increases.
Alternatively, we can shift the training process into _function space_, viewing QNNs as linear combinations of kernels that live in the Reproducing Kernel Hilbert Space (RKHS) [20; 21]. The learning task in the parametric space of gate parameters can be transformed into the task of identifying the correct coefficients of a kernel regression, whose number is of the order of the size of the input dataset. This is typically much smaller than the dimension of the Hilbert space. The problem of identifying these coefficients is usually more convex than the original gradient descent problem in parameter space.
Although the quantum kernel's behavior in different quantum circuit regimes is still a largely uncharted area, recent findings suggest that kernel-based methods can be more successful than gradient methods in mitigating the Barren Plateau problem in under-parameterized regimes[2; 18]. With this evidence, we can now turn our attention in this work to the other part of the spectrum: the over-parameterized regime. We hope that this will enhance our understanding and lay the groundwork for more generic and practical applications in the NISQ era. In this regime, both the functional and parametric methods present more straightforward and comprehensible approaches. Similar to classical Neural Networks (CNNs)[7], the output of the QNN can be determined using the kernel regression method. The kernel employed in this lazy training method is referred to as the Quantum Neural Tangent Kernel (QNTK)[1; 10; 12; 13; 22].
An interesting observation has been made regarding CNNs when they are over-parameterized through wide layers or deep networks: their output tends to converge towards a Gaussian distribution and a Gaussian Process (GP) [8; 9; 15; 16]. Given this observation, it is feasible to replace the prior distribution over parameters in classical neural networks with a Gaussian process prior. This shifts the focus from parameter space to the space of functions. A Gaussian process is a probabilistic model that defines a distribution over functions. The idea is to place a prior probability on functions, specified as a Gaussian process that determines the shape of the function space, and then update this prior with the observed data to obtain a posterior distribution over functions. In contrast to kernel regression, in which one typically needs to make decisions about the kernel hyperparameters, in Gaussian process regression these are typically learned from the data. Furthermore, while kernel regression provides a point estimate, Gaussian process regression provides a full probabilistic model, allowing for uncertainty estimation in predictions. We can thus think of Gaussian process regression as a probabilistic, kernel-based method: a generalization of kernel regression that also provides uncertainty estimates.
### Quantum Meta Kernels
We begin by studying the standard structure of variational quantum circuits, integral to quantum neural network models. The structure is given by:
\[U(\boldsymbol{\theta})=\prod_{\ell=1}^{L}U_{\ell}(\theta_{\ell})W_{\ell}, \tag{1}\]

where \(U_{\ell}(\theta_{\ell})\) represents a unitary gate with a variational parameter \(\theta_{\ell}\), and \(W_{\ell}\) stands for a fixed, unparameterized section of the circuit. In a common scenario, \(U_{\ell}(\theta_{\ell})\) can be represented as \(U_{\ell}(\theta_{\ell})=e^{i\theta_{\ell}X_{\ell}}\), where \(X_{\ell}\) is a Hermitian operator. The layer depth is denoted by \(L\), which equals the number of parameters \(\boldsymbol{\theta}=\{\theta_{1},\cdots,\theta_{L}\}\). By defining \(\tilde{U}_{\ell}(\theta_{\ell})(\rho):=U_{\ell}(\theta_{\ell})\rho U_{\ell}^{\dagger}(\theta_{\ell})\) and \(\tilde{W}_{\ell}(\rho):=W_{\ell}\rho W_{\ell}^{\dagger}\), we can express the whole quantum circuit as a quantum model that, for a given input \(\rho_{\alpha}\), returns an output (array or scalar):
\[f_{i,\alpha}=\text{Tr}(\mathcal{U}(\mathbf{\theta})(\rho_{\alpha})O_{i}) \tag{2}\]
such that
\[\mathcal{U}(\mathbf{\theta})(\cdot)=\bigcirc_{\ell=1}^{L}(\tilde{U}_{\ell}(\theta _{\ell})\circ\tilde{W}_{\ell})(\cdot) \tag{3}\]
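A minimal numerical sketch of this model, with random placeholder generators, fixed blocks, state, and observable, makes the construction concrete:

```python
# A minimal numpy sketch of Eqs. (1)-(3): a layered circuit built from
# parameterized rotations exp(i*theta_l*X_l) and fixed unitaries W_l, read out
# through an observable O. All operators here are random placeholders.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, L = 8, 6                                  # Hilbert-space dimension, depth

def rand_herm(d):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (M + M.conj().T) / 2

def rand_unitary(d):
    Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # Haar-random

theta = rng.uniform(0, 2 * np.pi, L)
X = [rand_herm(d) for _ in range(L)]         # gate generators X_l
Wf = [rand_unitary(d) for _ in range(L)]     # fixed blocks W_l
O = rand_herm(d)                             # observable

psi = np.zeros(d); psi[0] = 1.0
rho = np.outer(psi, psi.conj())              # input state rho_alpha

U = np.eye(d, dtype=complex)
for l in range(L):
    U = expm(1j * theta[l] * X[l]) @ Wf[l] @ U

f = np.real(np.trace(U @ rho @ U.conj().T @ O))  # Eq. (2)
print(f)
```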
In order to analyze the probabilistic behavior of the outputs of our quantum models, it is necessary to examine the \(k\)-point correlation function, given by
\[\mathbb{E}_{\mathbf{\theta}}[f_{i_{1};\alpha_{1}}(\mathbf{\theta})\cdots f_{i_{k}; \alpha_{k}}(\mathbf{\theta})]=\mathbb{E}_{\mathbf{\theta}}[\text{Tr}\Big{(}\mathcal{ U}^{(k)}(\mathbf{\theta})(\rho_{\alpha}^{\otimes k})O^{\otimes k}\Big{)}] \tag{4}\]
where \(\mathcal{U}^{(k)}\) represents an extension of Eq. (3), defined as follows:
\[\mathcal{U}^{(k)}(\mathbf{\theta})(\rho_{\alpha}^{\otimes k}):=\bigcirc_{\ell=1}^ {L}(\tilde{U}_{\ell}^{\otimes k}(\theta_{\ell})\circ\tilde{W}_{\ell}^{\otimes k })(\rho_{\alpha}^{\otimes k}) \tag{5}\]
If the gates in a quantum model satisfy the t-design condition
\[\frac{1}{|S|}\sum_{s=1}^{|S|}P_{t,t}(U_{s})=\int_{\mathcal{U}(d)}d\mu(U)\,P_{t,t}(U) \tag{6}\]
then it is possible to analytically evaluate the expectation values. This holds true for overparameterized and deep circuits, where evidence suggests that the \(t\)-design condition can be met. In such cases, we can derive analytical, closed-form expressions for these expectations over random Haar-measure ensembles:
\[\int_{\mathcal{U}(d)}U_{i_{1}j_{1}}\cdots U_{i_{p}j_{p}}\,U^{\dagger}_{i^{\prime}_{1}j^{\prime}_{1}}\cdots U^{\dagger}_{i^{\prime}_{p}j^{\prime}_{p}}\,d\mu(U)=\sum_{\alpha,\beta\in S_{p}}\delta_{i_{1}i^{\prime}_{\alpha(1)}}\cdots\delta_{i_{p}i^{\prime}_{\alpha(p)}}\,\delta_{j_{1}j^{\prime}_{\beta(1)}}\cdots\delta_{j_{p}j^{\prime}_{\beta(p)}}\,W_{g,d}(\alpha^{-1}\beta), \tag{7}\]
where \(W_{g,d}\) is the Weingarten function defined on the symmetric group \(S_{p}\) [3; 17].
To gain a deeper understanding of the non-Gaussian characteristics of the distribution, it is often beneficial to study the connected \(n\)-point correlator. This can be achieved by removing the contributions of the vacuum
Figure 1: Encoding classical or quantum data can be seen in two ways. (Top): Encoding in a feature space, which is a Hilbert space, \(|\phi(x)\rangle\in F\). The quantum circuit evolves the feature vector using parameters \(\theta\) to predict the label. Training aims to find the optimal \(\theta\) that best meets the chosen criterion, such as minimizing the loss function, for accurate label prediction. (Bottom): Encoding data as a quantum kernel \(K(\mathcal{X},\cdot)\). One example of a quantum kernel is the fidelity kernel, defined as \(K^{F}(x,x^{\prime})=|\langle\phi(x),\phi(x^{\prime})\rangle|^{2}=\text{Tr}[\rho(x)\rho(x^{\prime})]\). The Reproducing Kernel Hilbert Space (RKHS) is the feature space with a canonical feature map such that \(K(\cdot,x)\in\mathcal{H}\) for all \(x\in\mathcal{X}\) and the reproducing property \(f(x)=\langle f,K(\cdot,x)\rangle_{\mathcal{H}}\) holds. In this approach, training a quantum circuit amounts to finding the optimal coefficients \(\{c_{i}\}\) that describe the output function \(f(x)=\sum_{i}c_{i}K(x,x_{i})\) based on the observed data points \(\{x_{i}\}\); this technique is commonly referred to as kernel regression. The method can be extended to Bayesian inference, i.e., probabilistic kernel regression, predicting \(p(f)\) instead of \(f\).
state in our observable, according to Wick's theorem:
\[\mathbb{E}_{\mathbf{\theta}}[f_{1}f_{2}\cdots f_{n}]=\mathbb{E}_{\mathbf{\theta}}[f_{1}f_{2}\cdots f_{n}]\big{|}_{\text{connected}}\\ +\sum_{\text{all pairings}}\mathbb{E}_{\mathbf{\theta}}[f_{\alpha_{1,1}}\cdots f_{\alpha_{1,m_{1}}}]\cdots\mathbb{E}_{\mathbf{\theta}}[f_{\alpha_{k,1}}\cdots f_{\alpha_{k,m_{k}}}], \tag{8}\]
where the term "all pairings" denotes all feasible partitions into subgroups with a collective count of \(m\). By shifting \(f\to f-\mathbb{E}[f]\), we can observe that \(\mathbb{E}[f]|_{\text{connected}}=\mathbb{E}[-f]|_{\text{connected}}\); this symmetry implies that the odd-order correlators vanish.
Through careful calculation, as described in the appendix, we derive the following result for the two-point correlator:
\[\mathbb{E}[f_{i_{1},\alpha_{1}}f_{i_{2},\alpha_{2}}]|_{\text{ connected}}\\ =\frac{\operatorname{Tr}\left(O_{i_{2}}O_{i_{1}}\right)}{d-d^{3} }+\frac{\operatorname{Tr}\left(O_{i_{2}}O_{i_{1}}\right)\operatorname{Tr} \left(\rho_{\alpha_{1}}\rho_{\alpha_{2}}\right)}{d^{2}-1}\\ +\frac{\operatorname{Tr}\left(O_{i_{1}}\right)\operatorname{Tr} \left(O_{i_{2}}\right)}{d^{2}-1}+\frac{\operatorname{Tr}\left(O_{i_{1}}\right) \operatorname{Tr}\left(O_{i_{2}}\right)\operatorname{Tr}\left(\rho_{\alpha_{1} }\right)\operatorname{Tr}\left(\rho_{\alpha_{2}}\right)}{d^{2}} \tag{9}\]
and similarly, for the four-point correlator we obtain:
\[\mathbb{E}[f_{i_{1},\alpha_{1}}f_{i_{2},\alpha_{2}}f_{i_{3}, \alpha_{3}}f_{i_{4},\alpha_{4}}]|_{\text{connected}}\\ =\frac{d^{4}-8d^{2}+6}{d^{2}(d^{6}-14d^{4}+49d^{2}-36)}\\ [\operatorname{Tr}\left(\rho_{\alpha_{1}}\rho_{\alpha_{2}}\rho_{ \alpha_{3}}\rho_{\alpha_{4}}\right)\times\mathcal{V}_{4}(\{O_{i}\})\\ +\operatorname{Tr}\left(\{\rho_{\alpha_{i}}\rho_{\alpha_{j}}\rho _{\alpha_{k}}\}\right)\times\mathcal{V}_{3}(\{O_{i}\}))\\ +\mathcal{O}(\frac{1}{d^{5}}). \tag{10}\]
Here, \(\mathcal{V}_{k}\) represents a linear combination of functions that map subsets of \(\{O_{i}\},\{\rho_{i}\},i\in[1,\cdots k]\) to scalar values using the \(\operatorname{Tr}(\cdot)\) operator.
For large values of \(d\), we can approximate the correlators by an expansion in the Hilbert-space dimension:
\[\mathbb{Q}:=\mathbb{E}[f_{1}f_{2}]=\frac{Q^{[2]}}{d^{2}}+\frac{Q^{[3]}}{d^{3}}+\cdots:=\hat{\mathbb{Q}}+\hat{\hat{\mathbb{Q}}}+\mathcal{O}\Big{(}\frac{1}{d^{4}}\Big{)} \tag{11}\]
and
\[\mathbb{V}:=\mathbb{E}[f_{1}f_{2}f_{3}f_{4}]=\frac{V^{[4]}}{d^{4}}+\frac{V^{[5]}}{d^{5}}+\cdots:=\hat{\mathbb{V}}+\hat{\hat{\mathbb{V}}}+\mathcal{O}\Big{(}\frac{1}{d^{6}}\Big{)} \tag{12}\]
such that for example:
\[\mathbb{E}[f_{i_{1},\alpha_{1}}f_{i_{2},\alpha_{2}}]|_{\text{conn.}}=\frac{1}{ d^{2}}[\operatorname{Tr}(\rho_{\alpha_{1}}\rho_{\alpha_{2}})\operatorname{Tr}(O_{i_{1 }}O_{i_{2}})]+\mathcal{O}(\frac{1}{d^{3}}) \tag{13}\]
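The leading-order prediction of Eq. 13 can be checked numerically. The following minimal Monte Carlo sketch samples Haar-random unitaries and compares the empirical correlator with \(\operatorname{Tr}(\rho_{\alpha_{1}}\rho_{\alpha_{2}})\operatorname{Tr}(O_{i_{1}}O_{i_{2}})/d^{2}\); the identical input states and traceless diagonal observables are illustrative assumptions chosen so the leading term dominates.

```python
# A minimal Monte Carlo sketch of Eq. (13): for Haar-random U and traceless
# observables, E[f1 f2] is close to Tr(rho1 rho2) Tr(O1 O2) / d^2.
import numpy as np

rng = np.random.default_rng(1)
d, n_samples = 8, 20000

def haar_unitary(d):
    Z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # Haar-random via QR

v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)
rho1 = rho2 = np.outer(v, v.conj())   # identical pure states: Tr(rho1 rho2) = 1
                                      # dominates the 1/d corrections

def traceless_diag(d):
    O = np.diag(rng.normal(size=d))
    return O - (np.trace(O) / d) * np.eye(d)

O1, O2 = traceless_diag(d), traceless_diag(d)

acc = 0.0
for _ in range(n_samples):
    U = haar_unitary(d)
    f1 = np.real(np.trace(U @ rho1 @ U.conj().T @ O1))
    f2 = np.real(np.trace(U @ rho2 @ U.conj().T @ O2))
    acc += f1 * f2

print(acc / n_samples)                                     # Monte Carlo estimate
print(np.real(np.trace(rho1 @ rho2) * np.trace(O1 @ O2)) / d**2)  # Eq. (13)
```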
To examine the impact of the number of layers, denoted as \(L\), we can shift our perspective back to the
Figure 2: In variational quantum circuits, used as a model for quantum neural networks (QNNs), the number of qubits, denoted as \(n\) and defined as \(n=\log(d)\), can be interpreted as the width of the model. To investigate the impact of the width (\(n\)) and depth (\(L\)) on the output distribution, we simulated a quantum circuit with the described structure. Each layer of the circuit consists of random two-qubit gates followed by entanglement gates. The observables used in the simulations were constructed from the Pauli \(Z\) matrix, forming a complete basis. We generated \(10^{5}\) random Haar instances for each circuit with given values of \(n\) and \(L\). The joint distribution \(P(f_{i},f_{j})\) of the output was plotted for two fixed indices \(i\) and \(j\). The input of the circuit was constructed from the MNIST dataset, reduced in size with Principal Component Analysis (PCA), and then mapped to the feature space using the ZZFeatureMap [5]. As observed, as the quantum neural network (QNN) becomes deeper, the output distribution converges more closely towards a Gaussian distribution.
parametric space. In the classical context, it has been observed that \(\mathbb{E}[f_{1}f_{2}]\) can be approximated by averaging \(\mathbb{E}[\langle\nabla_{\mathbf{\theta}}f,\nabla_{\mathbf{\theta}}f\rangle]\), based on Bochner's theorem. This concept has been extended to the quantum scenario in [11; 13]. For a structure like Eq. 1, if we define:
\[[\mathbb{H}_{ij,\alpha\beta}]_{\mu\nu}:=\frac{\partial f_{i,\alpha}}{\partial \theta_{\mu}}\frac{\partial f_{j,\beta}}{\partial\theta_{\nu}} \tag{14}\]
then the evolution of the output can be described by the following differential equation:
\[df_{i,\alpha}=-\sum_{j,\beta}\sum_{\mu,\nu}\eta^{\mu\nu}[\mathbb{H}_{ij,\alpha\beta}]_{\mu\nu}\nabla_{f_{j,\beta}}\mathcal{L}+\mathcal{O}(\eta^{2}). \tag{15}\]
By defining the left and right operators as \(U_{L,\mu}=\prod_{\ell=1}^{\mu-1}W_{\ell}U_{\ell}\), \(U_{R,\mu}=\prod_{\ell=\mu+1}^{L}W_{\ell}U_{\ell}\), \(V_{L,\mu}=U_{L,\mu}W_{\mu}U_{\mu}\), and \(V_{R,\mu}=U_{R,\mu}\), the derivative of the output with respect to a specific parameter (or layer), expressed in a product form, can be represented as follows:
\[[\mathbb{H}_{ij,\alpha\beta}]_{\mu\nu} =\operatorname{Tr}\Bigl{(}U_{R,\mu}^{\dagger}[X_{\mu},U_{\mu}^{ \dagger}W_{\mu}^{\dagger}U_{L,\mu}^{\dagger}O_{i}U_{L,\mu}W_{\mu}U_{\mu}]U_{R,\mu}\rho_{\alpha}\Bigr{)} \tag{16}\] \[\times\operatorname{Tr}\Bigl{(}U_{R,\nu}^{\dagger}[X_{\nu},U_{ \nu}^{\dagger}W_{\nu}^{\dagger}U_{L,\nu}^{\dagger}O_{j}U_{L,\nu}W_{\nu}U_{\nu }]U_{R,\nu}\rho_{\beta}\Bigr{)}\]
Now, by taking the Haar average over all random unitaries and utilizing Equation 7, we obtain
\[\mathbb{E}[\mathbb{H}_{ij,\alpha\beta}]_{\mu\nu}=\frac{2d}{(d-1)(d+1)(d^{2}+d)}\operatorname{Tr}(X_{\mu}X_{\nu})\operatorname{Tr}(O_{i}O_{j})\operatorname{Tr}(\rho_{\alpha}\rho_{\beta})=\frac{c}{d^{2}}\operatorname{Tr}(O_{i}O_{j})\operatorname{Tr}(\rho_{\alpha}\rho_{\beta})+\mathcal{O}\Big{(}\frac{1}{d^{3}}\Big{)} \tag{17}\]
where we have defined \(c:=\frac{\operatorname{Tr}(X_{\mu}X_{\nu})}{d}\), which is typically of order one (\(\mathcal{O}(1)\)). Based on certain considerations [11], we can assume that the kernel is approximately frozen and that the parameters are effectively independent in the later stages of training. For simplicity, let us assume \(\eta_{\mu\nu}=\delta_{\mu\nu}\). The sum over all \(L\) parameters then scales as follows:
\[\sum_{\mu\nu}\eta^{\mu\nu}[H_{ij,\alpha\beta}]_{\mu\nu}\sim\frac{L}{d^{2}} \operatorname{Tr}(O_{i}O_{j})\operatorname{Tr}(\rho_{\alpha}\rho_{\beta})+ \mathcal{O}(\frac{1}{d^{3}}) \tag{18}\]
This result motivates us to seek a property for the quantum neural network akin to what is observed in classical neural networks:
\[\mathbb{Q}(\rho_{x},\rho_{x^{\prime}}) =\mathbb{E}_{\theta\sim p(\theta)}[f_{\theta}(\rho_{x})\cdot f_{ \theta}(\rho_{x^{\prime}})] \tag{19}\] \[\sim\mathbb{E}_{\theta\sim p(\theta)}[(\nabla_{\theta}f_{\theta} (\rho_{x}),\nabla_{\theta}f_{\theta}(\rho_{x^{\prime}}))]=\mathbb{Q}_{\rm NTK}\]
Thus, as we will explore in the subsequent section, the impact of depth, denoted as \(L\), becomes evident through the faster convergence to a Gaussian distribution:
\[f(t)-f(0)\propto(e^{-L\hat{Q}})^{t\eta} \tag{20}\]
## II Bayesian Learning or How to Get rid of Parameters
For a given quantum model, denoted as \(\mathcal{M}\), the associated circuit architecture possesses a collection of parameters that may be either fixed or variable. The variable parameters, referred to as \(\mathbf{\theta}=\{\theta_{i},i\in[1,\cdots,L]\}\), need to be tuned such that the quantum
Figure 3: a) The phase transition of the energy landscape of a quantum neural network (QNN) when transitioning from the underparameterized to the overparameterized regime. In the underparameterized regime, where the number of parameters in the QNN is low compared to the dimension of the Hilbert space (or the degrees of freedom), the QNN's highly non-convex loss landscape contains unfavorable local minima with eigenvalues of similar magnitudes to the global minimum. However, as the number of parameters increases, the landscape becomes simpler and exhibits numerous favorable local minima; in this scenario, training and converging towards the actual global minimum become much easier. b) The effect of overparameterization in quantum neural networks (QNNs) is the emergence of Gaussian-like output distributions. As the QNN becomes overparameterized, meaning that the number of parameters exceeds the amount necessary to fit the data, the output distribution of the QNN tends to exhibit Gaussian characteristics. This observation suggests that overparameterization promotes a smoother and more concentrated output distribution, resembling a Gaussian distribution.
circuit produces the desired output. Since the variational parameters are assumed to be independent of each other, the prior distribution of these parameters is merely the product of the prior distributions of the individual parameters, so that \(p(\mathbf{\theta}|\mathcal{M})=\prod_{\ell=1}^{L}p(\theta_{\ell}|\mathcal{M})\).
Recent investigations have examined the prior distribution \(p(\mathbf{\theta}|\mathcal{M})\) as the basis of initialization strategies for quantum neural networks, such as Gaussian initialization and reduced-domain parameter initialization [23]. These studies suggest that such strategies can enhance convergence speed and alleviate the problem of barren plateaus. However, these assumptions do not guarantee that the global minimum can be reached within a reasonable number of iterations, and there remains a risk of getting trapped in undesirable local minima.
### Marginalization of Parameters
When considering a specific model \(\mathcal{M}\), our prior hypotheses regarding the behavior of the observables we seek to learn are taken into account. The probability distribution describing our prior belief about the outcome of a quantum circuit, denoted as \(p(f_{\mathbb{P}}|\mathcal{M})\), can be expressed as follows:
\[p(f_{\mathbb{P}}|\mathcal{M})=\int\prod_{i}^{L}d\theta_{i}\,p(f_{\mathbb{P}}|\mathbf{\theta},\mathcal{M})\,p(\mathbf{\theta}|\mathcal{M}) \tag{21}\]
Upon observing the data points of interest \(\mathbb{O}\), we can update our beliefs and make predictions for new data points \(\mathbb{P}\) using the Bayes' rule:
\[p(f_{\mathbb{P}}|f_{\mathbb{O}},\mathcal{M})=\int\prod_{i}^{L}d\theta_{i}\,p(f_{\mathbb{P}}|\mathbf{\theta},\mathcal{M})\,p(\mathbf{\theta}|f_{\mathbb{O}},\mathcal{M}) \tag{22}\]
However, this equation can be simplified by integrating out the parameters, resulting in the following equation [19]:
\[p(f_{\mathbb{P}}|f_{\mathbb{O}},\mathcal{M})=\frac{p(f_{\mathbb{P}},f_{ \mathbb{O}}|\mathcal{M})}{p(f_{\mathbb{O}}|\mathcal{M})} \tag{23}\]
This formula allows us to update our beliefs and make predictions based on the observed data points without explicitly involving any parameters. In fact, we bypass the intermediate step of updating the posterior over the parameters, which could otherwise be obtained with schemes such as Maximum A Posteriori (MAP) or Maximum Likelihood Estimation (MLE). For example, the optimal parameter values under MAP are found by maximizing the posterior probability \(\mathbf{\theta}^{*}=\operatorname*{arg\,max}_{\theta}p(\theta|\mathbb{O},\mathcal{M})\).
## III Probabilistic Model of QNN
In this section, we propose a probabilistic approach to quantum neural networks. In this viewpoint, the output of the circuit is represented as a probability distribution, in contrast to the traditional approach where the circuit's output is a delta function over all potential outcomes. The key advantage of this perspective lies in its ability to incorporate uncertainty within the framework, enabling increased flexibility in noise analysis. Furthermore, owing to the inherent probabilistic nature of quantum circuits, the variance around the expectation becomes more pronounced, underscoring its significance.
In a general framework, assuming symmetry around the origin in the output distribution and excluding correlations of odd order, we can characterize the output distribution as [19]
\[p(f)=\frac{e^{-S(f)}}{\mathcal{Z}}, \tag{24}\]
where the action \(S(\cdot)\) is defined as:
\[\begin{split} S(f)&=\frac{1}{2}\sum_{\mu,\nu=1}^{n} K^{\mu\nu}f_{\mu}f_{\nu}\\ &+\sum_{m=2}^{\Lambda}\frac{1}{(2m)!}\sum_{\mu_{1},\cdots\mu_{2m}= 1}C^{\mu_{1}\cdots\mu_{2m}}f_{\mu_{1}}\cdots f_{\mu_{2m}}.\end{split} \tag{25}\]
In this formulation, \(C^{\mu_{1}\cdots\mu_{2m}}\) represents the \(m\)-th order couplings, and \(\Lambda\) serves as the truncation threshold limiting the series expansion to the \(\Lambda\)-th order. Furthermore, the partition function \(\mathcal{Z}\) in the denominator of Equation 24 is defined as:
\[\mathcal{Z}=\int d^{n}fe^{-S(f)} \tag{26}\]
Here, the couplings \(C^{\mu_{1}\cdots\mu_{2m}}\) are fixed by matching the moments of the distribution,
\[\mathbb{E}[f_{1}f_{2}\cdots f_{m}]=\int d^{m}f\,p(f)\,f_{1}f_{2}\cdots f_{m}. \tag{27}\]
As the number of qubits, denoted by \(n\), increases, the Hilbert space dimension expands exponentially as \(d=2^{n}\). In the asymptotic regime where \(d\) approaches infinity, higher-order corrections within the correlation
functions become negligible. Specifically, we observe that \(\mathbb{E}[f_{1}\cdots f_{m}]\sim\mathcal{O}(1/d^{m^{\prime}+2})\) for \(m^{\prime}>0\). This implies that the action can be expanded perturbatively in the dimension of the Hilbert space:
\[S=S^{(0)}+\frac{S^{(2)}}{d^{2}}+\frac{S^{(4)}}{d^{4}}+\mathcal{O}\left(\frac{1}{ d^{6}}\right). \tag{29}\]
This perturbative approach bears resemblance to the \(\frac{1}{d}\) expansion technique employed in statistical mechanics and quantum field theory [19], where \(S^{(0)}\) characterizes the vacuum state of the theory, \(S^{(2)}\) represents the first-order interaction, and so on.
### Infinite Width QNN
In this section, we study the extreme over-parameterized case \(d\rightarrow\infty\), where all terms of higher order than the quadratic interaction can be neglected: \(S\approx S^{(0)}+S^{(2)}/d^{2}\). In this regime, the output distribution of the quantum neural network can be expressed as a Gaussian distribution.
Our learning task, aimed at determining the characteristics of this distribution, entails observing samples from ensembles of data points and subsequently predicting the output of the quantum neural network for new, unseen data points. We represent the observed data points and their corresponding labels as \(\mathbb{O}=\{\rho_{\beta},f_{i,\beta}:=y_{i,\beta};\beta\in[1,\cdots,|\mathbb{O}|],i\in[1,\cdots,n]\}\), where \(|\mathbb{O}|\) denotes the number of observed data points. In this representation, \(\rho_{\beta}\) denotes the density matrix of the \(\beta\)-th data point from the observed ensemble, while \(y_{i,\beta}:=f_{i,\beta}\) represents the true label of this particular data point. Similarly, we define the prediction set as \(\mathbb{P}=\{\rho_{\alpha},f_{j,\alpha};\alpha\in[1,\cdots,|\mathbb{P}|],j\in[1,\cdots,n]\}\). With these two sets, we can represent our updated posterior as follows:
\[p(f_{\mathbb{O}},f_{\mathbb{P}})=\frac{1}{Z}\exp(-\frac{1}{2}\sum_{i,j=1}^{n} \sum_{\mu_{1},\mu_{2}\in\mathbb{P}\cup\mathbb{O}}K^{ij,\mu_{1}\mu_{2}}f_{i, \mu_{1}}f_{j,\mu_{2}}) \tag{30}\]
where \(Z\) is the normalization factor, given by \(|2\pi K|^{n^{2}}\).
Utilizing Bayes' rule, as presented in Equation 23, we can gain deeper insight into the QNN's output distribution for a particular data point, denoted as \(f_{\mathbb{P}}:=f\), based on previously observed data points \(f_{\mathbb{O}}\). Computations similar to those used for classical neural networks, as detailed in [19] and outlined in Appendix A, allow us to express this distribution as a Gaussian distribution, or potentially a Gaussian process, \(p(f)=\frac{1}{Z}\exp(-S)\) with a Gaussian action:
\[p(f) =\frac{1}{Z}\exp(-S),\] \[S =\frac{1}{2}\sum_{\begin{subarray}{c}i,j=1\\ \beta_{1},\beta_{2}\in\mathbb{P}\end{subarray}}^{n}\mathbb{K}^{ij,\beta_{1} \beta_{2}}[f_{i,\beta_{1}}-m_{i,\beta_{1}}^{\infty}][f_{j,\beta_{2}}-m_{j, \beta_{2}}^{\infty}], \tag{31}\]
which is characterized by the following mean and covariance:
\[\begin{split}[m]_{i,\mu}&=\sum_{\lambda,\sigma\in\mathbb{O}}K_{ij,\mu\lambda}K^{ij,\lambda\sigma}y_{i,\sigma}:=K(\mathbb{P},\mathbb{O})K^{-1}(\mathbb{O},\mathbb{O})\mathbf{y},\\ [\mathbb{K}]_{ij,\mu\nu}&=K_{ij,\mu\nu}-\sum_{\lambda,\sigma\in\mathbb{O}}K_{ij,\mu\lambda}K^{ij,\lambda\sigma}K_{ij,\sigma\nu}\\ &=K(\mathbb{P},\mathbb{P})-K(\mathbb{P},\mathbb{O})K^{-1}(\mathbb{O},\mathbb{O})K(\mathbb{O},\mathbb{P}).\end{split} \tag{32}\]
Here, \(K\) is a two-point correlation function representing the kernel function. Therefore, we can establish the explicit relationship between the mean and covariance in terms of the inputs, denoted by \(\{\rho_{\beta}\}_{\beta\in\mathbb{O}}\) and observables \(\{O\}\):
\[[m]_{i,\beta} =\sum_{\alpha_{1},\alpha_{2}\in\mathbb{O}}\mathbb{E}[f_{i}f_{j}]_{ \beta,\alpha_{1}}\mathbb{E}[f_{i}f_{j}]_{\alpha_{1},\alpha_{2}}^{-1}y_{i,\alpha _{2}}\] \[=\sum_{\alpha_{1},\alpha_{2}\in\mathbb{O}}[\text{Tr}(\rho_{\beta }\rho_{\alpha_{1}})][\text{Tr}(\rho_{\mathbf{z}_{1}}\rho_{\mathbf{z}_{2}})]_{\alpha_{1 }\alpha_{2}}^{-1}y_{i,\alpha_{2}}+\mathcal{O}(\frac{1}{d^{2}}) \tag{33}\]
Furthermore, the covariance can be calculated using the following expression:
\[\begin{split}[\mathbb{K}]_{ij,\beta_{1}\beta_{2}}&=\frac{\text{Tr}(O_{i}O_{j})}{d^{2}}\Big{[}\text{Tr}(\rho_{\beta_{1}}\rho_{\beta_{2}})\\ &-\sum_{\alpha_{1},\alpha_{2}\in\mathbb{O}}\text{Tr}(\rho_{\beta_{1}}\rho_{\alpha_{1}})[\text{Tr}(\rho_{\alpha_{1}}\rho_{\alpha_{2}})]_{\alpha_{1}\alpha_{2}}^{-1}\text{Tr}(\rho_{\alpha_{2}}\rho_{\beta_{2}})\Big{]}+\mathcal{O}\Big{(}\frac{1}{d^{4}}\Big{)}.\end{split} \tag{34}\]
In the literature, the term \(\text{Tr}(\rho_{x}\rho_{x^{\prime}})\) is occasionally denoted as the quantum fidelity kernel \(K^{F}(x,x^{\prime})\) [6]. Using this notation, we can make the previous equation more readable:
\[\mathbb{K}_{ij,\mathbb{P}}=\frac{\text{Tr}(O_{i}O_{j})}{d^{2}}\big{[}K^{F}_{\mathbb{P},\mathbb{P}}-K^{F}_{\mathbb{P},\mathbb{O}}(K^{F}_{\mathbb{O},\mathbb{O}})^{-1}K^{F}_{\mathbb{O},\mathbb{P}}\big{]}+\mathcal{O}\Big{(}\frac{1}{d^{4}}\Big{)} \tag{35}\]
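To make Eqs. 33 and 35 concrete, the following minimal NumPy sketch evaluates the leading-order posterior mean and variance using the fidelity kernel; the randomly encoded states and labels are illustrative assumptions.

```python
# A minimal sketch of Eqs. (33) and (35): leading-order Gaussian process
# prediction with the fidelity kernel K^F(x, x') = Tr(rho_x rho_x').
import numpy as np

rng = np.random.default_rng(2)
d, n_obs = 8, 12

def rand_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rhos_obs = [rand_state(d) for _ in range(n_obs)]   # observed encoded inputs
y = rng.normal(size=n_obs)                         # observed labels (illustrative)
rho_new = rand_state(d)                            # prediction input

def fidelity_kernel(rhos_a, rhos_b):
    return np.array([[np.real(np.trace(a @ b)) for b in rhos_b] for a in rhos_a])

K_oo = fidelity_kernel(rhos_obs, rhos_obs) + 1e-8 * np.eye(n_obs)
K_po = fidelity_kernel([rho_new], rhos_obs)

mean = (K_po @ np.linalg.solve(K_oo, y))[0]        # posterior mean, Eq. (33)
var = (fidelity_kernel([rho_new], [rho_new])[0, 0]
       - (K_po @ np.linalg.solve(K_oo, K_po.T))[0, 0])  # posterior variance, Eq. (35)
```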
### Parameter space representation
Within the context of parameter space, our learning endeavor can be seen as the creation of a parametric model \(f_{\text{QNN}}(\rho;\mathbf{\theta})\), accompanied by a distribution \(p(\mathbf{\theta})\). The aim is to locate the best value \(\mathbf{\theta}=\mathbf{\theta}^{*}\), ensuring minimal error given the available computational resources (such as the number of qubits, gates, and observed data points). The goal is for our model to closely approximate the actual distribution and align accurately with the real data points, \(f_{\text{QNN}}(x;\mathbf{\theta}^{*})\approx f(x)\). The hope is that, with proper initialization near the actual global minimum, we can locate this global minimum using gradient descent methods. As quantum neural networks lean towards overparameterization, this assumption becomes more credible. Given this assumption, the
output of the Quantum Neural Network (QNN) can be approximated using a Taylor series:
\[\begin{split} f_{\alpha}^{i,(t)}&=f_{\alpha}^{i,(0)}-\eta\sum_{j,\beta}\mathbb{H}^{ij,\beta\alpha}\epsilon_{j,\beta}\leftrightarrow\mathbb{Q}_{\text{NTK}}\\ &+\frac{\eta^{2}}{2}\sum_{j_{1}j_{2},\beta_{1}\beta_{2}}\text{d}\mathbb{H}^{ij_{1}j_{2},\alpha\beta_{1}\beta_{2}}\epsilon_{j_{1},\beta_{1}}\epsilon_{j_{2},\beta_{2}}\leftrightarrow\text{d}\mathbb{Q}_{\text{NTK}}\\ &-\frac{\eta^{3}}{6}\sum_{j_{1}\cdots\beta_{3}}\text{dd}\mathbb{H}^{ij_{1}\cdots\beta_{3}}\epsilon_{j_{1},\beta_{1}}\epsilon_{j_{2},\beta_{2}}\epsilon_{j_{3},\beta_{3}}\leftrightarrow\text{dd}\mathbb{Q}_{\text{NTK}}\\ &+\mathcal{O}\Big{(}\frac{1}{d^{2}}\Big{)}\end{split} \tag{36}\]
where \(\epsilon=\nabla_{f}\mathcal{L}\) denotes the derivative of the loss function with respect to the output function and \(\mathbb{H}\) is defined as:
\[\mathbb{H}_{ij,\alpha\beta}=\sum_{\mu\nu}\eta_{\mu\nu}\frac{\partial f_{i, \alpha}}{\partial\theta_{\mu}}\frac{\partial f_{j,\beta}}{\partial\theta_{ \nu}}. \tag{37}\]
As the width of the Quantum Neural Network (QNN) approaches infinity, \(\mathbb{H}\) can be approximated by a frozen kernel, referred to as the Quantum Neural Tangent Kernel (QNTK):
\[\mathbb{H}_{ij,\alpha\beta}=\mathbb{Q}_{ij,\alpha\beta}+\mathcal{O}(\frac{1}{d ^{2}}) \tag{38}\]
By comparing the expansion of the parameter space with the functional method outlined in Sec.III, we can draw the following analogy:
\[\begin{split}\mathbb{Q}_{\text{NTK}}:&\mathbb{E}[ \frac{\partial f_{i_{1},\alpha_{1}}}{\partial\theta_{\mu}}\frac{\partial f_{j,\alpha_{2}}}{\partial\theta_{\nu}}]\sim\frac{1}{d^{2}}\operatorname{Tr}( \rho_{\alpha_{1}}\rho_{\alpha_{2}})\operatorname{Tr}(O_{i}O_{j})\\ &\sim\mathbb{E}[f_{i_{1},\alpha_{1}}f_{j,\alpha_{2}}]=K_{ij, \alpha_{1}\alpha_{2}}\end{split} \tag{39}\]
Likewise, the other terms in the expansion of Eq. 36 are known as quantum meta kernels; they correspond to the third- and fourth-order correlations:
\[\begin{split}\text{d}\mathbb{Q}_{\text{NTK}}:&\mathbb{ E}[\frac{\partial^{2}f_{i_{1},\alpha_{1}}}{\partial\theta_{\mu}\partial\theta_{ \nu}}\frac{\partial f_{i_{2},\alpha_{2}}}{\partial\theta_{\lambda}}\frac{ \partial f_{i_{3},\alpha_{3}}}{\partial\theta_{\sigma}}]=0\\ &\sim\mathbb{E}[f_{i_{1},\alpha_{1}}f_{i_{2},\alpha_{2}}f_{i_{3}, \alpha_{3}}],\end{split} \tag{40}\]
and
\[\begin{split}\text{dd}\mathbb{Q}_{\text{NTK}}:&\ \mathbb{E}[\frac{\partial^{3}f}{\partial\theta_{1}\partial\theta_{2}\partial\theta_{3}}\frac{\partial f}{\partial\theta_{4}}\frac{\partial f}{\partial\theta_{5}}\frac{\partial f}{\partial\theta_{6}}]\\ &+\mathbb{E}[\frac{\partial^{2}f}{\partial\theta_{1}\partial\theta_{2}}\frac{\partial^{2}f}{\partial\theta_{3}\partial\theta_{4}}\frac{\partial f}{\partial\theta_{5}}\frac{\partial f}{\partial\theta_{6}}]\\ &\leftrightarrow\mathbb{E}[f_{i_{1},\alpha_{1}}f_{i_{2},\alpha_{2}}f_{i_{3},\alpha_{3}}f_{i_{4},\alpha_{4}}]=\mathbb{V}_{i_{1}i_{2}i_{3}i_{4},\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}}\end{split} \tag{41}\]
Solving Eq. 36 entails dealing with a complex set of differential equations, which generally do not yield a closed-form solution [19]. However, in the limit \(d\rightarrow\infty\), it is acceptable to retain only the linear term. The solution of the differential equation, in terms of training steps (represented here by the continuous variable \(t\)), can be expressed as:
\[\begin{split} f_{i,\alpha}^{(t)}&\simeq m_{i,\alpha} ^{(t)}+\Sigma_{i,\alpha}^{(t)}=\sum_{\beta_{1},\beta_{2}\in\mathbb{O}}\mathbb{Q }_{\alpha\beta_{1}}\mathbb{Q}^{\beta_{1}\beta_{2}}(1-e^{-\eta\mathbb{Q}t})y_{i,\beta_{2}}\\ &+f_{i,\alpha}^{(0)}-\sum_{\beta_{1},\beta_{2}\in\mathbb{O}} \mathbb{Q}_{\alpha\beta_{1}}\mathbb{Q}^{\beta_{1}\beta_{2}}(1-e^{-\eta \mathbb{Q}t})f_{i,\beta_{2}}^{(0)}.\end{split} \tag{42}\]
In the limit of large iteration times, we can determine the output of the circuit (without actually performing any training) as a linear kernel regression:
\[f_{\text{QNN}}(\rho_{x})\approx f_{\mathbb{Q}_{\text{NTK}}}=\mathbb{Q}_{\text {NTK}}(\rho_{x},\rho_{X})^{T}\cdot\mathbb{Q}_{\text{NTK}}(\rho_{X},\rho_{X})^{ -1}\cdot Y, \tag{43}\]
and the optimized parameters at the end of training would be determined as
\[\mathbf{\theta}^{*}=\mathbf{\theta}^{(0)}-\eta\nabla_{\mathbf{\theta}}f\cdot\mathbb{Q}_{ \text{NTK}}^{-1}(\mathbf{f}-\mathbf{y}) \tag{44}\]
To see the connection with the Gaussian process, let's consider the expected output for a particular input \(\rho_{\alpha}\), which can be given as:
\[\mathbb{E}[f_{i,\alpha}^{(t)}]=\sum_{\beta_{1},\beta_{2}\in\mathbb{O}}\mathbb{ Q}_{\alpha\beta_{1}}\mathbb{Q}^{\beta_{1}\beta_{2}}(1-e^{-\eta\mathbb{Q}t})y_{i,\beta_{2}}, \tag{45}\]
which, in the limit of infinite iterations, converges to \(m_{i,\alpha}^{(\infty)}:=\sum_{\beta_{1},\beta_{2}\in\mathbb{O}}\mathbb{Q}_{ \alpha\beta_{1}}\mathbb{Q}^{\beta_{1}\beta_{2}}y_{i,\beta_{2}}\).
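The relaxation in Eq. 45 can be visualized with a small numerical sketch; the kernel below is built from random features purely for illustration, standing in for the frozen QNTK.

```python
# A minimal sketch of Eq. (45): with a frozen kernel Q, the expected output
# relaxes to the kernel-regression prediction as (1 - e^{-eta Q t}).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n_obs = 6
Z = rng.normal(size=(n_obs, 5))            # illustrative feature vectors
z_new = rng.normal(size=5)                 # feature vector of a test point

Q_oo = Z @ Z.T + 1e-3 * np.eye(n_obs)      # frozen kernel on observed points
Q_ao = Z @ z_new                           # kernel between test point and data
y = rng.normal(size=n_obs)                 # observed labels (illustrative)

def mean_at(t, eta=0.1):
    relax = np.eye(n_obs) - expm(-eta * Q_oo * t)   # (1 - e^{-eta Q t})
    return Q_ao @ np.linalg.solve(Q_oo, relax @ y)

for t in [0.0, 1.0, 10.0, 1e3]:
    print(t, mean_at(t))   # converges to Q_ao @ Q_oo^{-1} y as t -> infinity
```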
Likewise, the covariance of the circuit's output will take the following form:
\[\begin{split}&\mathbb{E}[f_{i,\alpha_{1}}^{(t)}f_{j,\alpha_{2}}^{(t)}] |_{\text{connected}}=\mathbb{Q}_{\alpha_{1}\alpha_{2}}\\ &-\sum_{\beta_{1},\beta_{2}\in\mathbb{P}}\mathbb{Q}_{\alpha_{2} \beta_{1}}\mathbb{Q}^{\beta_{1}\beta_{2}}(1-e^{-\eta\mathbb{Q}t})\mathbb{Q}_{ \alpha_{1}\beta_{2}}\\ &-\sum_{\beta_{1},\beta_{2}\in\mathbb{P}}\mathbb{Q}_{\alpha_{1} \beta_{1}}\mathbb{Q}^{\beta_{1}\beta_{2}}(1-e^{-\eta\mathbb{Q}t})\mathbb{Q}_{ \alpha_{2}\beta_{2}}\\ &+\sum_{\beta_{1},\cdots,\beta_{4}\in\mathbb{P}}\mathbb{Q}_{\alpha_{1} \beta_{1}}\mathbb{Q}^{\beta_{1}\beta_{2}}[1-e^{-\eta\mathbb{Q}t}]\mathbb{Q}_{ \alpha_{2}\beta_{3}}[1-e^{-\eta\mathbb{Q}t}]\\ &\times\mathbb{Q}^{\beta_{3}\beta_{4}}\mathbb{Q}_{\beta_{4}\beta_{2}}: =\Sigma_{\alpha_{1}\alpha_{2}}^{ij,t}.\end{split} \tag{46}\]
The higher-order moments are of order \(\mathcal{O}(1/d^{2})\) and can be disregarded. Based on this observation, we can infer that the most likely distribution describing the statistics of the output is a Gaussian process. Upon comparing with the functional description of the output distribution, we obtain the same distribution as predicted by the Gaussian process, particularly as \(t\rightarrow\infty\), upon substituting \(\mathbb{Q}\) with \(\mathbb{K}\):
\[f^{(t)}\sim\mathcal{GP}(m^{(t)},\Sigma^{(t)})\overset{\text{lim}}{\to }\mathcal{GP}(m^{(\infty)},\mathbb{K}) \tag{47}\]
Comparing the linear kernel regression method with the Gaussian process, we have
\[\begin{split}& f\sim f_{\mathbb{Q}_{\text{NTK}}}\leftrightarrow\text{ Kernel Regression}\\ & p(f)\sim\mathcal{N}(\mu,\Sigma)\leftrightarrow\text{Gaussian Process}\end{split} \tag{48}\]
### Finite Width QNN
When working within the finite-width regime of the parameter space, it becomes necessary to consider higher-order terms, such as dQNTK and ddQNTK, which are based on the higher-order derivative terms \(\text{d}\mathbb{H}\) and \(\text{dd}\mathbb{H}\). Using traditional methods to obtain higher-order corrections to the output distribution, such as for dQNTK, can be unfeasible and may not provide a closed-form solution. However, in this section, we demonstrate that by utilizing Bayesian learning, we can obtain higher-order corrections of the output distribution in closed form, at least in the case of dQNTK.
To enhance the model's ability to capture the finite width regime, it becomes necessary to go beyond the Gaussian distribution and utilize representation learning techniques. By incorporating additional correlation functions into the model, it becomes better equipped to accurately represent the underlying reality.
To obtain a nearly-Gaussian (NG) distribution, comparable to ddQNTK, we need to incorporate the next leading order in finite width, which is of \(O(1/d)\). This is because in the nearly-Gaussian distribution, the odd terms are zero or equivalently, dQNTK is zero.
\[S_{\text{NG}}:=\frac{1}{2}\sum_{\mu,\nu}^{n}K^{\mu\nu}f_{\mu}f_{\nu}-\frac{ \lambda}{4!}\sum_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}=1}^{n}V^{\nu_{1}\nu_{2}\nu_{3} \nu_{4}}f_{\nu_{1}}f_{\nu_{2}}f_{\nu_{3}}f_{\nu_{4}}. \tag{49}\]
such that
\[\begin{split} K^{\alpha_{1}\alpha_{2}}&=\hat{ \mathbb{Q}}^{\alpha_{1}\alpha_{2}}+\mathcal{O}(\frac{1}{d^{3}})\\ V^{\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}}&= \hat{\mathbb{V}}^{(\alpha_{1}\alpha_{2})(\alpha_{3}\alpha_{4})}+\mathcal{O}( \frac{1}{d^{5}})\end{split} \tag{50}\]
By employing the same calculations as we did for the infinite width scenario, we can determine the contributions of finite width structures to the mean and covariance:
\[\begin{split}&\mathbb{E}[f_{i,\beta}]=m_{i,\beta}^{\infty}+\frac{ 1}{3!}\sum_{j_{1},\beta\in\mathbb{P}}K_{ij_{1},\beta\beta_{1}}\\ &\times[\sum_{\begin{subarray}{c}j_{2}j3j_{4}\\ \alpha_{2}\alpha_{3}\alpha_{4}\in\mathbb{O}\end{subarray}}V^{ji_{1}j_{2}j_{3} j_{4},\beta_{2}\alpha_{3}\alpha_{4}}y_{j_{2},\alpha_{2}}y_{j_{3},\alpha_{3}}y_{j_{4}, \alpha_{4}}]\\ &\sum_{\begin{subarray}{c}j_{1}j_{2}j_{3}\\ \beta_{1}\beta_{2}\beta_{3}\in\mathbb{P}\end{subarray}}[K_{ij_{1},\beta_{ \beta}\beta_{1}}K_{j_{2}j_{3},\beta_{2}\beta_{3}}+K_{ij_{2},\beta\beta_{2}}K_{ ij_{3},\beta_{1}\beta_{3}}\\ &+K_{ij_{3},\beta\beta_{3}}K_{j_{1}j_{2},\beta_{1}\beta_{2}}] \sum_{j_{4},\alpha\in\mathbb{O}}V^{ji_{1}j_{2}j_{3}j_{4},\beta_{1}\beta_{2} \beta_{3}\alpha}y_{j_{4},\alpha}\\ &+\mathcal{O}(\frac{1}{d^{4}}),\end{split} \tag{51}\]
and
\[\begin{split}&\Sigma^{\prime}=\mathbb{E}[f_{i_{1},\mu_{1}}f_{i_{2}, \mu_{2}}]|_{\text{conn.}}=K_{i_{1}i_{2},\mu_{1}\mu_{2}}\\ &-\frac{1}{2}\sum_{\begin{subarray}{c}j_{1}j_{2}j_{3}j_{4}=1\\ j_{1}\nu_{2}\nu_{3}\nu_{4}\end{subarray}}V^{ji_{1}j_{2}j_{3}j_{4},\nu_{1}\nu _{2}\nu_{3}\nu_{4}}K_{i_{1}j_{1},\mu_{1}\nu_{1}}K_{i_{2}j_{2},\mu_{2}\nu_{2}},\\ &\times K_{j_{3}j_{4},\nu_{3}\nu_{4}}\end{split} \tag{52}\]
where \(m_{i,\alpha}^{(\infty)}=\sum_{\beta_{1},\beta_{2}}K_{\alpha\beta_{1}}K^{\beta_{1}\beta_{2}}y_{i,\beta_{2}}\); thus we can refine the kernel regression method by incorporating finite-size corrections:
\[m^{\prime}=\bar{f}=m^{(\infty)}+m^{(1/d)}+\mathcal{O}(\frac{1}{d^{2}}) \tag{53}\]
If we aim to maintain the Gaussian approximation, the corrected covariance, which arises from the higher orders and their interplay, can be described as
\[f\approx\mathcal{GP}(m^{\prime},\Sigma^{\prime}). \tag{54}\]
However, in the large-\(d\) limit, the output distribution can be better approximated via the fourth-order correlator:
\[p(f)=p(f|y_{\alpha},\hat{\mathbb{Q}}_{\mu\nu},\hat{\mathbb{V}}_{\mu\nu\delta \sigma}). \tag{55}\]
such that \(\hat{\mathbb{Q}}=\Sigma^{\prime}\) and \(\hat{\mathbb{V}}\) is given by (see the appendix for more details)
\[\begin{split}&\mathbb{V}_{i_{1}i_{2}i_{3}i_{4},\mu_{1}\mu_{2}\mu_{3} \mu_{4}}=\sum_{j_{1}j_{2}j_{3}j_{4}}\sum_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}V^{ji_{1 }j_{2}j_{3}j_{4},\nu_{1}\nu_{2}\nu_{3}\nu_{4}}\\ &\times K_{i_{1}j_{1},\mu_{1}\nu_{1}}K_{i_{2}j_{2},\mu_{2}\nu_{2}} K_{i_{3}j_{3},\mu_{3}\nu_{3}}K_{i_{4}j_{4},\mu_{4}\nu_{4}}+\mathcal{O}(\frac{1}{d^{5}}) \end{split} \tag{56}\]
## IV Conclusion
In this study, we examined variational quantum circuits, which serve as a model for quantum neural networks, from the perspective of function space instead of parameter space. We explored the impact of over-parameterization through the expansion of width and depth. We observed, theoretically and by simulation, that this leads to a shift of the output distribution of quantum models towards a Gaussian process. We also examined how the finite size of the width can skew the distribution towards non-Gaussianity, and we demonstrated how this can be captured in a closed-form relation using a \(1/d\) expansion, setting the stage for systematic consideration of higher-order corrections.
Note added: This work was presented informally to the scientific community during the APS March Meeting 2023, and toward the completion of this work, we became aware of work with a similar topic on arXiv [4]. |
2304.13135 | MEDNC: Multi-ensemble deep neural network for COVID-19 diagnosis | Coronavirus disease 2019 (COVID-19) has spread all over the world for three
years, but medical facilities in many areas still aren't adequate. There is a
need for rapid COVID-19 diagnosis to identify high-risk patients and maximize
the use of limited medical resources. Motivated by this fact, we proposed the
deep learning framework MEDNC for automatic prediction and diagnosis of
COVID-19 using computed tomography (CT) images. Our model was trained using two
publicly available sets of COVID-19 data. And it was built with the inspiration
of transfer learning. Results indicated that the MEDNC greatly enhanced the
detection of COVID-19 infections, reaching an accuracy of 98.79% and 99.82%
respectively. We tested MEDNC on a brain tumor and a blood cell dataset to show
that our model applies to a wide range of problems. The outcomes demonstrated
that our proposed models attained an accuracy of 99.39% and 99.28%,
respectively. This COVID-19 recognition tool could help optimize healthcare
resources and reduce clinicians' workload when screening for the virus. | Lin Yang, Shuihua Wang, Yudong Zhang | 2023-04-25T20:26:05Z | http://arxiv.org/abs/2304.13135v1 | # MEDNC: Multi-ensemble deep neural network for COVID-19 diagnosis
###### Abstract
Coronavirus disease 2019 (COVID-19) has spread all over the world for three years, but medical facilities in many areas still aren't adequate. There is a need for rapid COVID-19 diagnosis to identify high-risk patients and maximize the use of limited medical resources. Motivated by this fact, we proposed the deep learning framework MEDNC for automatic prediction and diagnosis of COVID-19 using computed tomography (CT) images. Our model was trained using two publicly available sets of COVID-19 data. And it was built with the inspiration of transfer learning. Results indicated that the MEDNC greatly enhanced the detection of COVID-19 infections, reaching an accuracy of 98.79% and 99.82% respectively. We tested MEDNC on a brain tumor and a blood cell dataset to show that our model applies to a wide range of problems. The outcomes demonstrated that our proposed models attained an accuracy of 99.39% and 99.28%, respectively. This COVID-19 recognition tool could help optimize healthcare resources and reduce clinicians' workload when screening for the virus.
deep learning; neural network; ensemble; COVID-19; diagnosis
To detect COVID-19 infections, chest CT scans image the patient's lungs using tomography [9]. There have been cases where PCR testing came back negative but patients were verified COVID-19 positive with CT scans [10], suggesting that chest CT has a higher sensitivity for detecting COVID-19 than RT-PCR. With a sensitivity of over 97% and a specificity of roughly 25%, chest CT has been shown in research to be an accurate diagnostic tool for COVID-19 [11]. Chest CT has therefore been increasingly employed as an alternative to RT-PCR in the clinical pathway for identifying COVID-19.
However, evaluating disorders from CT scans in the traditional way can demand considerable time and effort from clinicians. Thankfully, deep learning for medical image processing may speed up diagnoses and decrease doctors' workload. Deep learning is a frontier of artificial intelligence inspired by the human brain [12]. It may substitute for physicians' years of experience and careful examination in detecting a patient's illness [13], and patients may quickly obtain a less biased perspective after an assessment [14]. Thanks to deep learning technologies, professionals have been able to quickly detect patients with potential disease infections [15].
A variety of distinct deep-learning models have been used to recognize COVID-19. The researchers in Ref. [16] proposed a ResNet50 model to detect COVID-19 from chest CTs; rather than cropping parts of the image, they fed the wavelet coefficients of the entire image into the ResNet backbone, achieving an accuracy of 92.2%. In Ref. [17], five deep CNN models were compared for detecting COVID-19 patients; the researchers improved the classification efficiency of all five models by combining conventional image augmentation with CGAN, and ResNet50 attained the highest accuracy of 82.9%. In Ref. [18], eight pre-trained models were compared for their ability to identify COVID-19 patients; DenseNet201 performed best, with an accuracy of 85%. A CNN design based on SqueezeNet was proposed in Ref. [19] to distinguish COVID-19 CT scans from those of other patients with pneumonia or healthy individuals, achieving an accuracy of 85.03%. Ref. [20] developed FCONet using a pre-trained CNN model as its backbone, reporting nearly perfect average performance. A modified version of a pre-trained AlexNet model was trained on a CT dataset of 361 images in Ref. [21]; the customized CNN model reached the highest accuracy there, at 94%.
Our paper's contributions are as follows:
* We propose four deep-learning neural networks under the MEDNC framework (FFC-MEDNC, FCO-MEDNC, FO-MEDNC, and FFCO-MEDNC) for recognizing COVID-19 from CT scans.
* The accuracy of our proposed FFCO-MEDNC has reached 98.79% and 99.82% on two COVID-19 datasets.
* Our models have also been successful in predicting blood cell and brain tumor images, with accuracies of 99.28% and 99.39%, respectively.

The remainder of this paper is organized as follows: Section 2 addresses materials and methods. Section 3 contains the findings. Section 4 contrasts the outcomes with contemporary methods. Section 5 concludes this study.

## 2 Materials and Methods
This section concentrates on the development and implementation approach for the MEDNC model. We offer approaches for using deep learning to differentiate chest CT scans of COVID-19 illness from non-COVID-19 scans. Figure 1 depicts a flowchart of this process.
### 2.1 The Dataset
A chest CT scan produces a series of images in which the patient's lungs can be seen clearly. Not every image in a CT series shows diseased regions; for example, the lung is largely closed at the start and end of each image series. For COVID-19 anomaly detection, it is necessary to have data samples in which the internal structures of the lung are visible. Since only pictures from the central portion of the CT sequence are usable, researchers have discussed methods for automatically selecting images in which the lung is visible on CT scans [22].
To train a deep learning model on CT image samples stored in DICOM format, the following Python procedure converts DICOM to PNG. First, load the DICOM image with the dicom.read_file() function. Then, apply the rescale intercept and slope from the DICOM image header. Window the image using a window width of 1500 and a window level of -600, together with the information from the image header. Lastly, the cv2.convertScaleAbs() function converts the windowed data to PNG format. The Covid-19 datasets in the following two sections are in PNG format [23].
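A minimal sketch of this conversion is shown below, assuming the pydicom and OpenCV packages are installed; the file names and the use of pydicom.dcmread() in place of the older dicom.read_file() are illustrative choices, not the authors' exact script.

```python
# A minimal sketch of the DICOM-to-PNG conversion described above
# (rescale slope/intercept, lung window at level -600 / width 1500, PNG out).
import pydicom
import numpy as np
import cv2

def dicom_to_png(dicom_path: str, png_path: str,
                 window_level: float = -600, window_width: float = 1500) -> None:
    ds = pydicom.dcmread(dicom_path)               # load the DICOM file
    img = ds.pixel_array.astype(np.float32)

    # Apply the rescale slope/intercept from the header (Hounsfield units).
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    hu = img * slope + intercept

    # Window the image around the lung window.
    lo = window_level - window_width / 2
    windowed = np.clip(hu, lo, lo + window_width)

    # Scale to 8-bit with cv2.convertScaleAbs and write out as PNG.
    scaled = cv2.convertScaleAbs(windowed - lo, alpha=255.0 / window_width)
    cv2.imwrite(png_path, scaled)

dicom_to_png("ct_slice.dcm", "ct_slice.png")       # hypothetical file names
```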
#### 2.1.1 Covid-19 Dataset A
This work employs a CT scan dataset dubbed SARS-CoV-2 [24] for the COVID-19 recognition challenge. It gathers 2481 CT scan pictures of people of both sexes from hospitals in Sao Paulo, Brazil. COVID-19 positivity was found in 1252 of the CT images and was not detected in the remaining 1229 (non-COVID-19).
The resolution of these PNG files, which were created from CT scans, ranges from 104 to 416 pixels on the longest dimension. To guarantee a perfectly balanced dataset, we chose 1229 pictures to represent each of the two categories. One CT scan of a patient with COVID-19 from this dataset is shown in Figure 2a; the arrows point to infected tissue. Figure 2b shows a contrasting CT scan from a patient who did not have COVID-19. The CT image data used in this analysis are listed in Table 1.
Figure 1: Design flowchart for the COVID-19 identification system
#### 2.1.2 Covid-19 Dataset B
As can be seen in Figure 3, this work also applies the suggested MEDNC models to another publicly available CT dataset called COVIDx CT-2A [25]. Random samples of sequentially numbered CT images in PNG format were drawn for both the COVID-19 and non-COVID-19 classes. The CT image data are listed in Table 2.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Class** & **Quantities of Samples** & **Format** \\ \hline COVID-19 & 1229 & PNG \\ Non-COVID-19 & 1229 & PNG \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details about the SARS-CoV-2 dataset of chest CT scan images that were chosen

Figure 2: Patients with and without the COVID-19 virus in their CT scans. (a) A CT scan of a subject with COVID-19. (b) A CT scan of an abnormal (not COVID-19) subject.

\begin{table}
\begin{tabular}{c c c} \hline \hline
**Class** & **Quantities of Samples** & **Format** \\ \hline COVID-19 & 349 & PNG \\ Non-COVID-19 & 349 & PNG \\ \hline \hline \end{tabular}
\end{table}
Table 2: Data from the COVIDx CT-2A dataset of chest CT scan images that were chosen

Figure 3: Patients with and without the COVID-19 virus in their CT scans. (a) A subject with COVID-19. (b) An abnormal (not COVID-19) subject.

### 2.2 Data Preprocessing
To begin, a random 60%, 20%, 10%, and 10% split is made between the training, validation, testing A, and testing B sets from the specified dataset. Second, to accommodate the pixel-value representation necessary for image processing, we normalize the range of pixel values from (0, 255) to (0, 1) to train the deep learning model correctly[26]. It can be characterized as follows:
\[x_{i}^{\prime}=\frac{x_{i}-\min(x)}{\max(x)-\min(x)}, \tag{1}\]
where \(\min(x)\) and \(\max(x)\) are the initial pixel-value bounds of 0 and 255, respectively; \(x\) is the entire image, while \(i\) indexes a single pixel within that image. This rescaling of pixel values is applied to every dataset used for training, validation, and testing.
Moreover, as the input size to the deep learning network must be fixed, all images are rescaled to 224 by 224 [27]. It has been found that bigger datasets can improve deep learning classification accuracy, but a big dataset is not always feasible [28]. Thus, to expand the amount of data without gathering fresh pictures, an augmentation method is utilized [29]. Images from the dataset are augmented with geometric transformations including rotation and flipping.
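As one possible realization of this preprocessing pipeline, the sketch below uses Keras' ImageDataGenerator; the directory layout, batch size, and exact augmentation ranges are illustrative assumptions rather than the paper's reported settings.

```python
# A minimal sketch of rescaling to [0, 1], resizing to 224x224, and
# geometric augmentation (rotation, flipping) with Keras.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values from [0, 255] to [0, 1]
    rotation_range=15,        # geometric augmentation: small rotations
    horizontal_flip=True,     # geometric augmentation: flipping
    vertical_flip=True,
)

train_flow = train_gen.flow_from_directory(
    "data/train",             # hypothetical folder with COVID/non-COVID subfolders
    target_size=(224, 224),   # fixed input size required by the networks
    batch_size=32,
    class_mode="categorical",
)
```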
### 2.3 The Proposed Deep Learning Framework
In this section, we propose a multi-ensemble deep neural network for COVID-19 (MEDNC) and compare four deep learning models under MEDNC for COVID-19 lung CT recognition.
#### 2.3.1 MEDNC framework
MEDNC is a framework inspired by CNNs. It consists of a feature extractor and a classifier, combined with ensemble learning. The feature extractor is constructed from convolutional and pooling layers; a flatten layer, a fully connected layer, and an output layer compose the classifier. The proposed multi-layer ensemble method consists of four possible ensemble models that can improve the accuracy of COVID-19 screening compared with the use of individual models.
In this paper, we use pre-trained feature extractors for convenient modeling, because training a deep learning model for recognizing COVID-19 infection from scratch is difficult given the scarcity of CT scans of COVID-19 subjects. The transfer learning method and several different pre-trained models address this problem [30]. A key benefit of transfer learning is that it allows training with fewer samples and in less time [31], and it allows a newly trained model to benefit from the experience of a previously trained model [32]. Table 3 presents the pre-trained part and the novel portion of our MEDNC.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Feature extractor components & Classifier components & Multi-ensemble technique \\ \hline \(\bullet\) Convolutional layers & \(\bullet\) Flatten layers & \(\bullet\) FFC ensemble \\ \(\bullet\) Pooling layers & \(\bullet\) Fully connected layers & \(\bullet\) FCO ensemble \\ & \(\bullet\) Output layers & \(\bullet\) FO ensemble \\ & & \(\bullet\) FFCO ensemble \\ \hline \end{tabular}
\end{table}
Table 3: Pre-trained and suggested parts of the MEDNC
For the COVID-19 recognition task, sixteen popular CNN models are selected for our customized transfer learning algorithm. These include VGG16, ResNet201, ResNet152v2, ResNet50, ResNet101, ResNet101V2, DenseNet201, MobileNet, MobileNetV3 small, MobileNetV2, XceptionNet, ResNet50V2, InceptionResNetV2, NASNet, and DenseNet169. These models were chosen for their usefulness in computer vision applications, and their effectiveness in medical diagnostics has been reported numerous times [33, 34].
For image classification, the selected sixteen models have already been previously trained on ImageNet, the largest image dataset [35]. Since they were trained on such a large dataset, their learned weights can be applied to medical image recognition, and these sixteen models serve as the foundation for further MEDNC model building. To prevent any loss of information and make the most of the features extracted for training on COVID-19 tasks, the feature extraction layers are frozen after being ImageNet-optimized. Also, to categorize COVID-19 CT images, we pruned the fully connected layers from the original models and created a new classifier network. The details of the suggested classifier are provided below.
A flatten layer transforms the down-sampled feature map into a one-dimensional array; a fully connected layer is built by activating connections between neurons in the preceding and following layers with a Rectified Linear Unit (ReLU) [36]. ReLU helps resolve the issue of vanishing gradients. It is computed with the following equation:
\[f(x)=\begin{cases}0,for\ x<0\\ x,for\ x\geq 0\end{cases} \tag{2}\]
To avoid overfitting, dropout with a ratio of 0.5 is utilized to create a more robust model. A Softmax activation is added to the output layer to determine whether a CT scan shows a COVID-19 infection. In contrast to ReLU, Softmax is typically implemented as a classification function in the final layer of a model [37]. It can be expressed with the following equation.
\[s(x_{i})=\frac{e^{x_{i}}}{\sum_{j=1}^{n}e^{x_{j}}}, \tag{3}\]
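A minimal sketch of one such transfer-learning branch is shown below, assuming a Keras/TensorFlow setup; the choice of ResNet50 as the frozen backbone and the size of the dense layer are illustrative assumptions, while the flatten, dense (ReLU), dropout (0.5), and softmax head follows the description above.

```python
# A minimal sketch of one transfer-learning branch: frozen ImageNet feature
# extractor plus the new classifier head described above.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # freeze the feature extraction layers

x = layers.Flatten()(base.output)           # flatten the down-sampled feature map
x = layers.Dense(128, activation="relu")(x) # fully connected layer with ReLU
x = layers.Dropout(0.5)(x)                  # dropout ratio of 0.5
out = layers.Dense(2, activation="softmax")(x)  # COVID-19 vs non-COVID-19

model = Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```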
Predicting the course of disease with precision is essential in healthcare, since wrong decisions have serious financial and lifelong consequences. A downside of separate deep learning classifiers used alone is the high variance of their predictions: because each model is built and trained independently, different models can give highly variable answers when classifying the same data [38]. A multi-layer ensemble of individual models can address these issues better than the single-layer ensembles suggested before, since it reduces variance and generalizes better than other ensemble techniques and than individual models taken alone. Motivated by this fact, we propose the MEDNC framework for accurately classifying COVID-19. Table 4 summarizes the MEDNC framework.
#### 2.3.2 FFC-MEDNC
One of the MEDNCs we propose is the feature-fully connected multi-layer ensemble deep neural network for COVID-19 (FFC-MEDNC). It comprises two parts. The initial ensemble combines pre-trained models at the feature level. The second component is an ensemble technique that incorporates the feature ensemble model sets at the fully connected layers. The next section describes the FFC-MEDNC model's architecture.
Feature Ensemble
The major objective of ensembling models at the feature level is to capture additional input characteristics that a single model is unable to capture. Therefore, this feature ensemble layer generates a dataset that combines all necessary CT scan input attributes. Assuming \(I=\{i_{1},i_{2},...,i_{n}\}\) is the COVID-19 dataset used as input, the following holds:
\[F_{x}=\{f_{x1},f_{x2},...f_{xn}\}, \tag{4}\]
where \(f_{x1},f_{x2},...f_{xn}\) are the features derived from the same input \(i_{n}\) by the transfer learning models. In this instance, \(n=2\).
As a result, we can express the ensemble of various pre-trained models as follows:
Figure 4: The pre-trained feature extractor and the proposed classifier as a whole model
\[F_{e}=Concatenate[f_{x1},f_{x2},...f_{xn}]. \tag{5}\]
Fully Connected Ensemble
The fully connected ensemble layer brings together the fully connected layers of all proposed CNN models, reducing the number of trainable parameters of the full ensemble model. The goal of this layer is to produce a more accurate probability by combining the results of all fully connected layers. The output of each pre-trained model's fully connected layer \(FC_{xn}\) is used as a separate input for our model. To finish the FFC-MEDNC model, an output layer is added after the fully connected ensemble layer. For our model, we used the categorical cross-entropy loss function, written as follows:
\[CE=-\sum_{i}^{c}t_{i}\log(f(s)_{i}), \tag{6}\]
where \(t\) stands for the desired values, \(f(s)\) stands for the projected values, and \(i\) stands for the label.
In Figure 5 we can see the overall structure of the FFC-MEDNC model.
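The sketch below illustrates the feature-level concatenation of Eq. 5 followed by a fully connected head, in the spirit of FFC-MEDNC; the particular pair of backbones, the pooling choice, and the layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of feature-level concatenation (Eq. 5) followed by a
# fully connected head, in the spirit of FFC-MEDNC.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import ResNet50, DenseNet201

inp = layers.Input(shape=(224, 224, 3))

feats = []
for base in (ResNet50(weights="imagenet", include_top=False),
             DenseNet201(weights="imagenet", include_top=False)):
    base.trainable = False                          # keep ImageNet weights frozen
    feats.append(layers.GlobalAveragePooling2D()(base(inp)))

f_e = layers.Concatenate()(feats)                   # feature ensemble, Eq. (5)

x = layers.Dense(128, activation="relu")(f_e)       # fully connected layer
x = layers.Dropout(0.5)(x)
out = layers.Dense(2, activation="softmax")(x)      # COVID-19 vs non-COVID-19

model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```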
#### 2.3.3 FCO-MEDNC
There are two components to the fully connected-output ensemble deep neural network for COVID-19 recognition (FCO-MEDNC). The first component is a fully connected ensemble that employs a combination of pre-trained models. The second section consists of an output ensemble comprising sets of fully connected ensemble models. The architecture of the FCO-MEDNC model is described in the next section.
Fully Connected Ensemble
Figure 5: The FFC-MEDNC architecture.
At the fully connected ensemble level, we have groups of neural networks (two in each group) with pre-trained feature extractors. The principal purpose of the model ensemble operating at the fully connected level is to aggregate additional input characteristics that a single-model ensemble cannot. In this way, the CT scan input is used by all three sets of models to generate a dataset with all the needed attributes. The FCO-MEDNC model uses the result of the fully connected layer \(FC_{xn}\) of each proposed single CNN model as a distinct input. With \(n=2\), we have:
\[FC_{x}=\{fc_{x1},fc_{x2},...fc_{xn}\}, \tag{7}\]
To get a more precise likelihood of detecting COVID-19 CT scans, the fully connected layers' outputs are merged into one. Consequently, the ensemble of fully connected layers from various pre-trained models can be expressed as follows:
\[FC_{e}=Concatenate[fc_{x1},fc_{x2},...fc_{xn}]. \tag{8}\]
Output Ensemble
Three fully connected ensemble models are put together at the output layer. This method can be shown in the following way:
\[O_{x}=\{O_{x1},O_{x2},...O_{xn}\}, \tag{9}\]
where \(O_{xn}\) is the result of each fully connected ensemble model.
As a result, we can express the ensemble of layers at the output of three fully connected ensemble models as follows:
\[O_{e}=Concatenate[\{O_{x1},O_{x2},...O_{xn}\}], \tag{10}\]
Using this approach, the ensemble model is expected to pick up more features from the combined output to refine its predictions. Figure 6 depicts the FCO-MEDNC model's architecture.
#### 2.3.4 FO-MEDNC
Two sets of ensemble networks make up the feature-output ensemble deep neural network for COVID-19 recognition (FO-MEDNC). The first component is similar to the FFC-MEDNC in that it has an ensemble at the feature level, mixing pre-trained feature extractors. The second component is an output ensemble technique, which integrates three different feature ensemble models into a single model. Figure 7 presents a diagrammatic representation of the FO-MEDNC model's internal structure.
Figure 6: The FCO-MEDNC’s structural design.
#### 2.3.5 FFCO-MEDNC
In addition to the three ensemble methods described above, we also propose a level-3 extension of MEDNC. The architecture of the FFCO-MEDNC is shown in Figure 8. From the figure, we can observe that FFCO-MEDNC has three levels of ensemble sets. First, the feature extractor ensemble is applied. Second, the results of the feature extractor ensemble are combined at the fully connected layer. Finally, the last level of the ensemble, an output ensemble, is used to combine all the input characteristics for COVID-19 screening.
Figure 7: The architecture of the FO-MEDNC.
### Model Evaluation
#### 2.4.1 Experimental Setup
Python 3.7, TensorFlow 2.4.0, Keras 2.3.1, and Scikit-Learn 0.20.4 are used to create the networks in Jupyter notebooks. The models are trained on a desktop PC equipped with an Nvidia RTX 2070S GPU and an Intel Xeon E5-2680 v2 processor.
#### 2.4.2 Confusion Matrix
Figure 9 depicts the confusion matrix, a table that summarizes the results of a prediction for a classification task [39]. Correct and incorrect predictions are counted and placed in one of four categories. In a True Positive (TP) case, both the predicted and actual outcomes are positive. In a False Positive (FP) case, the prediction is positive whereas the actual result is negative. In a True Negative (TN) case, both the predicted and actual outcomes are negative. A False Negative (FN) occurs when the predicted result is negative but the actual result is positive.
Figure 8: The architecture of the FFCO-MEDNC
#### 2.4.3 Classification Metrics
The model's performance was measured with the following five indicators[40]:
Accuracy is the proportion of correct predictions among all predictions.
\[\text{Accuracy}=\frac{\text{TP+TN}}{\text{TP+FP+TN+FN}}. \tag{11}\]
Precision measures how well a model can identify positive data in a sample.
\[\text{Precision}=\frac{\text{TP}}{\text{TP+FP}}. \tag{12}\]
The sensitivity of a model is measured by how well it can identify positive data.
\[\text{Sensitivity}=\frac{\text{TP}}{\text{TP+FN}}. \tag{13}\]
The F1-score is a balanced measure of precision and recall.
\[\text{F1}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}}. \tag{14}\]
Specificity measures the proportion of negative samples correctly identified.
\[\text{Specificity}=\frac{\text{TN}}{\text{TN+FP}}. \tag{15}\]
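The five indicators of Equations (11)-(15) can be computed directly from the confusion matrix counts; a minimal sketch, with illustrative example counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """The five indicators of Equations (11)-(15) from confusion matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # also called recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    specificity = tn / (tn + fp)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "f1": f1, "specificity": specificity}

# Example counts for a balanced two-class test set of 490 scans.
print(classification_metrics(tp=235, fp=10, tn=234, fn=11))
```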
#### 2.4.4 Monte Carlo Cross-Validation
We use Monte Carlo Cross-Validation (MCCV), equivalent to repeated random subsampling CV, as it evaluates a model's efficacy more reliably than a single hold-out validation [41]. Picard and Cook were the first to propose the MCCV approach, which essentially repeats the holdout technique [42]. The model is trained and validated independently multiple times by randomly splitting the dataset into training and validation sets; the results of these validations are then averaged to obtain a final result that is more accurate and valid. Compared with k-fold cross-validation, this method also provides more precise control over the number of training and validation iterations.
Figure 9: A visual depiction of the confusion matrix.
The dataset was randomly split into four parts: 60% for training, 20% for validation, 10% for test subset A, and 10% for test subset B. The proposed models were trained on the training data. After each training epoch, the proposed models are checked against the validation set to ensure they are performing as expected. Next, we evaluate the first-level ensemble models on test subset A and quantify their performance with the criteria above. The complete FFC/FCO/FO/FFCO-MEDNC models were evaluated on test subset B. After 10 iterations, we average the results to ensure a reliable conclusion.
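A minimal sketch of this MCCV protocol, assuming the dataset is given as arrays of samples and labels; the 60/20/10/10 split ratios and ten repetitions follow the text, while the stratified splitting via scikit-learn is an implementation choice.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def mccv_splits(x, y, n_repeats=10, seed=0):
    """Yield random 60/20/10/10 train/validation/testA/testB splits (MCCV)."""
    rng = np.random.RandomState(seed)
    for _ in range(n_repeats):
        # 60% training, 40% held out.
        x_tr, x_rest, y_tr, y_rest = train_test_split(
            x, y, train_size=0.6, stratify=y, random_state=rng.randint(1 << 30))
        # Half of the held-out data (20% of the total) for validation.
        x_val, x_te, y_val, y_te = train_test_split(
            x_rest, y_rest, train_size=0.5, stratify=y_rest,
            random_state=rng.randint(1 << 30))
        # Split the remaining 20% in half: test subsets A and B (10% each).
        x_ta, x_tb, y_ta, y_tb = train_test_split(
            x_te, y_te, train_size=0.5, stratify=y_te,
            random_state=rng.randint(1 << 30))
        yield (x_tr, y_tr), (x_val, y_val), (x_ta, y_ta), (x_tb, y_tb)
```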
## 3 Results
For training purposes, we use six out of sixteen proposed classifiers as an example to train the FFC/FCO/FO-MEDNC, and eight out of sixteen proposed classifiers for the FFCO-MEDNC. Additional feature extractors and classifiers can be added to the MEDNC framework as desired.
### Results of Proposed Pre-Trained Models
On COVID-19 dataset A, we repeated the random partitioning, training, validation, and testing procedure ten times according to MCCV and averaged the results for a more trustworthy conclusion. Table 5 displays the averaged test results of ten holdout runs for the proposed models. The COVID-19 datasets are used for the training of the revised models. At the end of each training epoch, the models are checked for accuracy using a validation set. Since this study investigates the distinction between the COVID-19 and non-COVID-19 classes, two neurons were used at the output layer [43].
#### 3.1.1 Classification Results
From Table 5, we can see that the pre-trained models with the top-6 highest prediction accuracies are MobileNet (95.92%), ResNet101V2 (95.38%), ResNet152V2 (94.29%), MobileNetV2 (94.08%), DenseNet169 (94.02%), and DenseNet201 (93.67%). Acceptable results with over 80% accuracy were also obtained by models like InceptionResNetV2, VGG16, Xception, and VGG19. About 70% accuracy was achieved by ResNet50 and ResNet101, while 50% accuracy was provided by MobileNetV3Small. Table 6 lays out classification results for COVID-19 dataset B. It can be observed that ResNet152V2 (98.67%), ResNet101V2 (98.33%), MobileNet (98.10%), DenseNet201 (97.02%), DenseNet169 (96.74%) and MobileNetV2 (96.33%) occupied the top six highest accuracy position.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & Accuracy & Precision & Sensitivity & F1 & Specificity \\ \hline DenseNet121 & 0.9266 & 0.9303 & 0.9402 & 0.9265 & 0.9701 \\ VGG16 & 0.8694 & 0.8854 & 0.8545 & 0.8728 & 0.8256 \\
**DenseNet201** & **0.9367** & **0.9380** & **0.9102** & **0.9384** & **0.9612** \\ ResNet50 & 0.7041 & 0.7133 & 0.7314 & 0.7269 & 0.7287 \\
**MobileNetV2** & **0.9408** & **0.9306** & **0.9502** & **0.9403** & **0.9320** \\
**ResNet152V2** & **0.9429** & **0.9440** & **0.9442** & **0.9429** & **0.9223** \\ Xception & 0.9031 & 0.9011 & 0.9265 & 0.8999 & 0.8798 \\ VGG19 & 0.9102 & 0.9130 & 0.9410 & 0.9101 & 0.8792 \\ ResNet101 & 0.6980 & 0.7216 & 0.7404 & 0.6897 & 0.6492 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The average results of applying ten iterations of modified pre-trained models to COVID-19 dataset A. Bold indicates the top-6 highest accuracies among the sixteen models.
#### 3.1.2 Confusion Matrix Results
Figure 10 demonstrates that most of the proposed pre-trained models accurately distinguish between CT scans with and without COVID-19. With a 95.71 percent success rate, MobileNet correctly labeled 469 images. ResNet152V2 misclassified 36 of 490 instances (92.65 percent accuracy). With a 93.47 percent success rate, DenseNet201 correctly labeled 228 out of 245 non-COVID-19 images. Only five out of 245 COVID-19 images were not recognized by MobileNetV2 (accuracy of 92.65%).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & Accuracy & Precision & Sensitivity & F1 & Specificity \\ \hline DenseNet121 & 0.9533 & 0.9541 & 0.9533 & 0.9533 & 0.9383 \\ VGG16 & 0.8820 & 0.8742 & 0.8726 & 0.8601 & 0.8849 \\
**DenseNet201** & **0.9702** & **0.9761** & **0.9730** & **0.9784** & **0.9706** \\ ResNet50 & 0.7463 & 0.7447 & 0.7549 & 0.7359 & 0.7460 \\
**MobileNetV2** & **0.9633** & **0.9606** & **0.9752** & **0.9643** & **0.9664** \\
**ResNet152V2** & **0.9867** & **0.9869** & **0.9764** & **0.9679** & **0.9811** \\ Xception & 0.9075 & 0.8605 & 0.9251 & 0.8904 & 0.8841 \\ VGG19 & 0.9241 & 0.9163 & 0.8724 & 0.8905 & 0.8694 \\ ResNet101 & 0.7639 & 0.6172 & 0.8481 & 0.7095 & 0.6917 \\
**ResNet101V2** & **0.9833** & **0.9835** & **0.9827** & **0.9889** & **0.9803** \\ NASNet & 0.9196 & 0.8703 & 0.9257 & 0.9084 & 0.8836 \\ ResNet50V2 & 0.9103 & 0.9665 & 0.8749 & 0.9251 & 0.9728 \\
**MobileNet** & **0.9810** & **0.9803** & **0.9861** & **0.9720** & **0.9807** \\ MobileNetV3Small & 0.5000 & 0 & 0 & 0 & 0.5000 \\ InceptionResNetV2 & 0.9205 & 0.8913 & 0.9324 & 0.9073 & 0.8946 \\
**DenseNet169** & **0.9674** & **0.9683** & **0.9456** & **0.9668** & **0.9685** \\ \hline \hline \end{tabular}
\end{table}
Table 6: The average results of applying ten iterations of modified pre-trained models to COVID-19 dataset B. Bold indicates the top-6 highest accuracies among the sixteen models.
#### 3.1.3 Learning Curve Results
Figure 11 displays the learning curves for a single run of training and validation for the sixteen pre-trained models. The chart shows that MobileNet has the lowest loss rate (10.28 percent) and the highest accuracy (95.92 percent), followed by DenseNet201 with a loss rate of 13.31 percent and ResNet152V2 with 20.69 percent. Compared to the training curves, all validation curves show oscillations, because the training dataset is much larger than the validation dataset. As depicted in the plot, a lower loss rate and higher accuracy were achieved on validation data than on training data. This is because we used a dropout rate of 0.5 during training, which sets 50% of the features to zero, whereas all neurons are utilized during validation, leading to improved validation accuracy.
Figure 10: The confusion matrix for COVID-19 dataset A from a single hold-out run with 16 modified pre-trained models.
Figure 11: The COVID-19 dataset A learning curves for a single holdout run with sixteen differently pre-trained models.
### Results of Proposed MEDNC
#### 3.2.1 Classification Results
To identify COVID-19 CT images, four ensemble strategies were implemented and trained ten times based on MCCV. Table 7 shows that all four MEDNC models have a higher accuracy rate in predicting COVID-19 dataset A than any of the individual pre-trained models. There was an increase of 2.60 percent in accuracy, 1.80 percent in precision, 0.43 percent in sensitivity, 3.69 percent in specificity, and 2.93 percent in the F1-score. The FFCO-MEDNC model has the highest accuracy (98.79%), precision (99.19%), sensitivity (98.32%), and F1-score (98.80%) of the four ensemble models.
Table 8 displays the results of the classification performed on the COVID-19 dataset B. The results demonstrate that the FFCO-MEDNC achieved the highest levels of accuracy (99.82%), sensitivity (99.67%), precision (99.74%), and specificity (99.89%).
#### 3.2.2 Confusion Matrix Results
As shown in the confusion matrix in Figure 12, the number of misclassifications is drastically reduced when the four MEDNC models are used instead of a single CNN model. In this single run, only six out of 248 CT scans were incorrectly labeled by the FFC-MEDNC (97.58% accuracy). The FCO-MEDNC model correctly classified 237 out of 248 CT images (95.56% accuracy), as did the FO-MEDNC model (95.56% accuracy). Only three out of 248 CT scans were mislabeled by the FFCO-MEDNC model (98.79% accuracy).
\begin{table}
\begin{tabular}{l l l l l l} \hline Model & Accuracy & Precision & Sensitivity & F1 & Specificity \\ \hline FFC-MEDNC & 0.9852 & 0.9763 & 0.9804 & 0.9817 & 0.9937 \\ \hline FCO-MEDNC & 0.9758 & 0.9674 & 0.9762 & 0.9836 & 0.9829 \\ \hline FO-MEDNC & 0.9556 & 0.9438 & 0.9571 & 0.9575 & 0.9313 \\ \hline
**FFCO-MEDNC** & **0.9879** & **0.9919** & **0.9832** & **0.9880** & **0.9918** \\ \hline \end{tabular}
\end{table}
Table 7: The average results of MEDNC models for running 10 times on COVID-19 dataset A. Bold suggest the highest accuracy among those models.
\begin{table}
\begin{tabular}{l l l l l l} \hline Model & Accuracy & Precision & Sensitivity & F1 & Specificity \\ \hline FFC-MEDNC & 0.9964 & 0.9892 & 0.9931 & 0.9947 & 0.9946 \\ \hline FCO-MEDNC & 0.9913 & 0.9925 & 0.9924 & 0.9829 & 0.9971 \\ \hline FO-MEDNC & 0.9906 & 0.9913 & 0.9911 & 0.9958 & 0.9872 \\ \hline
**FFCO-MEDNC** & **0.9982** & **0.9974** & **0.9967** & **0.9925** & **0.9989** \\ \hline \end{tabular}
\end{table}
Table 8: The average results of MEDNC models for running 10 times on COVID-19 dataset B. Bold suggest the highest accuracy among those models.
#### 3.2.3 False Discovery Rate Results
The false discovery rate (FDR) is the number of false positive test findings divided by the total number of positive test results:
\[\text{FDR}=\frac{\text{FP}}{\text{TP}+\text{FP}}. \tag{16}\]
Table 9 shows that FFC-MEDNC achieved the lowest FDR (0.0081) among the six pre-trained models and all ensemble models.
#### 3.2.4 Learning Curve Results
Figure 13 demonstrates that, compared to the individually pre-trained models, MEDNC produces significantly more impressive learning curves. FFC-MEDNC has the lowest loss rate at 1.69%, compared with FO-MEDNC, FCO-MEDNC, and FFCO-MEDNC, which have loss rates of 10.18%, 6.61%, and 1.71%, and accuracy rates of 96.82%, 97.97%, and 99.46%, respectively. In addition, the difference between the final training and validation values is minimal in the FFCO-
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Model** & **FDR** \\ \hline ResNet152V2 & 0.0543 \\ \hline DenseNet201 & 0.0815 \\ \hline MobileNet & 0.0380 \\ \hline MobileNetV2 & 0.0462 \\ \hline ResNet101V2 & 0.0679 \\ \hline DenseNet169 & 0.0715 \\ \hline
**FFC-MEDNC** & **0.0081** \\ \hline FCO-MEDNC & 0.0175 \\ \hline FO-MEDNC & 0.0726 \\ \hline FFCO-MEDNC & 0.0160 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Comparison of the FDR between the COVID-19 dataset A’s pre-trained and ensemble models in a single run. The lowest FDR is highlighted in bold.
Figure 12: The confusion matrix for MEDNC models on COVID-19 dataset A from a single run.
MEDNC model when the training and validation loss reaches a stable stage, indicating that this model is well fit. In contrast, the gaps between the training and validation loss curves of FO-MEDNC are less favorable.
### Comparison With State-of-the-Art Approaches
Twenty cutting-edge methods were chosen for analysis in this work. When comparing their methods and findings to ours regarding COVID-19 recognition, we found several research gaps. Most of the studies used a dataset containing fewer than a thousand COVID-19 images, which is insufficient for creating effective and reliable deep learning techniques. There is a risk that the effectiveness of the proposed methods will suffer from a lack of data.
First, most studies had an issue with data imbalance, where one class had more images than the other, which hurts model precision. Second, several additional pre-trained models had not yet been applied to COVID-19 classification. Moreover, in COVID-19 studies, researchers have not paid enough attention to the effects of using various ensemble methods.
In contrast, we used a perfectly balanced CT scan dataset of over 2,000 images. In total, sixteen different pre-trained deep learning models were examined for this project, including some that had not been used in the COVID-19 recognition domain. Additionally, four ensemble models for identifying COVID-19 CT images have been proposed. Table 10 compiles the results from all the different models. Our research shows that the proposed model is more effective than the listed classifiers.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & & \multicolumn{4}{c}{**Optimized Model**} \\ **Author(s)** & **Model(s)** & **Accuracy** & **F1** & **Recall** & **Precision** \\ \hline Matsuyama, E [16] & ResNet50 + wavelet coefficients & 92.2\% & 91.5\% & 90.4\% & / \\ Loey, M [17] & ResNet50 + augmentation + CGAN & 82.91\% & / & 77.66\% & / \\ Do, C [18] & Modified DenseNet201 & 85\% & / & 79\% & 91\% \\ Polsinelli, M [19] & Modified SqueezeNet & 85.03\% & 86.20\% & 87.55\% & 85.01\% \\ Panwar, H [44] & Modified VGG19 & 94.04\% & / & / & / \\ Mishra, A [45] & Customized DenseNet201, VGG16, ResNet50, InceptionV3 and DenseNet121 & 88.3\% & 86.7\% & 90.15\% & / \\ & Customized Xception, ResNet50, VGG16, and Inception-v3 & 96.97\% & / & / & / \\ Maghdid, H [21] & Customized CNN, AlexNet & 94.1\% & / & 100\% & / \\ & Customized XceptionNet, MobileNet, ResNet50, DenseNet121, InceptionV3, VGG16 & 94.12\% & 96.11\% & 96.11\% & 96.11\% \\ Alshazly, H [47] & CovidResNet and CovidDenseNet & 93.87\% & 95.70\% & 92.49\% & 99.13\% \\ & Customized DenseNet201, InceptionV3, ResNet101, ResNet50 & 95.34\% & / & / & / \\ Jaiswal, A [49] & Modified DenseNet201 & 96.25\% & 96.29\% & 96.29\% & 96.29\% \\ Sanagavarapu, S [50] & Ensembled ResNets & 87\% & 84\% & 81\% & 91\% \\ & A large-scale bi-directional generative adversarial network & / & / & 92\% & / \\ Sarker, L [52] & Modified DenseNet121 & 96.49\% & 96\% & 96\% & 96\% \\ Shan, F [53] & VB-Net & 91.6\% & / & / & / \\ Wang, S [54] & Modified DenseNet & 85\% & 90\% & 79\% & / \\ Gozes, O [55] & Modified ResNet50 & / & / & 94\% & / \\ Wang, S [56] & Modified Inception & 79.3\% & 63\% & 83\% & / \\ Li, L [30] & Modified ResNet50 & / & / & 90\% & / \\ Proposed & FFCO-MEDNC & 98.79\% & 98.80\% & 98.32\% & 99.19\% \\ \hline \hline \end{tabular}
\end{table}
Table 10: Comparison with state-of-the-art approaches. A “/” marks metrics not reported by the original study.
Figure 13: Learning curves for MEDNC models on COVID-19 dataset A in one hold-out run.
### Generality of Proposed Method
To demonstrate the generalizability of our model, we apply our proposed model to two additional datasets, one brain tumor dataset, and one blood cell dataset.
#### 3.4.1 Brain Tumor Dataset
Scanning the brain with radiation is one approach to finding malignant growths [57]. Taking an X-ray to obtain images of the inside of the body is quick and causes no discomfort to the patient [58]. The Brain Tumor Dataset [59] (JPG format) is an X-ray dataset used for this brain tumor recognition task. Among these images, 2341 were confirmed brain tumor cases, while the remaining 2087 were healthy. To ensure that the data is precisely balanced, we chose 2087 images to represent each category.
#### 3.4.2 Blood Cell Dataset
Since white blood cells (WBC) develop immunity to fight pathogens and foreign chemicals, it is vital to distinguish the various WBC subsets. Accordingly, we use our proposed models to perform WBC classification. Specifically, we use 12,500 JPEG microscope images from a blood cell dataset [60] covering three different cell types. For each of the three cell types studied, we randomly picked 2497 images. The three kinds of cells are eosinophils, lymphocytes, and monocytes.
We tested our MEDNC (FFC-MEDNC, FCO-MEDNC, FO-MEDNC, and FFCO-MEDNC) on these two datasets, and the results below show the adaptability of our models to medical images other than COVID-19.
#### 3.4.3 Brain Tumor Results
The six pre-trained models with the highest prediction accuracy, according to Table 13, are MobileNet (98.87%), ResNet101V2 (98.08%), ResNet152V2 (97.92%), MobileNetV2 (96.42%), DenseNet169 (96.81%), and DenseNet201 (96.17%). The accuracy of the four MEDNC models in predicting the brain tumor dataset is higher than that of any of the individual pre-trained models, as shown in Table 14. Accuracy increased by 0.41 percent, precision by 0.51 percent, sensitivity by 0.91 percent, specificity by 1.23 percent, and F1-score by 0.56 percent. The FFCO-MEDNC model has the greatest accuracy (99.39%), sensitivity (99.72%), precision (99.41%), and F1 score
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Class** & **Quantities of Samples** & **Format** \\ \hline Brain tumor & 2087 & JPG \\ Healthy & 2087 & JPG \\ \hline \hline \end{tabular}
\end{table}
Table 11: Information about the Brain Tumor dataset that was chosen
Figure 14: Patients with and without brain tumors in their X-ray scans. (a) An X-ray of a subject with a brain tumor. (b) An X-ray of a healthy subject.
Figure 15: Blood cell dataset. (a) Eosinophil cells. (b) Lymphocyte cells. (c) Monocyte cells.
(99.53%) among the four ensemble models.
#### 3.4.4 Blood Cell Results
According to Table 15, the six pre-trained models with the highest prediction accuracy are MobileNet (96.77%), MobileNetV2 (93.05%), DenseNet169 (92.38%), DenseNet201 (90.76%), ResNet101V2 (89.37%), and ResNet152V2 (86.75%). As shown in Table 16, the accuracy of the four MEDNC models in predicting the blood cell dataset is higher than that of any of the individual pre-trained models. Accuracy improved by 2.26 percent, precision by 3.60 percent, sensitivity by 6.83 percent, specificity by 2.21 percent, and F1-score by 3.00 percent. Among the four ensemble models, the FFCO-MEDNC model has the highest accuracy (99.28%), sensitivity (99.84%), and F1 score (99.52%). The FCO-MEDNC has the highest precision (99.63%) and specificity (99.67%).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model** & **Accuracy** & **Precision** & **Sensitivity** & **F1** & **Specificity** \\ \hline DenseNet169 & 0.9238 & 0.9220 & 0.9183 & 0.9188 & 0.9383 \\ DenseNet201 & 0.9076 & 0.9026 & 0.8967 & 0.8974 & 0.9168 \\ ResNet152V2 & 0.8675 & 0.8556 & 0.8550 & 0.8535 & 0.9180 \\ ResNet101V2 & 0.8937 & 0.8140 & 0.8663 & 0.8088 & 0.9137 \\ MobileNetV2 & 0.9305 & 0.9236 & 0.9175 & 0.9190 & 0.9762 \\ MobileNet & 0.9677 & 0.9603 & 0.9784 & 0.9667 & 0.9242 \\ \hline \hline \end{tabular}
\end{table}
Table 15: The average results of 10 runs of the six modified pre-trained models with the highest accuracy on the blood cell dataset.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model** & **Accuracy** & **Precision** & **Sensitivity** & **F1** & **Specificity** \\ \hline DenseNet169 & 0.9681 & 0.9874 & 0.9615 & 0.9679 & 0.9910 \\ DenseNet201 & 0.9617 & 0.9705 & 0.9629 & 0.9598 & 0.9865 \\ ResNet152V2 & 0.9792 & 0.9710 & 0.9728 & 0.9804 & 0.9839 \\ ResNet101V2 & 0.9808 & 0.9811 & 0.9894 & 0.9740 & 0.9934 \\ MobileNet & 0.9887 & 0.9862 & 0.9847 & 0.9785 & 0.9935 \\ MobileNetV2 & 0.9642 & 0.9553 & 0.9613 & 0.9748 & 0.9711 \\ \hline \hline \end{tabular}
\end{table}
Table 13: The average results of 10 runs of the six modified pre-trained models with the highest accuracy utilizing brain tumor data.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & Accuracy & Precision & Sensitivity & F1 & Specificity \\ \hline FFC-MEDNC & 0.9928 & 0.9864 & 0.9985 & 0.9927 & 0.9991 \\ \hline FCO-MEDNC & 0.9907 & 0.9925 & 0.9914 & 0.9885 & 0.9953 \\ \hline FO-MEDNC & 0.9856 & 0.9799 & 0.9847 & 0.9813 & 0.9756 \\ \hline
**FFCO-MEDNC** & **0.9939** & **0.9941** & **0.9972** & **0.9953** & **0.9981** \\ \hline \hline \end{tabular}
\end{table}
Table 14: The average results of 10 runs of MEDNC models using brain tumor data.
### Training Duration and Model Size Outcomes
From Table 17, we can see that MobileNet required the least time to finish one epoch of training, at 25 seconds, while NASNet was the slowest, taking 32 seconds per epoch. Table 17 also displays the model sizes: the proposed FFCO-MEDNC is the largest at 392.5 MB, while the smallest, MobileNetV3Small, is only 13.26 MB.
## 4 Conclusion
The use of deep learning techniques for diagnosing COVID-19 has recently become a topic of intense interest. Exciting progress has been made, and new insights continue to emerge, thanks to the use of multiple neural networks in this area of study. Among the most important forms of data used to detect COVID-19 symptoms are CT scan images. With the purpose of identifying COVID-19, a plethora of deep learning models have been built and effectively applied. Using chest CT scans as training data, this paper adapts and creates sixteen deep-learning models for recognizing COVID-19. As a follow-up, MEDNC (FFC-MEDNC, FCO-MEDNC, FO-MEDNC, and FFCO-MEDNC) were proposed to improve the performance of screening COVID-19 using the
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Model** & **Accuracy** & **Training Time (Second/Epoch)** & **Parameters (MB)** \\ \hline ResNet50 & 0.7659 & 27 & 103.99 \\ \hline ResNet101V2 & 0.9506 & 29 & 177.08 \\ \hline ResNet152V2 & 0.9456 & 28 & 237.5 \\ \hline ResNet50V2 & 0.9286 & 26 & 103.9 \\ \hline ResNet101 & 0.7302 & 28 & 177.19 \\ \hline VGG16 & 0.8728 & 26 & 63.08 \\ \hline VGG19 & 0.8746 & 26 & 79.87 \\ \hline DenseNet121 & 0.9304 & 29 & 83.28 \\ \hline Densenet169 & 0.9483 & 29 & 86.31 \\ \hline DenseNet201 & 0.9349 & 30 & 84.37 \\ \hline InceptionResNetV2 & 0.8971 & 30 & 213.77 \\ \hline Xception & 0.8978 & 27 & 93.48 \\ \hline NASNet & 0.9108 & 32 & 25.26 \\ \hline MobileNet & 0.9536 & 25 & 19.33 \\ \hline MobileNetV2 & 0.9465 & 27 & 17.52 \\ \hline MobileNetV3Small & 0.5000 & 28 & 13.26 \\ \hline FFC-MEDNC (Ours) & 0.9852 & 31 & 356.4 \\ \hline FCO-MEDNC (Ours) & 0.9758 & 31 & 346.39 \\ \hline FO-MEDNC (Ours) & 0.9556 & 31 & 332.7 \\ \hline FFCO-MEDNC(Ours) & 0.9879 & 31 & 392.5 \\ \hline \end{tabular}
\end{table}
Table 17: A comprehensive evaluation of all models’ training times and model sizes for COVID-19 dataset A.
aforementioned two COVID-19 CT scan datasets. The MEDNC models were found to be superior to the customized pre-trained models on the COVID-19 image recognition task.
The findings indicate that FFCO-MEDNC obtained an accuracy of 98.79%, a sensitivity of 98.32%, a precision of 99.19%, and a specificity of 99.18% for COVID-19 dataset A. On COVID-19 dataset B, these improved to a 99.82 percent accuracy rate, 99.67 percent sensitivity, 99.74 percent precision, and 99.89 percent specificity.
## 5 Future Work
Although the proposed COVID-19 recognition system performed exceptionally well, this study still has some caveats. First, the classification outcome may change depending on which 2D slice is chosen from a 3D CT scan. Second, additional preprocessing techniques, such as image enhancement, have not been incorporated into this investigation; image enhancement could be used in future work to check whether there is room for improvement. These results suggest that a fully automated and rapid diagnosis of COVID-19 using deep learning is possible with the help of the proposed MEDNC models. This discovery will help doctors save both time and money in their efforts to detect COVID-19 infections.
**Author Contributions:**
Lin Yang: Conceptualization, Software, Validation, Investigation, Data Curation, Writing - Original Draft, Visualization
Shuihua Wang: Methodology, Formal analysis, Resources, Data Curation, Writing - Review & Editing, Supervision, Funding acquisition
Yudong Zhang: Conceptualization, Methodology, Formal analysis, Investigation, Writing - Review & Editing, Supervision, Project administration.
**Funding:** This paper is partially supported by Medical Research Council, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF Accelerator Award, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); BBSRC, UK (RM32G0178B8).
**Data Availability Statement:** The two datasets can be accessed from [https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset](https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset) and [https://www.kaggle.com/hgunraj/covidxct](https://www.kaggle.com/hgunraj/covidxct)
**Conflict of Interest:** The authors declare no conflict of interest.
|
2307.16092 | Feature Transportation Improves Graph Neural Networks | Graph neural networks (GNNs) have shown remarkable success in learning
representations for graph-structured data. However, GNNs still face challenges
in modeling complex phenomena that involve feature transportation. In this
paper, we propose a novel GNN architecture inspired by
Advection-Diffusion-Reaction systems, called ADR-GNN. Advection models feature
transportation, while diffusion captures the local smoothing of features, and
reaction represents the non-linear transformation between feature channels. We
provide an analysis of the qualitative behavior of ADR-GNN, that shows the
benefit of combining advection, diffusion, and reaction. To demonstrate its
efficacy, we evaluate ADR-GNN on real-world node classification and
spatio-temporal datasets, and show that it improves or offers competitive
performance compared to state-of-the-art networks. | Moshe Eliasof, Eldad Haber, Eran Treister | 2023-07-29T23:31:18Z | http://arxiv.org/abs/2307.16092v2 | # ADR-GNN: Advection-Diffusion-Reaction Graph Neural Networks
###### Abstract
Graph neural networks (GNNs) have shown remarkable success in learning representations for graph-structured data. However, GNNs still face challenges in modeling complex phenomena that involve advection. In this paper, we propose a novel GNN architecture based on Advection-Diffusion-Reaction systems, called ADR-GNN. Advection models the directed transportation of information, diffusion captures the local smoothing of information, and reaction represents the non-linear transformation of information in channels. We provide an analysis of the qualitative behavior of ADR-GNN, that shows the benefit of combining advection, diffusion, and reaction. To demonstrate its efficacy, we evaluate ADR-GNN on real-world node classification and spatio-temporal datasets, and show that it improves or offers competitive performance compared to state-of-the-art networks.
## 1 Introduction
Recently, GNNs have been linked to ordinary and partial differential equations (ODEs and PDEs) in a series of works [103; 14; 27; 70; 90; 26; 32]. These works propose to view GNN layers as the time discretization of ODEs and PDEs, and as such they offer both theoretical and practical advantages. First, ODE and PDE based models allow to reason about the behavior of existing GNNs. For instance, as suggested in [14], it is possible to view GCN [45] and GAT [88] as discretizations of the non-linear heat equation. This observation helps to analyze and understand the oversmoothing phenomenon in GNNs [61; 62; 13]. Second, ODE and PDE based GNNs pave the path to the construction and design of GNNs that satisfy desired properties, such as energy-preservation [27; 70], attraction and repulsion forces modeling [90; 26], anti-symmetry [32], as well as reaction-diffusion systems [22]. Nonetheless, the aforementioned architectures still rely on controlled diffusion or wave propagation, as well as non-linear pointwise convolutions. Therefore, as discussed in [69], while there are methods that can alleviate oversmoothing, they may lack expressiveness. We now provide a simple example, known as the graph node feature transportation task, where diffusion, wave propagation, and reaction (pointwise) networks may fail. In this task, the goal is to gather the node information (i.e., features) from several nodes to a single node. Clearly, no diffusion process can express or model such a phenomenon, because diffusion spreads and smooths, rather than transports information [28]. Likewise, a wave-propagation approach cannot express such a phenomenon, because it lacks directionality, which is required for this task. An instance of this problem is illustrated in Figure 1, where we show the source and target node features, and possible advection weights that can achieve the desired target. Later, in Figure 2, we show that popular operators such
as diffusion or reaction cannot model the transition from the source to the target node features, while advection can. Furthermore, the concept of advection appears in many real-world problems and data, such as traffic-flow and-control [8], quantity transportation in computational biology [87], and rainfall forecasting [75]. Motivated by the previously discussed observations and examples, we propose, in addition to learning and combining diffusion and reaction terms, to develop a learnable _advection_ term that is suited to model directed information transportation from the data. The resulting architecture, called _ADR-GNN_, can therefore express various phenomena, from advection, diffusion, to pointwise reactions, as well as their compositions.
**Contributions.** The contributions of this paper are three-fold. Firstly, we develop a novel graph neural advection operator that is mass preserving, stable, and consistent with continuous advection PDEs. This operator enables the modeling of phenomena that involve the transportation of information on graphs by learning the direction of the transportation. Secondly, we propose ADR-GNN, a GNN based on learnable advection-diffusion-reaction (ADR) systems, that can express a wide range of phenomena, including learned directional information flow, diffusion, and pointwise reactions. Thirdly, we demonstrate the efficacy of ADR-GNN on node classification, and spatio-temporal forecasting datasets, achieving improved or competitive results compared to state-of-the-art models.
## 2 Related Work
**Advection-Diffusion-Reaction.** An Advection-Diffusion-Reaction system is a mathematical model that describes the simultaneous existence of three processes: (i) the advection (transport) of information in a medium, (ii) the diffusion (smoothing) of information within that medium, and (iii) pointwise (self) reactions. These systems are used to study and model a wide range of physical, chemical, and biological phenomena. For example, ADR systems can be utilized to track and estimate the location of fish swarms [2], to model ecological trends [24], and to model turbulent flames in supernovae [44]. However, the aforementioned works rely on a low-dimensional, hand-crafted, non-neural ADR system, typically determined by trial and error, often requiring a domain expert. In contrast, in this paper we propose to learn the ADR system for various graph types and tasks.
**Graph Neural Networks as Dynamical Systems.** Adopting the interpretation of convolutional neural networks (CNNs) as discretizations of ODEs and PDEs [72; 20; 97] to GNNs, works like GODE [103], GRAND [14], PDE-GCN\({}_{\text{D}}\)[27], GRAND++ [86] and others, propose to view GNN layers as time steps in the integration of the non-linear heat equation, allowing to control the diffusion (smoothing) in the network, to understand oversmoothing [61; 62; 13] in GNNs. Thus, works like [21; 56; 55; 26] propose to utilize a _learnable_ diffusion term, thereby alleviating oversmoothing. Other architectures like PDE-GCN\({}_{\text{M}}\)[27] and GraphCON [70] propose to mix diffusion and oscillatory processes (e.g., based on the wave equation) to avoid oversmoothing by introducing a feature energy preservation mechanism. Nonetheless, as noted in [69], besides alleviating oversmoothing, it is also important to design GNN architectures with improved expressiveness. Recent examples of such networks are [32] that propose an anti-symmetric GNN to alleviate over-squashing [3], and [90; 22] that formulate a reaction-diffusion GNN to enable non-trivial pattern growth.
Figure 1: An example of node feature transportation on a graph. Applying the advection weights in (a) to the source (b), yields the target (c). Darker edge colors in (a) indicate greater advection weights.
In this paper, we build on the properties of ADR PDEs, which, in addition to modeling diffusive and reactive processes, also capture advective processes such as the transportation of node features.
On another note, CNNs and GNNs are also used to accelerate PDE solvers [66; 53; 50; 12; 73], as well as to generate [74] and compute [6] physical simulations. In this paper, we focus on the view of GNNs as the discretization of ADR PDEs, rather than using GNNs to solve PDEs.
**Advection on Graphs.** Advection is a term used in Physics to describe the transport of a substance in a medium. In the context of graphs, advection is used to express the transport of information (features) on the graph nodes. The underlying process of advection is described by a continuous PDE, and several graph discretization techniques [15; 39] are available. The advection operator has shown its effectiveness in classical (i.e., non-neural) graph methods, from blood-vessel simulations [25], to traffic flow prediction [11]. In this paper, we develop a neural advection operator that is combined with neural diffusion and reaction operators, called ADR-GNN.
## 3 Advection-Diffusion-Reaction Graph Neural Networks
In this section, we first describe the general outline of a continuous ADR system in Section 3.1, and present its graph discrete analog, named _ADR-GNN_ in Section 3.2. We then discuss each component in ADR-GNN in detail in Sections 3.3-3.4.
**Notations.** We define a graph by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is a set of \(n\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is a set of \(m\) edges. We denote the 1-hop neighborhood of the \(i\)-th node by \(\mathcal{N}_{i}\), and the node features by \(\mathbf{U}\in\mathbb{R}^{n\times c}\), where \(c\) is the number of features. The symmetric graph Laplacian reads \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), and the symmetric normalized Laplacian is given by \(\hat{\mathbf{L}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{L}\mathbf{D}^{-\frac{1}{2}}\), where \(\mathbf{D}\) is the degree matrix.
### Continuous Advection-Diffusion-Reaction Systems
The continuous PDE that describes an ADR system is given by:
\[\frac{\partial U}{\partial t}=\underbrace{\nabla\cdot(VU)}_{\text{Advection}}+ \underbrace{K\Delta U}_{\text{Diffusion}}+\underbrace{f(U,X,\theta_{r})}_{ \text{Reaction}},\ X\in\Omega,\ t\in[0,T], \tag{1}\]
accompanied by initial conditions \(U(X,t=0)\) and boundary conditions. Here, \(U(X,t)=[u_{1}(X,t),\ldots,u_{c}(X,t)]:\mathbb{R}^{\Omega\times[0,T]}\to \mathbb{R}^{c}\) is a density function, written as a vector of scalar functions \(u_{s}(X,t),\ s=1,\ldots,c\), that depend on the initial location \(X\) and time \(t\). The spatial domain \(\Omega\) can be \(\mathbb{R}^{d}\) or a manifold \(\mathcal{M}\subseteq\mathbb{R}^{d}\). From a neural network perspective, \(u_{s}\) is referred to as a channel, interacting with other channels. The left-hand side of Equation (1) is a time derivative that represents the change in features in time, as discussed in Section 2. The right-hand side includes three terms:
* **Advection.** Here, \(V\) denotes a velocity function that transports the density \(U\) in space, and \(\nabla\cdot\) is the divergence operator.
* **Diffusion.** We denote the continuous Laplacian operator by \(\Delta\). The Laplacian is scaled with a diagonal matrix \(K=\mathrm{diag}(\kappa_{1},\ldots,\kappa_{c})\in\mathbb{R}^{c\times c},\ \ \kappa_{i}\geq 0\) of non-negative diffusion coefficients, each independently applied to its corresponding channel in \(U\).
* **Reaction.** Here, \(f(U,X,\theta_{r})\) is a non-linear pointwise function parameterized by \(\theta_{r}\).
### Advection-Diffusion-Reaction on Graphs
Equation (1) is defined in the continuum. We now use a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) to discretize \(\Omega\). The nodes \(\mathcal{V}\) can be regarded as a discretization of \(X\), that is, the \(i\)-th node is located in \(\mathbf{X}_{i}\), and the edges \(\mathcal{E}\) represent the topology of \(\Omega\). Then, the _spatial_, graph discretization of Equation (1) is:
\[\frac{d\mathbf{U}(t)}{dt} = \mathbf{DIV}\left(\mathbf{V}\left(\mathbf{U}(t),t;\boldsymbol{ \theta}_{a}(t)\right)\mathbf{U}(t)\right)-\hat{\mathbf{L}}\mathbf{U}(t) \mathbf{K}(t;\boldsymbol{\theta}_{d}(t))+f(\mathbf{U}(t),\mathbf{X},t; \boldsymbol{\theta}_{r}(t)) \tag{2a}\] \[\mathbf{U}(0) = \mathbf{U}^{(0)}=g_{\text{in}}(\mathbf{X},\boldsymbol{\theta}_{0}) \tag{2b}\]
Here, \(\mathbf{U}(t)\in\mathbb{R}^{n\times c}\) is a matrix that describes the node features at time \(t\). The advection term depends on the velocity function \(\mathbf{V}\) parameterized by learnable weights \(\mathbf{\theta}_{a}(t)\). The precise discretization of the advection operator \(\mathbf{DIV}(\mathbf{V}\cdot)\) is discussed in Section 3.4. The diffusion is discretized using the symmetric normalized Laplacian1\(\hat{\mathbf{L}}\) that is scaled with a diagonal matrix with non-negative learnable diffusion coefficients on its diagonal \(\mathbf{K}(t;\mathbf{\theta}_{d}(t))=\mathrm{diag}(\mathrm{hardtanh}(\mathbf{\theta}_ {d}(t),0,1))\geq 0\), where \(\mathrm{hardtanh}(\mathbf{\theta}_{d}(t),0,1)\) clamps each element in \(\mathbf{\theta}_{d}\in\mathbb{R}^{c}\) to be between 0 and 1. The reaction term \(f\) from Equation (2a) is a pointwise non-linear function realized by a multilayer-perceptron (MLP) parameterized by learnable weights \(\mathbf{\theta}_{r}(t)\). To obtain initial node embedding \(\mathbf{U}^{(0)}\in\mathbb{R}^{n\times c}\) from the input features \(\mathbf{X}\in\mathbb{R}^{n\times c_{in}}\), we use a fully-connected layer \(g_{in}\) in Equation (2b).
Footnote 1: In PDE theory, the Laplacian is a negative operator, while in graph theory it is positive. Therefore it is required to multiply \(\hat{\mathbf{L}}\) by a negative sign in Equation (2a) compared to Equation (1).
In this work we focus on static and temporal node-level tasks, and we note that typically, \(c\), the number of hidden channels of \(\mathbf{U}(T)\in\mathbb{R}^{n\times c}\), is different than \(c_{out}\), the number of channels of the target output \(\mathbf{Y}\in\mathbb{R}^{n\times c_{out}}\). Therefore the output of neural network, \(\tilde{\mathbf{Y}}\), is given by
\[\tilde{\mathbf{Y}}=g_{\mathrm{out}}(\mathbf{U}(T),\mathbf{\theta}_{ \mathrm{out}})\in\mathbb{R}^{n\times c_{out}}, \tag{3}\]
where \(g_{\mathrm{out}}\) is a fully-connected layer, parameterized by learnable weights \(\mathbf{\theta}_{\mathrm{out}}\).
**The qualitative behavior of ADR-GNN.** The ADR-GNN model combines the learning of three powerful terms. Namely, the learned parameters are \(\mathbf{\theta}_{a}\) the advection parameters, \(\mathbf{\theta}_{d}\) the diffusion parameters, and \(\mathbf{\theta}_{r}\) the reaction parameters. Therefore, an ADR-GNN layer can express and model various phenomena. For example, if we set \(\mathbf{\theta}_{d}(t)=0\), then there is no diffusion in the system, and the method is dominated by advection and reaction. If on the other hand, one learns a very small advection (i.e., the learned \(\mathbf{V}\), to be discussed later, tends to retain all features in place), then a reaction-diffusion oriented system is obtained. Similarly, other combinations of advection, diffusion, and reaction can be achieved, because of the learning of the parameters of the system. Thus, ADR-GNN can be adopted to solve a host of problems, depending on dynamics and patterns mandated by the data, as we show later in our experiments in Section 4.
### From an ODE to a Graph Neural Network - Time Discretization of ADR-GNN
Equation (2) _spatially_ discretizes the PDE in Equation (1), yielding an ODE defined on the graph. The _time_ discretization of the ODE yields a sequential process that can be thought of as layers of neural networks [36; 19; 91]. That is, upon discrete time integration of Equation (2a), we replace the notion of time \(t\) with \(l\) layers, and a step size \(h\), that is a positive scalar hyperparameter.
While it is possible to use many ODE discretization methods (see, e.g., [36; 98; 19; 14] and references within), in various applications where an ADR system arises, from flow in porous media [23], to PDE-based image segmentation [89], and multiphase flow [43], an operator-splitting (OS) [4] is utilized. We therefore also use an OS time discretization for Equation (2a), that yields a graph neural ADR layer, summarized in Algorithm 1. Composing several neural ADR layers leads to ADR-GNN. We further discuss the properties of the OS approach in Appendix A. The exact discretizations of the ADR terms are derived in Section 3.4.
```
Input: Node features \(\mathbf{U}^{(l)}\in\mathbb{R}^{n\times c}\)
Output: Updated node features \(\mathbf{U}^{(l+1)}\in\mathbb{R}^{n\times c}\)
Advection: \(\mathbf{U}^{(l+1/3)}=\mathbf{U}^{(l)}+h\mathbf{DIV}(\mathbf{V}(\mathbf{U}^{(l)},t;\mathbf{\theta}_{a}^{(l)})\mathbf{U}^{(l)})\)
Diffusion: \(\mathbf{U}^{(l+2/3)}=\mathrm{mat}\left(\left(\mathbf{I}+h\mathbf{K}(t;\mathbf{\theta}_{d}^{(l)})\otimes\hat{\mathbf{L}}\right)^{-1}\mathrm{vec}(\mathbf{U}^{(l+1/3)})\right)\)
Reaction: \(\mathbf{U}^{(l+1)}=\mathbf{U}^{(l+2/3)}+hf(\mathbf{U}^{(l+2/3)},\mathbf{U}^{(0)},t;\mathbf{\theta}_{r}^{(l)})\)
```
**Algorithm 1** Graph Neural Advection-Diffusion-Reaction Layer
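A minimal PyTorch sketch of the splitting scheme in Algorithm 1, assuming helper routines advection_step, diffusion_step, and reaction_step that realize Equations (5), (7), and (8); concrete sketches of these helpers follow in Section 3.4.

```python
import torch

def adr_layer(U, U0, edge_index, h, advection_step, diffusion_step, reaction_step):
    """One operator-splitting ADR layer (Algorithm 1).

    U: (n, c) node features at layer l; U0: (n, c) initial embedding;
    edge_index: (2, m) directed edges; h: positive step size.
    """
    U = advection_step(U, edge_index, h)  # U^{(l+1/3)}: Eq. (5)
    U = diffusion_step(U, edge_index, h)  # U^{(l+2/3)}: implicit step, Eq. (7)
    U = reaction_step(U, U0, h)           # U^{(l+1)}:  pointwise MLP, Eq. (8)
    return U
```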
### Discretized Graph Operators
We now elaborate on the discretized graph operators utilized in our ADR-GNN, summarized in Algorithm 1. Besides the combination of the learnable advection, diffusion, and reaction terms,
which, to the best of our knowledge, was not studied in the context of GNNs, the main innovation here is the _consistent, mass preserving, and stable_ discretization of the advection operator. We further utilize recent time integration techniques of the diffusion and reaction terms, discussed below.
**Advection.** To define the graph discretized advection operator, we extend the non-learnable advection operator from [15], into a learnable, neural advection operator. Our advection operator transports node features based on learned directed edge weights (velocities) \(\{(\mathbf{V}_{i\to j},\mathbf{V}_{j\to i})\}_{(i,j)\in\mathcal{E}}\), where each \(\mathbf{V}_{i\to j},\mathbf{V}_{j\to i}\in\mathbb{R}^{c}\), such that \(0\leq\mathbf{V}_{i\to j}\leq 1\). The notation \(i\to j\) implies that the weight transfers features from the \(i\)-th to \(j\)-th node. We further demand that the outbound edge weights associated with every node, per channel, sum to 1, i.e., \(\sum_{j\in\mathcal{N}_{i}}\mathbf{V}_{i\to j}=1\). This constraint suggests that a node can at most transfer the total of its features to other nodes. First, we define the discretized divergence from Equation (2a), that operates on the learned edge weights \(\mathbf{V}\):
\[\mathbf{DIV}_{i}(\mathbf{V}\mathbf{U})=\sum_{j\in\mathcal{N}_{i}}\mathbf{V}_{ j\to i}\odot\mathbf{U}_{j}-\mathbf{U}_{i}\odot\sum_{j\in\mathcal{N}_{i}} \mathbf{V}_{i\to j}=\sum_{j\in\mathcal{N}_{i}}\mathbf{V}_{j\to i}\odot \mathbf{U}_{j}-\mathbf{U}_{i}, \tag{4}\]
where \(\odot\) is the elementwise Hadamard product. Then, the graph advection operator in Algorithm 1 is:
\[\mathbf{U}_{i}^{(l+1/3)}=\mathbf{U}_{i}^{(l)}+h\mathbf{DIV}_{i}(\mathbf{V}^{(l )}\mathbf{U}^{(l)})=\mathbf{U}_{i}^{(l)}+h\left(\sum_{j\in\mathcal{N}_{i}} \mathbf{V}_{j\to i}^{(l)}\odot\mathbf{U}_{j}^{(l)}-\mathbf{U}_{i}^{(l)}\right). \tag{5}\]
Namely, the updated node features are obtained by adding the \(\mathbf{V}_{j\to i}\) weighted inbound node features, while removing the \(\mathbf{V}_{i\to j}\) weighted outbound node features, and \(h\) is a positive step size. The scheme in Equation (5) is the forward Euler discretization. We now show that the proposed graph neural advection operator is _mass conserving_, _stable_ and _consistent_2. By satisfying these properties, our advection operator is adherent to the continuous advection PDE [48].
Footnote 2: See stability definition and proofs in Appendix B.
**Lemma 1**.: _Define the mass of the graph node features \(\mathbf{U}^{(l)}\in\mathbb{R}^{n\times c}\) as the scalar \(\rho^{(l)}=\sum\mathbf{U}^{(l)}\). Then the advection operator in Equation (5) is mass conserving, i.e., \(\rho^{(l+1/3)}=\rho^{(l)}\)._
**Lemma 2**.: _The advection operator in Equation (5) is stable._
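A minimal PyTorch sketch of the advection update of Equation (5), together with a numerical check of Lemma 1; the edge weights V are assumed to be normalized so that the outbound weights of every node sum to one per channel (as produced by Algorithm 2 below), and here they are generated randomly for illustration.

```python
import torch

def advection_step(U, edge_index, V, h):
    """Eq. (5): U_i <- U_i + h * (sum_{j in N_i} V_{j->i} * U_j - U_i).

    U: (n, c) features; edge_index: (2, m) directed edges (src -> dst);
    V: (m, c) edge weights, normalized over the outbound edges of each src node.
    """
    src, dst = edge_index
    inbound = torch.zeros_like(U).index_add_(0, dst, V * U[src])
    return U + h * (inbound - U)

# Numerical check of Lemma 1 (mass conservation).
torch.manual_seed(0)
n, k, c = 8, 3, 4                                  # nodes, out-degree, channels
src = torch.arange(n).repeat_interleave(k)         # every node has k outbound edges
dst = torch.randint(0, n, (n * k,))
edge_index = torch.stack([src, dst])
raw = torch.rand(n * k, c)                         # unnormalized edge weights
denom = torch.zeros(n, c).index_add_(0, src, raw)  # per-node, per-channel sums
V = raw / denom[src]                               # outbound weights now sum to 1
U = torch.rand(n, c)
U_next = advection_step(U, edge_index, V, h=0.5)
print(torch.allclose(U.sum(0), U_next.sum(0)))     # True: the mass rho is conserved
```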
To learn a _consistent_ advection operator, that is, an operator that mimics the directional behavior of the advection in Equation (1), we craft an edge weight \(\mathbf{V}\) learning mechanism, shown in Algorithm 2, that yields direction-oriented weights, i.e., we ensure that \(\mathbf{V}_{i\to j}\neq\mathbf{V}_{j\to i}\), unless they are zeroes.
**Input:** Node features \(\mathbf{U}^{(l)}\in\mathbb{R}^{n\times c}\)
**Output:** Edge weights \(\mathbf{V}_{i\to j}^{(l)},\mathbf{V}_{j\to i}^{(l)}\in\mathbb{R}^{c}\)
\(\mathbf{Z}_{ij}^{(l)}=\mathrm{ReLU}(\mathbf{U}_{i}^{(l)}\mathbf{A}_{1}^{(l)} +\mathbf{U}_{j}^{(l)}\mathbf{A}_{2}^{(l)})\mathbf{A}_{3}^{(l)}\quad\mathrm{ and}\quad\mathbf{Z}_{ji}^{(l)}=\mathrm{ReLU}(\mathbf{U}_{j}^{(l)}\mathbf{A}_{1}^{(l)}+ \mathbf{U}_{i}^{(l)}\mathbf{A}_{2}^{(l)})\mathbf{A}_{3}^{(l)}\)
\(\mathbf{V}_{i\to j}^{(l)}=\mathrm{ReLU}(\mathbf{Z}_{ij}^{(l)}-\mathbf{Z}_{ji} ^{(l)})\mathbf{A}_{4}^{(l)}\quad\mathrm{and}\quad\mathbf{V}_{j\to i}^{(l)}= \mathrm{ReLU}(-\mathbf{Z}_{ij}^{(l)}+\mathbf{Z}_{ji}^{(l)})\mathbf{A}_{4}^{(l)}\)
\(\mathbf{V}_{i\to j}^{(l)}\leftarrow\frac{\exp(\mathbf{V}_{i\to j}^{(l)})}{\sum_ {k\in\mathcal{N}_{i}}\exp(\mathbf{V}_{i\to k}^{(l)})}\quad\mathrm{and}\quad \mathbf{V}_{j\to i}^{(l)}\leftarrow\frac{\exp(\mathbf{V}_{j\to i}^{(l)})}{ \sum_{k\in\mathcal{N}_{j}}\exp(\mathbf{V}_{j\to k}^{(l)})}\)
**Algorithm 2** Learning directional edge weights.
Here, \(\boldsymbol{\theta}_{a}^{(l)}=\{\mathbf{A}_{1}^{(l)},\mathbf{A}_{2}^{(l)}, \mathbf{A}_{3}^{(l)},\mathbf{A}_{4}^{(l)}\}\) are learnable fully connected layers, and the \(\exp\) is computed channel-wise. We note that the sign of \(\mathbf{Z}_{ij}-\mathbf{Z}_{ji}\) is opposite than that of \(-\mathbf{Z}_{ij}+\mathbf{Z}_{ji}\) in Algorithm 2. Hence, after the \(\mathrm{ReLU}(\cdot)\) activation, one of the edge weights, either \(\mathbf{V}_{i\to j}\) or \(\mathbf{V}_{j\to i}\) is guaranteed to be equal to zero, and the other will be non-negative. This allows the architecture to create significant asymmetry in the edge weights \(\mathbf{V}\), as also seen in Figure 1.
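A minimal PyTorch sketch of Algorithm 2, assuming edge_index lists every undirected edge in both directions and that the fully connected layers \(\mathbf{A}_{1},\ldots,\mathbf{A}_{4}\) preserve the channel dimension; the index-based normalization implements the channel-wise softmax over the outbound edges of each node.

```python
import torch
import torch.nn as nn

class DirectionalEdgeWeights(nn.Module):
    """Algorithm 2: learn normalized, direction-oriented edge weights V."""

    def __init__(self, c):
        super().__init__()
        self.A1 = nn.Linear(c, c, bias=False)
        self.A2 = nn.Linear(c, c, bias=False)
        self.A3 = nn.Linear(c, c, bias=False)
        self.A4 = nn.Linear(c, c, bias=False)

    def forward(self, U, edge_index):
        src, dst = edge_index
        # Z_ij and Z_ji swap the roles of the two endpoints of each edge.
        z_ij = self.A3(torch.relu(self.A1(U[src]) + self.A2(U[dst])))
        z_ji = self.A3(torch.relu(self.A1(U[dst]) + self.A2(U[src])))
        # After the ReLU, at most one direction of each edge pair is non-zero.
        v = self.A4(torch.relu(z_ij - z_ji))
        # Channel-wise softmax over the outbound edges of each source node.
        e = torch.exp(v)
        denom = torch.zeros_like(U).index_add_(0, src, e)
        return e / denom[src]
```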
**Diffusion.** To discretize the diffusion term from Equation (2a), both explicit and implicit time discretizations can be used [4]. An explicit forward Euler discretization yields the following layer:
\[\mathbf{U}^{(l+2/3)}=\mathbf{U}^{(l+1/3)}-h\left(\hat{\mathbf{L}}\mathbf{U}^{(l +1/3)}\mathbf{K}^{(l)}\right). \tag{6}\]
However, an explicit scheme requires using a small step size \(h>0\), as it is marginally stable [4]. We therefore harness an implicit scheme, which guarantees the stability of the diffusion 3, and reads:
\[\mathbf{U}^{(l+2/3)}=\mathrm{mat}\left((\mathbf{I}+h\mathbf{K}^{(l)}\otimes\hat{ \mathbf{L}})^{-1}\mathrm{vec}(\mathbf{U}^{(l+1/3)})\right). \tag{7}\]
Here, \(\otimes\) is the Kronecker product, \(\mathrm{vec}()\) is a flattening operator, and \(\mathrm{mat}()\) reshapes a vector to a matrix. The computation of \(\mathbf{U}^{(l+2/3)}\) requires the solution of a linear system, solved by conjugate gradients4[31, 4]. In our experiments we found 5 iterations to be sufficient.
Footnote 4: We note that the matrix \(\mathbf{I}+h\mathbf{K}_{l}\otimes\hat{\mathbf{L}}\) is positive definite and invertible, because the identity matrix is positive definite, \(h\) is positive, \(\mathbf{K}_{l}\) is non-negative, and the graph Laplacian \(\hat{\mathbf{L}}\) is positive semi-definite.
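A minimal sketch of the implicit diffusion step of Equation (7); since \(\mathbf{K}\) is diagonal, the Kronecker system decouples into one SPD linear system \((\mathbf{I}+h\kappa_{s}\hat{\mathbf{L}})\mathbf{u}_{s}=\mathbf{b}_{s}\) per channel, each solved here with a few conjugate gradient iterations. The sparse Laplacian is assumed to be precomputed.

```python
import torch

def diffusion_step(B, L_hat, kappa, h, n_iters=5):
    """Implicit diffusion, Eq. (7): solve (I + h * kappa_s * L_hat) u_s = b_s per channel.

    B: (n, c) right-hand side U^{(l+1/3)}; L_hat: sparse (n, n) normalized Laplacian;
    kappa: (c,) non-negative diffusion coefficients.
    """
    U = B.clone()
    for s in range(B.shape[1]):
        def A(v):  # SPD operator of the s-th channel system
            return v + h * kappa[s] * torch.sparse.mm(L_hat, v.unsqueeze(1)).squeeze(1)
        x = B[:, s].clone()      # warm start from the right-hand side
        r = B[:, s] - A(x)       # initial residual
        p, rs = r.clone(), r.dot(r)
        for _ in range(n_iters):  # a few CG iterations suffice in practice
            if rs < 1e-12:        # already converged (e.g., kappa_s = 0)
                break
            Ap = A(p)
            alpha = rs / p.dot(Ap)
            x, r = x + alpha * p, r - alpha * Ap
            rs_new = r.dot(r)
            p, rs = r + (rs_new / rs) * p, rs_new
        U[:, s] = x
    return U
```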
**Reaction.** Our reaction term is realized using MLPs. Recent works showed that utilizing both additive and multiplicative MLPs yields improved performance [42, 22, 7]. Hence, we define:
\[f(\mathbf{U}^{(l+2/3)},\mathbf{U}^{(0)},\mathbf{X},t;\mathbf{\theta}^{(l)}_{r})= \sigma\left(\mathbf{U}^{(l+2/3)}\mathbf{R}^{(l)}_{1}+\mathrm{tanh}(\mathbf{U }^{(l+2/3)}\mathbf{R}^{(l)}_{2})\odot\mathbf{U}^{(l+2/3)}+\mathbf{U}^{(0)} \mathbf{R}^{(l)}_{3}\right), \tag{8}\]
as our reaction term in Equation (2a). Here, \(\mathbf{\theta}^{(l)}_{r}=\{\mathbf{R}^{(l)}_{1},\mathbf{R}^{(l)}_{2},\mathbf{R} ^{(l)}_{3}\}\) are trainable fully-connected layers, and \(\sigma\) is non-linear activation function (ReLU on our experiments), that can also be coupled with batch-normalization. The reaction term is integrated via forward Euler, as shown in Algorithm 1.
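A minimal PyTorch sketch of the reaction term of Equation (8), with \(\mathbf{R}_{1},\mathbf{R}_{2},\mathbf{R}_{3}\) as linear layers; batch normalization is omitted for brevity.

```python
import torch
import torch.nn as nn

class ReactionTerm(nn.Module):
    """Eq. (8): additive and multiplicative MLPs plus an embedding of U^{(0)}."""

    def __init__(self, c):
        super().__init__()
        self.R1 = nn.Linear(c, c)
        self.R2 = nn.Linear(c, c)
        self.R3 = nn.Linear(c, c)

    def forward(self, U, U0):
        # sigma(U R1 + tanh(U R2) * U + U0 R3), with sigma = ReLU
        return torch.relu(self.R1(U) + torch.tanh(self.R2(U)) * U + self.R3(U0))
```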
## 4 Experimental Results
We demonstrate our ADR-GNN on two types of tasks on real-world datasets: node classification, and spatio-temporal node forecasting. Architectures and training details are provided in Appendix C, and the complexity of ADR-GNN is discussed in Appendix D. We use a grid search to select hyperparameters, discussed in Appendix E. Datasets details and statistics are reported in Appendix F. Throughout all the experiments, the top three models according to the mean metric are highlighted by **First**, **Second**, **Third**. Overall, we propose the following ADR-GNN architectures:
* ADR-GNN\({}_{S}\). Here we follow a similar approach to typical neural networks, where different weights are learned for each layer. From a dynamical system perspective, this can be interpreted as an unrolled ADR iteration [57]. This architecture is suitable for 'static' datasets that do not involve temporal information, such as Cora [59], and is specified in Appendix C.1.
* ADR-GNN\({}_{T}\). A time-dependent ADR-GNN that is suitable for temporal datasets. In addition to the functionality of ADR-GNN\({}_{S}\), it utilizes temporal embedding, discussed in Appendix C.2.
### Node Classification
**Homophilic graphs.** We experiment with the Cora [59], Citeseer [76], and Pubmed [60] datasets. We use the 10 splits from [65] with train/validation/test split ratios of \(48\%/32\%/20\%\), and report their average accuracy in Table 1. In Appendix G.1 we also provide the accuracy standard deviation. As a comparison, we consider multiple recent methods, such as GCN [45], GAT [88], Geom-GCN [65], APPNP [46], JKNet [94], MixHop [1], WRGAT [82], GCNII [18], PDE-GCN [27], NSD [10], H2GCN [102], GGCN [95], C&S [40], DMP [96], GREAD [22], LINKX [51], and ACMII [55]. We see that our ADR-GNN\({}_{S}\) outperforms all methods on the Cora and Pubmed datasets, and comes within 0.12% accuracy of the best-performing PDE-GCN on Citeseer.
**Heterophilic graphs.** While our ADR-GNN offers competitive accuracy on homophilic datasets, as discussed in Section 2, ADR systems are widely used to model non-smooth phenomena and patterns, which often appear in heterophilic datasets by definition [65]. We therefore utilize 10 heterophilic datasets from various sources. In Table 2 we compare the average accuracy of our ADR-GNN\({}_{\mathrm{S}}\) with recent GNNs on Squirrel, Film, and Chameleon from [67], as well as the Cornell, Texas, and Wisconsin datasets from [65], using the 10 splits from [65] with train/validation/test split ratios of \(48\%/32\%/20\%\). We include more comparisons and the accuracy standard deviation in Appendix G.1. In addition to the previously considered methods, we also compare with FAGCN [9], GraphCON [70], GPR-GNN [21], GRAFF [26], ACMP-GCN [90], and G\({}^{2}\)[71]. We see that ADR-GNN\({}_{\mathrm{S}}\) offers accuracy that is in line with recent state-of-the-art methods. In addition, we evaluate ADR-GNN\({}_{\mathrm{S}}\) on the Twitch-DE, deezer-europe, Penn94, and arXiv-year datasets from [52, 51] to further demonstrate the efficacy of our method, in Appendix G.2.
### Spatio-Temporal Node Forecasting
Classical ADR models are widely utilized to predict and model spatio-temporal phenomena [29; 2]. We therefore now evaluate our temporal ADR-GNN\({}_{\text{T}}\) on several spatio-temporal node forecasting datasets. To this end, we harness the software package PyTorch-Geometric-Temporal [68], which offers a graph machine learning pipeline for spatio-temporal graph tasks. In our experiments, we use the Chickenpox Hungary, PedalMe London, and Wikipedia Math datasets from [68], as well as the traffic speed prediction datasets METR-LA [41] and PEMS-BAY [16].
For the first three datasets, we follow the incremental training mode, mean-squared-error (MSE) loss, and testing procedure from [68]. We report the performance of ADR-GNN\({}_{\text{T}}\) and other models, in terms of MSE, in Table 3. We compare with several recent methods, namely, DCRNN [49], GConv [77], GC-LSTM [17], DyGrAE [85; 84], EGCN [64], A3T-GCN [101], T-GCN [99], MPNN LSTM [63], and AGCRN [5]. Our results in Table 3 show improvement over the considered models, further revealing the significance of neural ADR systems on graphs, offered by our ADR-GNN\({}_{\text{T}}\).
On the METR-LA and PEMS-BAY datasets, we follow the same training and testing procedures and mean-absolute-error (MAE) loss as in [49]. We report the MAE, root mean squared error (RMSE), and mean absolute percentage error (MAPE). To demonstrate the effectiveness of ADR-GNN\({}_{\text{T}}\) for varying time frame predictions, we report the results on 3, 6, and 12 future frame traffic speed prediction, where each time frame equates to 5 minutes, in Table 4. We compare ADR-GNN\({}_{\text{T}}\) with various methods, from 'classical' (non-neural) approaches such as historical averaging (HA), VAR [54], and SVR [80], to neural methods like FC-LSTM [83], DCRNN [49], Graph WaveNet [93], ASTGCN [34], STSGCN [81], GMAN [100], MTGNN [92], GTS [78], and STEP [79]. We find that our ADR-GNN\({}_{\text{T}}\) offers lower (better) metrics than the considered methods. For instance, on METR-LA, ADR-GNN\({}_{\text{T}}\) reduces the MAE achieved by the recent STEP method from 3.37 to 3.19.
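For reference, the three reported metrics can be written compactly as in the sketch below. This is a generic implementation of the standard formulas; our simplification is that the masking of missing sensor readings used in [49] is omitted.

```python
# Standard traffic-forecasting metrics (generic sketch; missing-value masking omitted).
import torch

def mae(pred, target):
    return (pred - target).abs().mean()

def rmse(pred, target):
    return ((pred - target) ** 2).mean().sqrt()

def mape(pred, target):
    # Assumes target != 0; in practice zero sensor readings are masked out first.
    return ((pred - target) / target).abs().mean() * 100.0
```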
### Ablation Studies
**Synthetic Feature Transportation.** The benefits of diffusion and reaction terms are well known in GNNs (see [30; 14; 22] and references within). However, the significance of neural advection had not been studied in GNNs prior to our work, to the best of our knowledge. Therefore, and following the
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Cora & Cite. & Pub. \\ Homophily & 0.81 & 0.80 & 0.74 \\ \hline GCN & 85.77 & 73.68 & 88.13 \\ GAT & 86.37 & 74.32 & 87.62 \\ GCNII\({}^{\dagger}\) & 88.49 & 77.13 & **90.30** \\ Geom-GCN\({}^{\dagger}\) & 85.27 & **77.99** & 90.05 \\ APPNP & 87.87 & 76.53 & 89.40 \\ JKNet & 85.25 & 75.85 & 88.94 \\ MixHop & 87.61 & 76.26 & 85.31 \\ WRGAT & 88.20 & 76.81 & 88.52 \\ PDE-GCN\({}^{\dagger}\) & **88.60** & **78.48** & 89.93 \\ NSD\({}^{\dagger}\) & 87.14 & 77.14 & 89.49 \\ GGCN & 87.95 & 77.14 & 89.15 \\ H2GCN & 87.87 & 77.11 & 89.49 \\ C\&S & **89.05** & 76.22 & 89.74 \\ GRAFF\({}^{\dagger}\) & 88.01 & 77.30 & 90.04 \\ DMP\({}^{\dagger}\) & 86.52 & 76.87 & 89.27 \\ GREAD\({}^{\dagger}\) & 88.57 & 77.60 & **90.23** \\ LINKX & 84.64 & 73.19 & 87.86 \\ ACMII\({}^{\dagger}\) & 88.25 & 77.12 & 89.71 \\ \hline ADR-GNN\({}_{\mathrm{S}}\) & **89.43** & **78.36** & **90.55** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Node classification accuracy (\%) on _homophilic_ datasets. \(\dagger\) denotes the maximal accuracy of several proposed variants.

Table 2: Node classification accuracy (\%) on _heterophilic_ datasets (Squirrel, Film, Chameleon, Cornell, Texas, Wisconsin). \(\dagger\) denotes the maximal accuracy of several proposed variants. Full results with standard deviations are given in Table 11 in Appendix G.1.
discussion of the task of feature transportation in Section 1 and Figure 1, we now compare the behavior of the advection, diffusion, and reaction terms on this task. Although this experiment is conceptually simple, it is evident from Figure 2 that the diffusion and reaction terms in GNNs are limited in modeling such a behavior. This result, however, is not surprising. Employing diffusion smooths node features rather than directly _transferring_ them. Similarly, the reaction term can only learn to scale the node features in this experiment. On the contrary, Figure 2 shows that the advection term, which by definition transports information, achieves an exact fit. More details about the experiment are given in Appendix G.3.
**The Impact of Advection, Diffusion, and Reaction.** We study the influence of each of the proposed terms in Equation (2a) on real-world datasets, independently and jointly. The results, reported in Table 5, further show the significance of the advection term. For homophilic datasets like Cora, we see only a minor accuracy improvement when incorporating the advection term. This is in line with known findings regarding the benefits of diffusion for homophilic datasets [30; 14]. More importantly, we see that for heterophilic datasets like Chameleon, as well as traffic prediction datasets like PEMS-BAY, utilizing the advection term significantly improves the performance of the network.
**The Influence of the Number of Layers.** The design of ADR-GNN can alleviate oversmoothing in two ways. First, by learning the diffusion coefficients \(\mathbf{K}\), ADR-GNN controls the amount of smoothing, and can achieve no smoothing at all if \(\mathbf{K}\) is zero, depending on the data. Second, note that the advection and reaction terms can increase the frequency of the node features, because they are not limited to smoothing processes. To verify our observation, we evaluate ADR-GNN\({}_{\mathrm{S}}\) on Cora and Citeseer with 2 to 64 layers, to see if its performance degrades as more layers are added, an issue that is associated with oversmoothing. We report the obtained accuracy in Figure 3, where no performance drop is evident. For reference, we also report the results obtained with GCN [45]. In addition, we define and report the measured Dirichlet energy in Appendix G.4, which shows that ADR-GNN does not oversmooth.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Chickenpox Hungary & PedalMe London & Wikipedia Math \\ \hline DCRNN & 1.124 \(\pm\) 0.015 & 1.463\(\pm\) 0.019 & **0.679 \(\pm\) 0.020** \\ GConvGRU & 1.128 \(\pm\) 0.011 & 1.622 \(\pm\) 0.032 & **0.657 \(\pm\) 0.015** \\ GConvLSTM & 1.121 \(\pm\) 0.014 & **1.442 \(\pm\) 0.028** & 0.777 \(\pm\) 0.021 \\ GC-LSTM & 1.115 \(\pm\) 0.014 & **1.455 \(\pm\) 0.023** & 0.779 \(\pm\) 0.023 \\ DyGrAE & 1.120 \(\pm\) 0.021 & 1.455 \(\pm\) 0.031 & 0.773 \(\pm\) 0.009 \\ EGCN-H & **1.113 \(\pm\) 0.016** & 1.467 \(\pm\) 0.026 & 0.775 \(\pm\) 0.022 \\ EGCN-O & 1.124 \(\pm\) 0.009 & 1.491 \(\pm\) 0.024 & 0.750 \(\pm\) 0.014 \\ A3T-GCN & **1.114 \(\pm\) 0.008** & 1.469 \(\pm\) 0.027 & 0.781 \(\pm\) 0.011 \\ T-GCN & 1.117 \(\pm\) 0.011 & 1.479 \(\pm\) 0.012 & 0.764 \(\pm\) 0.011 \\ MPNN LSTM & 1.116 \(\pm\) 0.023 & 1.485 \(\pm\) 0.028 & 0.795 \(\pm\) 0.010 \\ AGCRN & 1.120 \(\pm\) 0.010 & 1.469 \(\pm\) 0.030 & 0.788 \(\pm\) 0.011 \\ \hline ADR-GNN\({}_{\mathrm{T}}\) & **0.817 \(\pm\) 0.012** & **0.598 \(\pm\) 0.050** & **0.571 \(\pm\) 0.014** \\ \hline \hline \end{tabular}
\end{table}
Table 3: The predictive performance of spatio-temporal neural networks evaluated by average MSE of 10 experimental repetitions and standard deviations, calculated on 10% forecasting horizons.
Figure 2: Source and target node features, and their fit using advection, diffusion, and reaction.
## 5 Limitations
While our ADR-GNN allows modeling data that involves directed transportation of features, it comes with an additional cost, as it requires computing the directed edge weights to mimic the behavior of the advection PDE. This adds complexity to each layer in the form of MLPs, described in Algorithm 2. Furthermore, as we show in our experiments, incorporating the advection into GNNs yields minor improvement when the data are governed by diffusion, e.g., for the Pubmed dataset.
## 6 Summary and Discussion
In this paper, we present a novel GNN architecture that is based on the Advection-Diffusion-Reaction PDE, called ADR-GNN. We develop a graph neural advection operator that mimics the continuous advection operator, and compose it with learnable diffusion and reaction terms.
We discuss and analyze the properties of ADR-GNN and its flexibility in modeling various phenomena. In particular, we show that the main advantage of the graph advection operator is its ability to transport information over the graph edges through the layers, a behavior that is hard to model using the diffusion and reaction terms that have been used in the literature. To demonstrate the effectiveness of ADR-GNN, we experiment with a total of 18 real-world datasets, from homophilic and heterophilic node classification to spatio-temporal node forecasting datasets.
While the gains observed on homophilic datasets are relatively modest, the performance improvements demonstrated on heterophilic datasets are significant, offering a 5% accuracy increase in some cases. Moreover, when applied to spatio-temporal node forecasting datasets, our ADR-GNN exhibits notable enhancements in the evaluated metrics compared to other methods. This progress can be attributed to the inherent suitability of ADR-GNN for tasks involving directional transportation of features, making it an intuitive choice for modeling such scenarios.
## References
* Abu-El-Haija et al. [2019] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In _international conference on machine learning_, pages 21-29. PMLR, 2019.
* Adam and Sibert [2004] M Shiham Adam and John R Sibert. Use of neural networks with advection-diffusion-reaction models to estimate large-scale movements of skipjack tuna from tagging data. Technical report, Marine Research Centre, Ministry of Fisheries, Agriculture and Marine Resources, 2004.
* Alon and Yahav [2021] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In _International Conference on Learning Representations_, 2021.
* Ascher [2008] Uri M Ascher. _Numerical methods for evolutionary differential equations_. SIAM, 2008.
* Bai et al. [2020] Lei Bai, Lina Yao, Can Li, Xianzhi Wang, and Can Wang. Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting. _Advances in Neural Information Processing Systems_, 33, 2020.
* de Avila Belbute-Peres et al. [2020] Filipe de Avila Belbute-Peres, Thomas D. Economon, and J. Zico Kolter. Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction. In _International Conference on Machine Learning (ICML)_, 2020.
* Ben-Shaul et al. [2023] Ido Ben-Shaul, Tomer Galanti, and Shai Dekel. Exploring the approximation capabilities of multiplicative neural networks for smooth functions. _arXiv preprint arXiv:2301.04605_, 2023.
* Betts [2001] J.T. Betts. _Practical Methods for Optimal Control using Nonlinear Programming_. Advances in Design and Control. SIAM, Philadelphia, 2001.
* Bo et al. [2021] Deyu Bo, Xiao Wang, Chuan Shi, and Huawei Shen. Beyond low-frequency information in graph convolutional networks. In _AAAI_. AAAI Press, 2021.
* Bodnar et al. [2022] Cristian Bodnar, Francesco Di Giovanni, Benjamin Paul Chamberlain, Pietro Lio, and Michael M Bronstein. Neural sheaf diffusion: A topological perspective on heterophily and oversmoothing in gnns. _arXiv preprint arXiv:2202.04579_, 2022.
* Borovitskiy et al. [2021] Viacheslav Borovitskiy, Iskander Azangulov, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth, and Nicolas Durrande. Matern gaussian processes on graphs. In _International Conference on Artificial Intelligence and Statistics_. PMLR, 2021.
* Brandstetter et al. [2022] Johannes Brandstetter, Daniel E. Worrall, and Max Welling. Message passing neural PDE solvers. In _International Conference on Learning Representations_, 2022.
* Cai and Wang [2020] Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. _arXiv preprint arXiv:2006.13318_, 2020.
* [14] Benjamin Paul Chamberlain, James Rowbottom, Maria Gorinova, Stefan Webb, Emanuele Rossi, and Michael M Bronstein. GRAND: Graph neural diffusion. In _International Conference on Machine Learning (ICML)_, pages 1407-1418. PMLR, 2021.
* [15] Airlie Chapman and Airlie Chapman. Advection on graphs. _Semi-Autonomous Networks: Effective Control of Networked Systems through Protocols, Design, and Modeling_, pages 3-16, 2015.
* [16] Chao Chen, Karl Petty, Alexander Skabardonis, Pravin Varaiya, and Zhanfeng Jia. Freeway performance measurement system: mining loop detector data. _Transportation Research Record_, 1748(1):96-102, 2001.
* [17] Jinyin Chen, Xuanheng Xu, Yangyang Wu, and Haibin Zheng. GC-LSTM: Graph Convolution Embedded LSTM for Dynamic Link Prediction. _arXiv preprint arXiv:1812.04206_, 2018.
* [18] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In Hal Daume III and Aarti Singh, editors, _Proceedings of the 37th International Conference on Machine Learning_, volume 119 of _Proceedings of Machine Learning Research_, pages 1725-1735. PMLR, 13-18 Jul 2020.
* [19] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. _CoRR_, abs/1806.07366, 2018.
* [20] Weikai Chen, Xiaoguang Han, Guanbin Li, Chao Chen, Jun Xing, Yajie Zhao, and Hao Li. Deep rbfnet: Point cloud feature learning using radial basis functions. _arXiv preprint arXiv:1812.04302_, 2018.
* [21] Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. In _International Conference on Learning Representations_, 2021.
* [22] Jeongwhan Choi, Seoyoung Hong, Noseong Park, and Sung-Bae Cho. Gread: Graph neural reaction-diffusion equations. _arXiv preprint arXiv:2211.14208_, 2022.
* [23] Keith H Coats. A note on impes and some impes-based simulation models. _SPE Journal_, 5(03):245-251, 2000.
* [24] Chris Cosner. Reaction-diffusion-advection models for the effects and evolution of dispersal. _Discrete & Continuous Dynamical Systems_, 34(5):1701, 2014.
* [25] M Deepa Maheshvare, Soumyendu Raha, and Debnath Pal. A graph-based framework for multiscale modeling of physiological transport. _Frontiers in Network Physiology_, 1:18, 2022.
* [26] Francesco Di Giovanni, James Rowbottom, Benjamin P Chamberlain, Thomas Markovich, and Michael M Bronstein. Graph neural networks as gradient flows. _arXiv preprint arXiv:2206.10991_, 2022.
* [27] Moshe Eliasof, Eldad Haber, and Eran Treister. PDE-GCN: Novel architectures for graph neural networks motivated by partial differential equations. _Advances in Neural Information Processing Systems_, 34:3836-3849, 2021.
* [28] L. C. Evans. _Partial Differential Equations_. American Mathematical Society, San Francisco, 1998.
* [29] Bernold Fiedler and Arnd Scheel. Spatio-temporal dynamics of reaction-diffusion patterns. _Trends in nonlinear analysis_, pages 23-152, 2003.
* [30] Johannes Gasteiger, Stefan Weissenberger, and Stephan Gunnemann. Diffusion improves graph learning. In _Conference on Neural Information Processing Systems (NeurIPS)_, 2019.
* [31] G.H. Golub and C.F. van Loan. _Matrix Computations_. Johns Hopkins University Press, 1988.
* [32] Alessio Gravina, Davide Bacciu, and Claudio Gallicchio. Anti-symmetric dgn: a stable architecture for deep graph networks. _arXiv preprint arXiv:2210.09789_, 2022.
* [33] Fangda Gu, Heng Chang, Wenwu Zhu, Somayeh Sojoudi, and Laurent El Ghaoui. Implicit graph neural networks. _Advances in Neural Information Processing Systems_, 33:11984-11995, 2020.
* [34] Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In _Proceedings of the AAAI conference on artificial intelligence_, volume 33, pages 922-929, 2019.
* [35] E. Haber. _Computational Methods in Geophysical Electromagnetics_. SIAM, Philadelphia, 2014.
* [36] E. Haber and L. Ruthotto. Stable architectures for deep neural networks. _Inverse Problems_, 34(1), 2017.
* [37] Eldad Haber, Keegan Lensink, Eran Treister, and Lars Ruthotto. IMEXnet a forward stable deep neural network. In _International Conference on Machine Learning_, pages 2525-2534. PMLR, 2019.
* [38] M. Hochbruck, C. Lubich, and H. Selhofer. Exponential integrators for large systems of differential equations. _SIAM J. Sci. Comp._, 19(5), 1998.
* [39] Radim Hosek and Jonas Volek. Discrete advection-diffusion equations on graphs: Maximum principle and finite volumes. _Applied Mathematics and Computation_, 361:630-644, 2019.
* [40] Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, and Austin R Benson. Combining label propagation and simple models out-performs graph neural networks. _arXiv preprint arXiv:2010.13993_, 2020.
* [41] Hosagrahar V Jagadish, Johannes Gehrke, Alexandros Labrinidis, Yannis Papakonstantinou, Jignesh M Patel, Raghu Ramakrishnan, and Cyrus Shahabi. Big data and its technical challenges. _Communications of the ACM_, 57(7):86-94, 2014.
* [42] Siddhant M. Jayakumar, Wojciech M. Czarnecki, Jacob Menick, Jonathan Schwarz, Jack Rae, Simon Osindero, Yee Whye Teh, Tim Harley, and Razvan Pascanu. Multiplicative interactions and where to find them. In _International Conference on Learning Representations_, 2020.
* [43] Samet Kadioglu, Dana Knoll, Mark Sussman, and Richard Martineau. A second order jfnk-based imex method for single and multi-phase flows. In _Computational Fluid Dynamics 2010_, pages 549-554. Springer, 2011.
* [44] Alexei M Khokhlov. Propagation of turbulent flames in supernovae. _The Astrophysical Journal_, 449:695, 1995.
* [45] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. _International Conference on Learning Representations (ICLR)_, 2017.
* [46] Johannes Klicpera, Aleksandar Bojchevski, and Stephan Gunnemann. Combining neural networks with personalized pagerank for classification on graphs. In _International Conference on Learning Representations_, 2019.
* [47] Toshiyuki Koto. Imex runge-kutta schemes for reaction-diffusion equations. _Journal of Computational and Applied Mathematics_, 215(1):182-195, 2008.
* [48] R.J. LeVeque. _Numerical Methods for Conservation Laws_. Birkhauser, 1990.
* [49] Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In _International Conference on Learning Representations_, 2018.
* [50] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. _Advances in Neural Information Processing Systems_, 33:6755-6766, 2020.
* [51] Derek Lim, Felix Matthew Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Prasad Bhalerao, and Ser-Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, _Advances in Neural Information Processing Systems_, 2021.
* [52] Derek Lim, Xiuyu Li, Felix Hohne, and Ser-Nam Lim. New benchmarks for learning on non-homophilous graphs. _Workshop on Graph Learning Benchmarks, WWW_, 2021.
* [53] Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. PDE-net: Learning PDEs from data, 2018.
* [54] Zheng Lu, Chen Zhou, Jing Wu, Hao Jiang, and Songyue Cui. Integrating granger causality and vector auto-regression for traffic prediction of large-scale wlans. _KSII Transactions on Internet and Information Systems (TIIS)_, 10(1):136-151, 2016.
* [55] Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, and Doina Precup. Revisiting heterophily for graph neural networks. _Conference on Neural Information Processing Systems_, 2022.
* [56] Sitao Luan, Mingde Zhao, Chenqing Hua, Xiao-Wen Chang, and Doina Precup. Complete the missing half: Augmenting aggregation filtering with diversification for graph convolutional networks. _arXiv preprint arXiv:2008.08844_, 2020.
* [57] Morteza Mardani, Qingyun Sun, David Donoho, Vardan Papyan, Hatef Monajemi, Shreyas Vasanawala, and John Pauly. Neural proximal gradient descent for compressive imaging. _Advances in Neural Information Processing Systems_, 31, 2018.
* [58] Robert MM Mattheij, Sjoerd W Rienstra, and JHM Ten Thije Boonkkamp. _Partial differential equations: modeling, analysis, computation_. SIAM, 2005.
* [59] Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. _Information Retrieval_, 3(2):127-163, 2000.
* [60] Galileo Namata, Ben London, Lise Getoor, Bert Huang, and U Edu. Query-driven active surveying for collective classification. In _10th International Workshop on Mining and Learning with Graphs_, volume 8, page 1, 2012.
* [61] Hoang Nt and Takanori Maehara. Revisiting graph neural networks: All we have is low-pass filters. _arXiv preprint arXiv:1905.09550_, 2019.
* [62] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In _International Conference on Learning Representations_, 2020.
* [63] George Panagopoulos, Giannis Nikolentzos, and Michalis Vazirgiannis. Transfer Graph Neural Networks for Pandemic Forecasting. In _Proceedings of the 35th AAAI Conference on Artificial Intelligence_, 2021.
* [64] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B Schardl, and Charles E Leiserson. EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs. In _AAAI_, pages 5363-5370, 2020.
* [65] Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. In _International Conference on Learning Representations_, 2020.
* [66] Maziar Raissi. Deep hidden physics models: Deep learning of nonlinear partial differential equations. _The Journal of Machine Learning Research_, 19(1):932-955, 2018.
* [67] Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-Scale Attributed Node Embedding. _Journal of Complex Networks_, 9(2), 2021.
* [68] Benedek Rozemberczki, Paul Scherer, Yixuan He, George Panagopoulos, Alexander Riedel, Maria Astefanoaei, Oliver Kiss, Ferenc Beres, Guzman Lopez, Nicolas Collignon, et al. Pytorch geometric temporal: Spatiotemporal signal processing with neural machine learning models. In _Proceedings of the 30th ACM International Conference on Information & Knowledge Management_, pages 4564-4573, 2021.
* [69] T Konstantin Rusch, Michael M Bronstein, and Siddhartha Mishra. A survey on oversmoothing in graph neural networks. _arXiv preprint arXiv:2303.10993_, 2023.
* [70] T Konstantin Rusch, Ben Chamberlain, James Rowbottom, Siddhartha Mishra, and Michael Bronstein. Graph-coupled oscillator networks. In _International Conference on Machine Learning_, pages 18888-18909. PMLR, 2022.
* [71] T Konstantin Rusch, Benjamin P Chamberlain, Michael W Mahoney, Michael M Bronstein, and Siddhartha Mishra. Gradient gating for deep multi-rate learning on graphs. _arXiv preprint arXiv:2210.00513_, 2022.
* [72] Lars Ruthotto and Eldad Haber. Deep neural networks motivated by partial differential equations. _Journal of Mathematical Imaging and Vision_, 62:352-364, 2020.
* [73] MH Saadat, B Gjorgiev, L Das, and G Sansavini. Neural tangent kernel analysis of pinn for advection-diffusion equation. _arXiv preprint arXiv:2211.11716_, 2022.
* [74] Alvaro Sanchez-Gonzalez, Victor Bapst, Kyle Cranmer, and Peter Battaglia. Hamiltonian graph networks with ode integrators. _arXiv preprint arXiv:1909.12790_, 2019.
* [75] AW Seed. A dynamic and spatial scaling approach to advection forecasting. _Journal of Applied Meteorology and Climatology_, 42(3):381-388, 2003.
* [76] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. _AI magazine_, 29(3):93-93, 2008.
* [77] Youngjoo Seo, Michael Defferrard, Pierre Vandergheynst, and Xavier Bresson. Structured Sequence Modeling with Graph Convolutional Recurrent Networks. In _International Conference on Neural Information Processing_, pages 362-373. Springer, 2018.
* [78] Chao Shang, Jie Chen, and Jinbo Bi. Discrete graph structure learning for forecasting multiple time series. In _International Conference on Learning Representations_, 2021.
* [79] Zezhi Shao, Zhao Zhang, Fei Wang, and Yongjun Xu. Pre-training enhanced spatial-temporal graph neural network for multivariate time series forecasting. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_, pages 1567-1577, 2022.
* [80] Alex J Smola and Bernhard Scholkopf. A tutorial on support vector regression. _Statistics and computing_, 14:199-222, 2004.
* [81] Chao Song, Youfang Lin, Shengnan Guo, and Huaiyu Wan. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In _Proceedings of the AAAI conference on artificial intelligence_, volume 34, pages 914-921, 2020.
* [82] Susheel Suresh, Vinith Budde, Jennifer Neville, Pan Li, and Jianzhu Ma. Breaking the limit of graph neural networks by improving the assortativity of graphs with local mixing patterns. _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, 2021.
* [83] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. _Advances in neural information processing systems_, 27, 2014.
* [84] Aynaz Taheri and Tanya Berger-Wolf. Predictive Temporal Embedding of Dynamic Graphs. In _Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining_, pages 57-64, 2019.
* [85] Aynaz Taheri, Kevin Gimpel, and Tanya Berger-Wolf. Learning to represent the evolution of dynamic graphs with recurrent models. In _Companion Proceedings of The 2019 World Wide Web Conference_, WWW '19, page 301-307, 2019.
* [86] Matthew Thorpe, Tan Minh Nguyen, Hedi Xia, Thomas Strohmer, Andrea Bertozzi, Stanley Osher, and Bao Wang. GRAND++: Graph neural diffusion with a source term. In _International Conference on Learning Representations_, 2022.
* [87] Lafras Uys. _Coupling kinetic models and advection-diffusion equations to model vascular transport in plants, applied to sucrose accumulation in sugarcane_. PhD thesis, Stellenbosch: University of Stellenbosch, 2009.
* [88] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph Attention Networks. _International Conference on Learning Representations_, 2018.
* [89] Luminita A Vese and Tony F Chan. A multiphase level set framework for image segmentation using the mumford and shah model. _International journal of computer vision_, 50(3):271-293, 2002.
* [90] Yuelin Wang, Kai Yi, Xinliang Liu, Yu Guang Wang, and Shi Jin. Acmp: Allen-cahn message passing for graph neural networks with particle phase transition. _arXiv preprint arXiv:2206.05437_, 2022.
* [91] E Weinan. A Proposal on Machine Learning via Dynamical Systems. _Communications in Mathematics and Statistics_, 5(1):1-11, March 2017.
* [92] Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, and Chengqi Zhang. Connecting the dots: Multivariate time series forecasting with graph neural networks. In _Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining_, pages 753-763, 2020.
* [93] Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, and Chengqi Zhang. Graph wavenet for deep spatial-temporal graph modeling. In _Proceedings of the 28th International Joint Conference on Artificial Intelligence_, IJCAI'19, page 1907-1913. AAAI Press, 2019.
* [94] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, volume 80 of _Proceedings of Machine Learning Research_, pages 5453-5462. PMLR, 10-15 Jul 2018.
* [95] Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, and Danai Koutra. Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. _arXiv preprint arXiv:2102.06462_, 2021.
* [96] Liang Yang, Mengzhe Li, Liyang Liu, Chuan Wang, Xiaochun Cao, Yuanfang Guo, et al. Diverse message passing for attribute with heterophily. _Advances in Neural Information Processing Systems_, 34:4751-4763, 2021.
* [97] Kuangen Zhang, Ming Hao, Jing Wang, Clarence W de Silva, and Chenglong Fu. Linked dynamic graph cnn: Learning on point cloud via linking hierarchical features. _arXiv preprint arXiv:1904.10014_, 2019.
* [98] Tianjun Zhang, Zhewei Yao, Amir Gholami, Joseph E Gonzalez, Kurt Keutzer, Michael W Mahoney, and George Biros. ANODEV2: A coupled neural ode framework. _Advances in Neural Information Processing Systems_, 32, 2019.
* [99] Ling Zhao, Yujiao Song, Chao Zhang, Yu Liu, Pu Wang, Tao Lin, Min Deng, and Haifeng Li. T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction. _IEEE Transactions on Intelligent Transportation Systems_, 21(9):3848-3858, 2019.
* [100] Chuanpan Zheng, Xiaoliang Fan, Cheng Wang, and Jianzhong Qi. Gman: A graph multi-attention network for traffic prediction. In _Proceedings of the AAAI conference on artificial intelligence_, volume 34, pages 1234-1241, 2020.
* [101] Jiawei Zhu, Yujiao Song, Ling Zhao, and Haifeng Li. A3T-GCN: Attention Temporal Graph Convolutional Network for Traffic Forecasting. _arXiv preprint arXiv:2006.11583_, 2020.
* [102] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. _Advances in Neural Information Processing Systems_, 33:7793-7804, 2020.
* [103] Juntang Zhuang, Nicha Dvornek, Xiaoxiao Li, and James S Duncan. Ordinary differential equations on graph networks. 2020.
## Appendix A The Behavior of the Operator Splitting Approach
The advantage of Operator Splitting (OS) is that it allows the individual treatment of each component of the ODE separately, thus obtaining the appropriate qualitative behavior. This is especially beneficial in the context of advection, where it is difficult to obtain stability, that is, to satisfy the CFL condition [48], as well as to obtain mass conservation. Furthermore, OS is beneficial for the diffusion component, where implicit methods guarantee stability [4] compared to unstable explicit discretizations.
We now discuss the behavior of the OS approach for integrating the ADR ODE in Equation (2). The theory behind OS can be analyzed in the linear case, and in the case of non-linear equations, linearization is typically assumed [58]. We now consider a linear ODE of the form
\[\frac{d\mathbf{U}(t)}{dt}=\mathbf{A}\mathbf{U}+\mathbf{D}\mathbf{U}+\mathbf{R }\mathbf{U}, \tag{9}\]
where \(\mathbf{A}\), \(\mathbf{D}\), and \(\mathbf{R}\) denote the advection, diffusion, and reaction operators, respectively, and \(\mathbf{U}(t)\) denotes the node features at time \(t\).
Suppose that we are interested in computing \(\mathbf{U}(t+\delta t)\). For this constant-coefficient ODE system, the _analytic_, exact solution is given by [28]:
\[\mathbf{U}(t+\delta t)=\exp\left(\delta t(\mathbf{A}+\mathbf{D}+\mathbf{R}) \right)\mathbf{U}(t). \tag{10}\]
The following Lemma is easily proven using Taylor series of the matrix exponential function [4]:
**Lemma 3**.: _Let \(\mathbf{Q}=\exp(\delta t(\mathbf{A}+\mathbf{D}+\mathbf{R}))\) where \(\mathbf{A}\), \(\mathbf{D}\), and \(\mathbf{R}\) are matrices that do not share their eigenvectors. Then the discrepancy between the exact and OS solution operator reads_
\[\mathbf{Q}-\exp(\delta t\mathbf{R})\exp(\delta t\mathbf{D})\exp(\delta t \mathbf{A})=\mathcal{O}(\delta t^{2})\]
**Remark 1**.: _If the eigenvectors of \(\mathbf{A},\mathbf{D},\mathbf{R}\) from Lemma 3 are shared, then the matrix exponents commute, and the discrepancy is zero._
Following Lemma 3, it holds that the solution of the ADR ODE can be expressed as a sequential process of three separate problems, with an error of \(\mathcal{O}(\delta t^{2})\) compared to the exact solution, as follows:
\[\exp\left(\delta t(\mathbf{A}+\mathbf{D}+\mathbf{R})\right)\mathbf{U}(t) = \exp\left(\delta t\mathbf{R}\right)\exp\left(\delta t\mathbf{D}\right)\exp\left(\delta t\mathbf{A}\right)\mathbf{U}(t)+\mathcal{O}(\delta t^{2})\] \[= \underbrace{\exp(\delta t\mathbf{R})\underbrace{\exp\left(\delta t\mathbf{D}\right)\underbrace{\exp\left(\delta t\mathbf{A}\right)\mathbf{U}(t)}_{\text{Advection}}}_{\text{Advection}-\text{Diffusion}}}_{\text{Advection}-\text{Diffusion}-\text{Reaction}}+\mathcal{O}(\delta t^{2})\]
Note that \(\mathbf{U}^{(l+1/3)}=\exp(\delta t\mathbf{A})\mathbf{U}^{(l)}\) is the exact solution [38] of the advection system \(\frac{d\mathbf{U}(t)}{dt}=\mathbf{A}\mathbf{U}\). Similarly, \(\mathbf{U}^{(l+2/3)}=\exp(\delta t\mathbf{D})\mathbf{U}^{(l+1/3)}\) is the exact solution of the diffusion system \(\frac{d\mathbf{U}(t)}{dt}=\mathbf{D}\mathbf{U}\), and \(\mathbf{U}^{(l+1)}=\exp(\delta t\mathbf{R})\mathbf{U}^{(l+2/3)}\) is the exact solution of the reaction system \(\frac{d\mathbf{U}(t)}{dt}=\mathbf{R}\mathbf{U}\). Thus, operator splitting can be viewed as taking a step of advection, followed by a step of diffusion, that is finally followed by a reaction step. Note that the error above is of the same magnitude as each step of the forward Euler integration scheme, often used in neural networks, e.g., in ResNet (see [A] in the additional references below).
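The discrepancy bound of Lemma 3 is easy to check numerically. The sketch below is our own illustration, with random non-commuting matrices standing in for \(\mathbf{A}\), \(\mathbf{D}\), and \(\mathbf{R}\); the ratio \(\mathrm{error}/\delta t^{2}\) stays roughly constant as \(\delta t\) shrinks, consistent with the \(\mathcal{O}(\delta t^{2})\) claim.

```python
# Numerical check of Lemma 3: exact vs. operator-splitting solution operators.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 8
A, D, R = (rng.standard_normal((n, n)) for _ in range(3))  # generic, non-commuting

for dt in [0.1, 0.05, 0.025]:
    exact = expm(dt * (A + D + R))
    split = expm(dt * R) @ expm(dt * D) @ expm(dt * A)
    err = np.linalg.norm(exact - split, 2)
    print(f"dt={dt:.3f}  ||Q - Q_split|| = {err:.2e}  err/dt^2 = {err / dt**2:.2f}")
```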
## Appendix B The Properties of the Graph Discretized Advection Operator
In this section, we prove Lemma 1 and Lemma 2 from the main text. For convenience, we repeat the Lemmas, followed by their proofs. We start by noting the following remark:
**Remark 2**.: _The advection operator in Equation (5) does not mix the node feature \(\mathbf{U}\) channels in our ADR-GNN._
The importance of this remark is that it allows us to analyze the properties of the advection operator per-channel, or, alternatively, assuming a single channel, \(c=1\), which we assume in our following proofs.
**Lemma 1**.: _Define the mass of the graph node features \(\mathbf{U}^{(l)}\in\mathbb{R}^{n\times c}\) as the scalar \(\rho^{(l)}=\sum\mathbf{U}^{(l)}\). Then the advection operator in Equation (5) is mass conserving, i.e., \(\rho^{(l+1/3)}=\rho^{(l)}\)._
Proof.: Without loss of generality, and following Remark 2, let us assume a single channel, and consider the mass of the node features \(\rho=\sum\mathbf{U}\), before and after applying an advection layer as described in Equation (5). The input node features have a total mass of \(\rho^{(l)}=\sum\mathbf{U}^{(l)}\). The total mass of the output of an advection layer reads:
\[\rho^{(l+1/3)}=\sum_{i}\mathbf{U}^{(l+1/3)}_{i} = \sum_{i}\left(\mathbf{U}^{(l)}_{i}+h\sum_{j\in\mathcal{N}_{i}} \mathbf{V}^{(l)}_{j\to i}\mathbf{U}^{(l)}_{j}-h\mathbf{U}^{(l)}_{i}\right) \tag{11a}\] \[= \sum_{i}\left(\mathbf{U}^{(l)}_{i}+h\sum_{j}\mathbf{V}^{(l)}_{j \to i}\mathbf{U}^{(l)}_{j}-h\mathbf{U}^{(l)}_{i}\right)\] (11b) \[= \sum_{i}\mathbf{U}^{(l)}_{i}+h\left(\sum_{i}\sum_{j}\mathbf{U}^{ (l)}_{j}\left(\mathbf{V}^{(l)}_{j\to i}\right)-\sum_{i}\mathbf{U}^{(l)}_{i}\right)\] (11c) \[= \sum_{i}\mathbf{U}^{(l)}_{i}+h\left(\sum_{j}\mathbf{U}^{(l)}_{j} \left(\sum_{i}\mathbf{V}^{(l)}_{j\to i}\right)-\sum_{i}\mathbf{U}^{(l)}_{i}\right)\] (11d) \[= \sum_{i}\mathbf{U}^{(l)}_{i}+h\left(\sum_{j}\mathbf{U}^{(l)}_{j} -\sum_{i}\mathbf{U}^{(l)}_{i}\right)=\sum_{i}\mathbf{U}^{(l)}_{i}=\rho^{(l)}. \tag{11e}\]
The transition between Equations (11a) and (11b) is valid because for \(j\notin\mathcal{N}_{i}\), the edge weight is zero, i.e., \(\mathbf{V}^{(l)}_{j\to i}=0\), and therefore the summation does not change. Also, the transition between Equations (11d) and (11e) holds because of the constraint on the sum of outbound edge weights to be equal to \(1\), i.e. \(\sum_{i\in\mathcal{N}_{j}}\mathbf{V}^{(l)}_{j\to i}=1\). Therefore, because \(\rho^{(l+1/3)}=\rho^{(l)}\), our graph discretized advection operator is mass preserving.
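Lemma 1 can also be confirmed numerically; the following is a minimal sketch of our own, with a dense random weight matrix standing in for \(\mathbf{V}^{(l)}\) and a single channel (Remark 2).

```python
# Numerical check of Lemma 1 (mass conservation of the discrete advection step).
# W[i, j] holds the directed edge weight V_{j -> i}; normalizing each column
# enforces the constraint that every node's outbound weights sum to 1.
import numpy as np

rng = np.random.default_rng(0)
n, h = 6, 0.5
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=0, keepdims=True)     # sum_i V_{j->i} = 1 for every sender j

U = rng.random((n, 1))                # single-channel node features
U_next = U + h * (W @ U) - h * U      # the advection step of Equation (5)
print(U.sum(), U_next.sum())          # equal masses, up to floating point error
```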
**Definition 1**.: _A neural operator \(F\) that acts on node features \(\mathbf{U}\) is (Lyapunov) stable if for every \(\epsilon>0\) there exists \(\delta>0\) such that for every pair of inputs \(\mathbf{U},\tilde{\mathbf{U}}\) satisfying \(\|\mathbf{U}-\tilde{\mathbf{U}}\|\leq\delta\), it holds that \(\|F(\mathbf{U})-F(\tilde{\mathbf{U}})\|\leq\epsilon\)._
**Lemma 2**.: _The advection operator in Equation (5) is stable._
Proof.: Without loss of generality, and following Remark 2, let us consider a single channel, and let \(\mathbf{V}^{(l)}\) be a sparse matrix such that \(\mathbf{V}^{(l)}_{ij}=\mathbf{V}^{(l)}_{i\to j}\). To show stability, we first observe that the matrix form of the scalar formulation of the advection _layer_ in Equation (5) is given by:
\[\mathbf{U}^{(l+1/3)}=\mathbf{I}\mathbf{U}^{(l)}+h\mathbf{V}^{(l)}\mathbf{U}^ {(l)}-h\mathbf{U}^{(l)}=\underbrace{\left((1-h)\mathbf{I}+h\mathbf{V}^{(l)} \right)}_{\mathbf{A}^{(l)}}\mathbf{U}^{(l)}. \tag{12}\]
Because of the demand that \(\mathbf{V}^{(l)}\) is normalized (i.e., \(\sum_{j\in\mathcal{N}_{i}}\mathbf{V}^{(l)}_{i\to j}=1\)), and the advection weights satisfy \(0\leq\mathbf{V}^{(l)}_{i\to j}\leq 1\), the advection operator \(\mathbf{A}^{(l)}\) is a column stochastic non-negative matrix. By the Perron-Frobenius theorem, such matrices are known to have a spectral radius bounded by 1 (see [B] in additional appendix references), and hence the advection operator is stable.
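The Perron-Frobenius bound is likewise easy to illustrate numerically; the sketch below (ours, with a dense random column-stochastic stand-in for \(\mathbf{V}^{(l)}\)) forms the operator of Equation (12) and prints its spectral radius.

```python
# Numerical illustration of Lemma 2: A = (1-h)I + hW with column-stochastic,
# non-negative W has spectral radius at most 1.
import numpy as np

rng = np.random.default_rng(1)
n, h = 10, 0.7
W = rng.random((n, n))
W /= W.sum(axis=0, keepdims=True)            # column-stochastic, non-negative
A = (1.0 - h) * np.eye(n) + h * W            # the advection operator of Equation (12)
print(np.abs(np.linalg.eigvals(A)).max())    # <= 1, as the Perron-Frobenius bound
```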
## Appendix C Architectures and Training Details
As discussed in the main paper, we propose two architectures, depending on the type of dataset - static (e.g., Cora), or spatio-temporal (e.g., PEMS-BAY). In the following subsections, we elaborate on these architectures.
### Node Classification: ADR-GNN\({}_{\mathrm{S}}\)
We now elaborate on the 'static' architecture ADR-GNN\({}_{\mathrm{S}}\) used in our node classification experiments. The overall architecture is similar to standard GNN architectures for node classification, such as GCN [45] and GCNII [18]. It is composed of an initial embedding layer (corresponding to Equation (2b) in the main paper), \(L\) graph neural ADR layers, and a classifier, as described in Equation (3) in the main paper. The complete flow of ADR-GNN\({}_{\mathrm{S}}\) is described in Algorithm 3. To train ADR-GNN\({}_{\mathrm{S}}\) on node classification datasets, we minimize the cross-entropy loss between the ground-truth node labels \(\mathbf{Y}\) and the predicted node labels \(\tilde{\mathbf{Y}}\), as is standard in GNNs, and similar to [45].
```
Input: Node features \(\mathbf{X}\in\mathbb{R}^{n\times c_{in}}\) Output: Predicted node labels \(\tilde{\mathbf{Y}}\in\mathbb{R}^{n\times c_{out}}\)
1:procedureADR-GNNs
2:\(\mathbf{X}\leftarrow\mathrm{Dropout}(\mathbf{X},p)\)
3:\(\mathbf{U}^{(0)}=g_{in}(\mathbf{X})\)
4:for\(l=0\dots L-1\)do
5:\(\mathbf{U}^{(l)}\leftarrow\mathrm{Dropout}(\mathbf{U}^{(l)},p)\)
6: Advection: \(\mathbf{U}^{(l+1/3)}=\mathbf{U}^{(l)}+h\mathbf{D}\mathbf{I}\mathbf{V}(\mathbf{ V}(\mathbf{U}^{(l)};\boldsymbol{\theta}_{a}^{(l)})\mathbf{U}^{(l)})\)
7: Diffusion: \(\mathbf{U}^{(l+2/3)}=\mathrm{mat}\left((\mathbf{I}+h\mathbf{K}(\boldsymbol{ \theta}_{d}^{(l)})\otimes\hat{\mathbf{L}})^{-1}\mathrm{vec}(\mathbf{U}^{(l+1 /3)})\right)\)
8: Reaction: \(\mathbf{U}^{(l+1)}=\mathbf{U}^{(l+2/3)}+hf(\mathbf{U}^{(l+2/3)},\mathbf{U}^{(0) };\boldsymbol{\theta}_{r}^{(l)})\)
9:endfor
10:\(\mathbf{U}^{(L)}\leftarrow\mathrm{Dropout}(\mathbf{U}^{(L)},p)\)
11:\(\tilde{\mathbf{Y}}=g_{out}(\mathbf{U}^{(L)})\)
12: Return \(\tilde{\mathbf{Y}}\)
13:endprocedure
```
**Algorithm 3** ADR-GNN\({}_{\mathrm{S}}\) Architecture Flow
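To make Lines 6-8 of Algorithm 3 concrete, the sketch below implements one ADR layer in PyTorch for a dense adjacency matrix. The module layout, the edge weights shared across channels, and the dense linear solve in place of conjugate gradients are our simplifications for readability, not the reference implementation (the paper computes the advection weights with the MLPs of Algorithm 2).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADRLayer(nn.Module):
    """One advection-diffusion-reaction step (simplified sketch of Algorithm 3)."""

    def __init__(self, channels, h=0.25):
        super().__init__()
        self.h = h
        self.edge_mlp = nn.Sequential(nn.Linear(2 * channels, channels),
                                      nn.ReLU(), nn.Linear(channels, 1))
        self.k = nn.Parameter(torch.zeros(channels))      # per-channel diffusivity
        self.react = nn.Sequential(nn.Linear(2 * channels, channels),
                                   nn.ReLU(), nn.Linear(channels, channels))

    def forward(self, U, U0, adj, L_hat):
        # U: (n, c) node features; adj: (n, n) adjacency (assumed to include
        # self-loops so every node has an outbound edge); L_hat: scaled Laplacian.
        n, c, h = U.size(0), U.size(1), self.h
        # Advection: W[i, j] approximates V_{j->i}; softmax over receivers i
        # makes every sender's outbound weights sum to 1 (mass conservation).
        senders = U.unsqueeze(0).expand(n, n, c)          # entry [i, j] = U[j]
        receivers = U.unsqueeze(1).expand(n, n, c)        # entry [i, j] = U[i]
        logits = self.edge_mlp(torch.cat([senders, receivers], -1)).squeeze(-1)
        W = torch.softmax(logits.masked_fill(adj == 0, -1e9), dim=0)
        U = (1 - h) * U + h * W @ U                       # Equation (12)
        # Diffusion: implicit step (I + h k_c L_hat) u_c = u_c, per channel c.
        eye = torch.eye(n)
        k = F.softplus(self.k)
        U = torch.stack([torch.linalg.solve(eye + h * k[i] * L_hat, U[:, i])
                         for i in range(c)], dim=1)
        # Reaction: pointwise MLP that also sees the initial embedding U0.
        return U + h * self.react(torch.cat([U, U0], dim=-1))
```

A full network stacks \(L\) such layers between the embedding \(g_{in}\) and the classifier \(g_{out}\), with dropout applied as in Algorithm 3.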
### Spatio-Temporal Node Forecasting: ADR-GNN\({}_{\text{T}}\)
The typical task in spatio-temporal datasets is to predict future quantities (e.g., driving speed) given several previous time steps (also called frames). Formally, one is given an input tensor \(\mathbf{X}_{temporal}\in\mathbb{R}^{n\times\tau_{in}c_{in}}\), where \(\tau_{in}\) is the number of input (observed) time frames, and the goal is to predict \(\tau_{out}\) time frames ahead, i.e., \(\mathbf{Y}_{\mathrm{temporal}}\in\mathbb{R}^{n\times\tau_{out}c_{out}}\). This is in contrast to 'static' datasets such as Cora [59], where input node features \(\mathbf{X}\in\mathbb{R}^{n\times c_{in}}\) are given, and the goal is to fit to some ground-truth \(\mathbf{Y}\in\mathbb{R}^{n\times c_{out}}\). In this context, a 'static' dataset can be thought of as setting \(\tau_{in}=\tau_{out}=1\) in the spatio-temporal setting. We show the overall flow of our 'temporal' architecture ADR-GNN\({}_{\text{T}}\) in Algorithm 4.5
Footnote 5: In Algorithm 4, \(\oplus\) denotes channel-wise concatenation.
In our spatio-temporal ADR-GNN\({}_{\text{T}}\), we update the hidden state feature matrix \(\mathbf{U}^{(l)}_{\text{state}}\) based on the hidden historical feature matrix \(\mathbf{U}^{(l)}_{\text{hist}}\), as shown in Lines 6-9 in Algorithm 4.
Similarly to attention models (see [C] in the additional appendix references), we incorporate a time embedding based on the concatenation of sine and cosine function evaluations with varying frequencies, multiplied by the time of the input frames, as input to our ADR-GNN\({}_{\text{T}}\), denoted by \(\mathbf{T}_{\mathrm{emb}}\in\mathbb{R}^{n\times\tau_{in}c_{t}}\). We choose the number of frequencies to be 10, and the concatenation of both sine and cosine leads to \(c_{t}=20\). We note that the time embedding is computed in a pre-processing fashion. To initialize the hidden feature matrices \(\mathbf{U}^{(0)}_{\text{state}},\ \mathbf{U}^{(0)}_{\text{hist}}\), we embed the input data \(\mathbf{X}_{\mathrm{temporal}}\), concatenated with \(\mathbf{T}_{\mathrm{emb}}\), using two fully connected layers, as described in Lines 3-4 in Algorithm 4. 6
Footnote 6: In Python notations, \(\mathbf{X}_{\mathrm{temporal}}[:,-c_{in}]\) extracts the last \(c_{in}\) entries of the second dimension of \(\mathbf{X}_{\mathrm{temporal}}\), which returns the features of the last time frame.
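A minimal sketch of such a sinusoidal time embedding is given below; the geometric frequency schedule is our assumption, as the paper only specifies the use of 10 frequencies and the sine/cosine concatenation giving \(c_{t}=20\).

```python
# Sinusoidal time embedding: 10 frequencies x {sin, cos} -> c_t = 20 features.
# The frequency schedule (powers of two) is an illustrative assumption.
import torch

def time_embedding(t, num_freqs=10):
    """t: (tau_in,) tensor of frame times -> (tau_in, 2 * num_freqs) embedding."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=t.dtype)
    angles = t[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

print(time_embedding(torch.arange(12.0)).shape)  # torch.Size([12, 20])
```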
For the Chickenpox Hungary, PedalMe London, and Wikipedia Math datasets, we minimize the mean squared error (MSE) between the ground truth future node quantities and the quantities predicted by \(\text{ADR-GNN}_{\text{T}}\), similar to the training procedure of the rest of the considered methods in Table 3. Specifically, following [68], the goal is to predict the node quantities of the next time frame given 4 previous time frames. On the METR-LA and PEMS-BAY datasets we minimize the mean absolute error (MAE), similar to [49], where we also follow the standard 12 previous time frames as inputs, and consider 3, 6, and 12 future time frame node quantity predictions as output.
```
0: Node features \(\mathbf{X}_{\text{temporal}}\in\mathbb{R}^{n\times\tau_{in}c_{in}}\), time embedding \(\mathbf{T}_{\text{emb}}\in\mathbb{R}^{n\times\tau_{in}c_{t}}\) Output: Predicted future node quantities \(\tilde{\mathbf{Y}}\in\mathbb{R}^{n\times\tau_{out}c_{out}}\)
1:procedure\(\text{ADR-GNN}_{\text{T}}\)
2:\(\mathbf{X}_{\text{temporal}}\leftarrow\text{Dropout}(\mathbf{X}_{\text{ temporal}},p)\)
3:\(\mathbf{T}_{\text{emb}}\leftarrow\text{g}^{\text{time-embed}}(\mathbf{T}_{ \text{emb}})\)
4:\(\mathbf{U}_{\text{state}}^{(0)}=g_{in}^{\text{state}}(\mathbf{X}_{\text{temporal}}[:,-c_{in}]\oplus\mathbf{T}_{\text{emb}})\)
5:\(\mathbf{U}_{\text{hist}}^{(0)}=g_{in}^{\text{hist}}(\mathbf{X}_{\text{temporal}}\oplus\mathbf{T}_{\text{emb}})\)
6:for\(l=0\ldots L-1\)do
7:\(\mathbf{U}_{\text{state}}^{(l)}\leftarrow\text{Dropout}(\mathbf{U}_{\text{ state}}^{(l)},p)\)
8: Advection: \(\mathbf{U}_{\text{state}}^{(l+1/3)}=\mathbf{U}_{\text{state}}^{(l)}+h\mathbf{D}\mathbf{I}\mathbf{V}(\mathbf{V}(\mathbf{U}_{\text{hist}}^{(l)};\boldsymbol{\theta}_{a}^{(l)})\mathbf{U}_{\text{state}}^{(l)})\)
9: Diffusion: \(\mathbf{U}_{\text{state}}^{(l+2/3)}=\text{mat}\left((\mathbf{I}+h\mathbf{K}^{(l )}\otimes\hat{\mathbf{L}})^{-1}\text{vec}(\mathbf{U}_{\text{state}}^{(l+1/3)})\right)\)
10: Reaction: \(\mathbf{U}_{\text{state}}^{(l+1)}=\mathbf{U}_{\text{state}}^{(l+2/3)}+hf(\mathbf{U}_{\text{state}}^{(l+2/3)},\mathbf{U}_{\text{hist}}^{(0)};\boldsymbol{\theta}_{r})\)
11:\(\mathbf{U}_{\text{hist}}^{(l+1)}=g_{l}^{\text{hist}}(\mathbf{U}_{\text{hist}}^ {(l)}\oplus\mathbf{U}_{\text{state}}^{(l+1)}\oplus\mathbf{T}_{\text{emb}})\)
12:endfor
13:\(\mathbf{U}_{\text{state}}^{(L)}\leftarrow\text{Dropout}(\mathbf{U}_{\text{ state}}^{(L)},p)\)
14:\(\tilde{\mathbf{Y}}=g_{\text{out}}^{\text{state}}(\mathbf{U}_{\text{state}}^{(L)})\)
15: Return \(\tilde{\mathbf{Y}}\)
16:endprocedure
```
**Algorithm 4**\(\text{ADR-GNN}_{\text{T}}\) Architecture Flow
## Appendix D Computational Complexity and Time
**Complexity.** Our ADR-GNN architectures include four main operations: (i) input/output embedding, (ii) advection, (iii) diffusion, and (iv) reaction layers.
The complexity of (i) and (iv) is \(\mathcal{O}(|\mathcal{V}|c^{2})\), where \(c\) is the number of channels, because they are composed of pointwise MLPs. The complexity of (ii) is \(\mathcal{O}((|\mathcal{V}|+|\mathcal{E}|)c^{2})\) because it requires the computation of the edge weights \(\mathbf{V}\), as shown in Algorithm 2, followed by a multiplication by the node features, as shown in Equation (5). Similarly, (iii) is of complexity \(\mathcal{O}((|\mathcal{V}|+|\mathcal{E}|)c)\), because it is required to multiply the scaled Laplacian with the node features. Note that, as discussed in Section 3.4, and similar to [14; 70], we do not explicitly invert the matrix \(\mathbf{I}+h\mathbf{K}^{(l)}\otimes\hat{\mathbf{L}}\), but rather use the conjugate-gradients (CG) method to solve a system of equations. Below, we further discuss the use of CG to solve the system of equations.
**Implicit solution of the diffusion term.** The diffusion step in our ADR-GNN, as shown in Section 3.4, requires the solution of a linear system at each step. As previously discussed, this is solved by using the CG method. Thus, the backward (differentiation) function in most common software packages, such as PyTorch, tracks the CG iterations. This tracking can be avoided by using implicit differentiation, which is the backbone of implicit methods (see [35] for a detailed derivation). In the context of deep learning, implicit differentiation was used for implicit neural networks [33]. The basic idea is to use implicit differentiation of the equation
\[(\mathbf{I}+h\mathbf{K}^{(l)}\otimes\hat{\mathbf{L}})\,\mathrm{vec}(\mathbf{U}^{(l+2/3)})=\mathrm{vec}(\mathbf{U}^{(l+1/3)}) \tag{13}\]
with respect to \(\mathbf{K}^{(l)}\) and thus avoid the tracking of the CG iterations if many are needed.
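The idea can be sketched as a custom autograd function: the forward pass runs CG outside of autograd, and the backward pass obtains the gradients from one additional (adjoint) CG solve, so the CG iterations themselves are never tracked. The sketch below is our illustration for a single channel with a symmetric system matrix \(\mathbf{M}=\mathbf{I}+h\mathbf{K}^{(l)}\otimes\hat{\mathbf{L}}\), not the authors' code.

```python
import torch

def cg(matvec, b, iters=100, tol=1e-10):
    """Plain conjugate gradients for a symmetric positive definite system."""
    x = torch.zeros_like(b)
    r = b.clone(); p = r.clone(); rs = r.dot(r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / p.dot(Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r.dot(r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

class ImplicitDiffusion(torch.autograd.Function):
    @staticmethod
    def forward(ctx, b, M):                  # solves M x = b without tracking CG
        x = cg(lambda v: M @ v, b)
        ctx.save_for_backward(M, x)
        return x

    @staticmethod
    def backward(ctx, grad_out):
        M, x = ctx.saved_tensors
        gb = cg(lambda v: M @ v, grad_out)   # adjoint solve (M is symmetric)
        gM = -torch.outer(gb, x)             # implicit differentiation w.r.t. M
        return gb, gM

# Usage: x = ImplicitDiffusion.apply(b, M)
```

The gradient with respect to \(\mathbf{K}^{(l)}\) then follows from `gM` by the chain rule through \(\mathbf{M}=\mathbf{I}+h\mathbf{K}^{(l)}\otimes\hat{\mathbf{L}}\).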
**Runtimes.** In addition to the complexity analysis above, we provide the measured runtimes in Table 6. As discussed in Section 5, learning the advection weights requires an increased computational effort. However, it can significantly improve the considered task metric. For convenience, in Table
6, in addition to the runtimes we also report the obtained task metric. Importantly, the improved metrics offered by ADR-GNN\({}_{\mathrm{S}}\) with 64 channels and 4 layers are not simply due to the increased costs: enlarging GCN and GAT from the standard 2 layers and 64 channels to 2 layers and 256 channels (wide), or to 64 layers and 64 channels (deep), does not yield similar improvements. We measure the runtimes using an Nvidia RTX 3090 with 24GB of memory, which is the same GPU used to conduct our experiments.
## Appendix E Hyperparameters
All hyperparameters were determined by grid search; the ranges and sampling distributions are provided in Table 7. Note that, as discussed after Equation (8), we may add a BatchNorm layer before applying the non-linear activation \(\sigma\) to the reaction term; we therefore treat the use of BatchNorm as a hyperparameter in Table 7.
## Appendix F Datasets
We report the statistics of the datasets used in our experiments in Tables 8 and 9 for the node classification and spatio-temporal node forecasting datasets, respectively. All datasets are publicly available, and appropriate references to the data sources are provided in the main paper.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Metric & GCN & GAT & GCN (wide) & GAT (wide) & GCN (deep) & GAT (deep) & ADR-GNN\({}_{\mathrm{S}}\) \\ \hline Training time & 7.71 & 14.59 & 14.32 & 36.63 & 95.11 & 184.51 & 35.41 \\ Inference time & 1.75 & 2.98 & 2.86 & 7.57 & 12.93 & 38.96 & 8.24 \\ Parameters & 104 & 105 & 565 & 567 & 358 & 360 & 210 \\ Accuracy & 85.77 & 83.13 & 85.18 & 83.37 & 38.62 & 33.40 & 89.43 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Training and inference GPU runtimes (milliseconds), number of parameters (thousands), and node classification accuracy (%) on Cora.
\begin{table}
\begin{tabular}{l c c} \hline \hline Hyperparameter & Range & Distribution \\ \hline input/output embedding learning rate & [1e-4, 1e-1] & log uniform \\ advection learning rate & [1e-4, 1e-1] & log uniform \\ diffusion learning rate & [1e-4, 1e-1] & log uniform \\ reaction learning rate & [1e-4, 1e-1] & log uniform \\ input/output embedding weight decay & [0, 1e-2] & uniform \\ advection weight decay & [0, 1e-2] & uniform \\ diffusion weight decay & [0, 1e-2] & uniform \\ reaction weight decay & [0, 1e-2] & uniform \\ input/output dropout & [0, 0.9] & uniform \\ hidden layer dropout & [0, 0.9] & uniform \\ use BatchNorm & \{ yes / no \} & discrete uniform \\ step size h & [1e-3, 1] & uniform \\ layers & \{ 2, 4, 8, 16, 32, 64 \} & discrete uniform \\ channels & \{ 8, 16, 32, 64, 128, 256 \} & discrete uniform \\ \hline \hline \end{tabular}
\end{table}
Table 7: Hyperparameter ranges
## Appendix G Experimental Results
### Additional Comparisons and Standard Deviations on Node Classification
To allow a more comprehensive comparison, and because some of the considered methods did not report the standard deviation around the mean accuracy, we now provide the experimental results from Section 4.1 on the Cora, Citeseer, and Pubmed datasets in Table 10, and on the Cornell, Texas, Wisconsin, Squirrel, Film, and Chameleon datasets in Table 11, with the standard deviation around the mean of the 10 splits from [65]. Note that here we do not color the tables, because some of the second- or third-best performing models in the main paper did not report the accuracy standard deviation, and therefore coloring Tables 10-11 would change the order of the best performing models.
### Synthetic Feature Transportation
In this experiment, we generate an Erdos-Renyi graph with node set \(\mathcal{V}_{\mathrm{ER}}\) and a randomly chosen source node \(\mathbf{v}_{\mathrm{ER}}^{\mathrm{src}}\). We also choose a random node \(\mathbf{v}_{\mathrm{dst}}\in\mathcal{V}_{\mathrm{ER}}\setminus\mathbf{v}_{\mathrm{ER}}^{\mathrm{src}}\). The goal is to transport all the mass from all the nodes to \(\mathbf{v}_{\mathrm{dst}}\), such that \(\mathbf{v}_{\mathrm{dst}}\) will have a feature of 1, and the rest of the nodes in the graph will have a feature of 0. That is, all the features in the graph are concentrated in \(\mathbf{v}_{\mathrm{dst}}\). In our example in Figure 2, we use this protocol to generate a graph with 5 nodes, and we show the approximation obtained with the advection, diffusion, and reaction terms.
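Concretely, one instance of the task can be generated as in the sketch below. This is our reading of the protocol; the input feature distribution is an assumption, chosen to be non-negative and sum to 1 so that the target is reachable by a mass-conserving operator.

```python
# Generating one instance of the synthetic transportation task (illustrative).
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n = 5
G = nx.erdos_renyi_graph(n, p=0.5, seed=0)           # the graph on V_ER
v_src = int(rng.integers(n))                         # source node v_ER^src
v_dst = int(rng.choice([v for v in G.nodes if v != v_src]))
x = rng.random(n); x /= x.sum()                      # assumed input mass distribution
y = np.zeros(n); y[v_dst] = 1.0                      # target: all mass at v_dst
```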
### The Dirichlet Energy of ADR-GNN
We follow [71] and define the Dirichlet energy of the graph node features as:
\[\mathrm{E}(\mathbf{U}^{(l)})=\frac{1}{|\mathcal{V}|}\sum_{i\in\mathcal{V}} \sum_{j\in\mathcal{N}_{i}}||\mathbf{U}_{i}^{(l)}-\mathbf{U}_{j}^{(l)}||_{2}^{2}. \tag{14}\]
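This quantity is straightforward to track during training; below is a short sketch, assuming the usual \((2,|\mathcal{E}|)\) edge-index convention with both directions of every undirected edge stored.

```python
# Dirichlet energy of Equation (14). `edge_index` stores each undirected edge in
# both directions, so summing over its columns realizes the double sum over i, N_i.
import torch

def dirichlet_energy(U, edge_index):
    src, dst = edge_index
    return ((U[src] - U[dst]) ** 2).sum() / U.size(0)
```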
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Cora & Citeseer & Pubmed \\ Homophily & 0.81 & 0.80 & 0.74 \\ \hline GCN & 85.77 \(\pm\) 1.27 & 73.68 \(\pm\) 1.36 & 88.13 \(\pm\) 0.50 \\ GAT & 86.37 \(\pm\) 0.48 & 74.32 \(\pm\) 1.23 & 87.62 \(\pm\) 1.10 \\ GCNII\({}^{\dagger}\) & 88.49 \(\pm\) 1.25 & 77.13 \(\pm\) 1.48 & 90.30 \(\pm\) 0.43 \\ Geom-GCN\({}^{\dagger}\) & 85.27 \(\pm\) 1.57 & 77.99 \(\pm\) 1.15 & 90.05 \(\pm\) 0.47 \\ MixHop & 87.61 \(\pm\) 2.03 & 76.26 \(\pm\) 2.95 & 85.31 \(\pm\) 2.29 \\ WRGAT & 88.20 \(\pm\) 2.26 & 76.81 \(\pm\) 1.89 & 88.52 \(\pm\) 0.92 \\ NSD\({}^{\dagger}\) & 87.14 \(\pm\) 1.13 & 77.14 \(\pm\) 1.57 & 89.49 \(\pm\) 0.40 \\ GGCN & 87.95 \(\pm\) 1.05 & 77.14 \(\pm\) 1.45 & 89.15 \(\pm\) 0.37 \\ H2GCN & 87.87 \(\pm\) 1.20 & 77.11 \(\pm\) 1.57 & 89.49 \(\pm\) 0.38 \\ LINKX & 84.64 \(\pm\) 1.13 & 73.19 \(\pm\) 0.99 & 87.86 \(\pm\) 0.77 \\ ACMII-GCN++ & 88.25 \(\pm\) 0.96 & 77.12 \(\pm\) 1.58 & 89.71 \(\pm\) 0.48 \\ \hline ADR-GNN\({}_{\mathrm{S}}\) & 89.43 \(\pm\) 1.15 & 78.36 \(\pm\) 1.44 & 90.55 \(\pm\) 0.53 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Node classification accuracy (\(\%\)) on _homophilic_ datasets. \(\dagger\) denotes the maximal accuracy of several proposed variants.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Method & Squirrel & Film & Cham. & Corn. & Texas & Wisc. \\ Homophily & 0.22 & 0.22 & 0.23 & 0.30 & 0.11 & 0.21 \\ \hline GCN & 23.96 \(\pm\) 2.01 & 26.86 \(\pm\) 1.10 & 28.18 \(\pm\) 2.24 & 52.70 \(\pm\) 5.30 & 52.16 \(\pm\) 5.16 & 48.92 \(\pm\) 3.06 \\ GAT & 30.03 \(\pm\) 1.55 & 28.45 \(\pm\) 0.89 & 42.93 \(\pm\) 2.50 & 54.32 \(\pm\) 5.05 & 58.38 \(\pm\) 6.63 & 49.41 \(\pm\) 4.09 \\ GCNII & 38.47 \(\pm\) 1.58 & 32.87 \(\pm\) 1.30 & 60.61 \(\pm\) 3.04 & 74.86 \(\pm\) 3.79 & 69.46 \(\pm\) 3.83 & 74.12 \(\pm\) 3.40 \\ Geom-GCN\({}^{\dagger}\) & 38.32 \(\pm\) 0.92 & 31.63 \(\pm\) 1.15 & 60.90 \(\pm\) 2.81 & 60.81 \(\pm\) 3.67 & 67.57 \(\pm\) 2.72 & 64.12 \(\pm\) 3.66 \\ MixHop & 43.80 \(\pm\) 1.48 & 32.22 \(\pm\) 2.34 & 60.50 \(\pm\) 2.53 & 73.51 \(\pm\) 6.34 & 77.84 \(\pm\) 7.73 & 75.88 \(\pm\) 4.90 \\ GRAND & 40.05 \(\pm\) 1.50 & 35.62 \(\pm\) 1.01 & 54.67 \(\pm\) 2.54 & 82.16 \(\pm\) 7.09 & 75.68 \(\pm\) 7.25 & 79.41 \(\pm\) 3.64 \\ NSD\({}^{\dagger}\) & 56.34 \(\pm\) 1.32 & 37.79 \(\pm\) 1.15 & 68.68 \(\pm\) 1.58 & 86.49 \(\pm\) 4.71 & 85.95 \(\pm\) 5.51 & 89.41 \(\pm\) 4.74 \\ WRGAT & 48.85 \(\pm\) 0.78 & 36.53 \(\pm\) 0.77 & 65.24 \(\pm\) 0.87 & 81.62 \(\pm\) 3.90 & 83.62 \(\pm\) 5.50 & 86.98 \(\pm\) 3.78 \\ MagNet & \multicolumn{3}{c}{ –} & 84.30 \(\pm\) 7.00 & 83.30 \(\pm\) 6.10 & 85.70 \(\pm\) 3.20 \\ GGCN & 55.17 \(\pm\) 1.58 & 37.81 \(\pm\) 1.56 & 71.14 \(\pm\) 1.84 & 85.68 \(\pm\) 6.63 & 84.86 \(\pm\) 4.55 & 86.86 \(\pm\) 3.29 \\ H2GCN & 36.48 \(\pm\) 1.86 & 35.70 \(\pm\) 1.00 & 60.11 \(\pm\) 1.71 & 82.70 \(\pm\) 5.28 & 84.86 \(\pm\) 7.23 & 87.65 \(\pm\) 4.98 \\ GraphCON\({}^{\dagger}\) & \multicolumn{3}{c}{ –} & \multicolumn{3}{c}{ –} & 84.30 \(\pm\) 4.80 & 85.40 \(\pm\) 4.20 & 87.80 \(\pm\) 3.30 \\ FAGCN & 42.59 \(\pm\) 0.69 & 34.87 \(\pm\) 1.35 & 55.22 \(\pm\) 2.11 & 79.19 \(\pm\) 5.87 & 82.43 \(\pm\) 2.87 & 82.94 \(\pm\) 1.58 \\ GPRGNN & 31.61 \(\pm\) 1.24 & 34.63 \(\pm\) 1.22 & 46.58 \(\pm\) 1.71 & 80.27 \(\pm\) 8.11 & 78.38 \(\pm\) 4.36 & 82.94 \(\pm\) 4.21 \\ ACMP-GCN & \multicolumn{3}{c}{ –} & \multicolumn{3}{c}{ –} & 85.40 \(\pm\) 7.00 & 86.20 \(\pm\) 3.00 & 86.10 \(\pm\) 4.00 \\ LINKX & 61.81 \(\pm\) 1.80 & 36.10 \(\pm\) 1.55 & 68.42 \(\pm\) 1.38 & 77.84 \(\pm\) 5.81 & 74.60 \(\pm\) 8.37 & 75.49 \(\pm\) 5.72 \\ GRAFF\({}^{\dagger}\) & 59.01 \(\pm\) 1.31 & 37.11 \(\pm\) 1.08 & 71.38 \(\pm\) 1.47 & 84.05 \(\pm\) 6.10 & 88.38 \(\pm\) 4.53 & 88.83 \(\pm\) 3.29 \\ G\({}^{2\dagger}\)\({}^{\dagger}\) & 64.26 \(\pm\) 2.38 & 37.30 \(\pm\) 1.01 & 71.40 \(\pm\) 2.38 & 87.30 \(\pm\) 4.84 |
2310.02428 | EGraFFBench: Evaluation of Equivariant Graph Neural Network Force Fields
for Atomistic Simulations | Equivariant graph neural networks force fields (EGraFFs) have shown great
promise in modelling complex interactions in atomic systems by exploiting the
graphs' inherent symmetries. Recent works have led to a surge in the
development of novel architectures that incorporate equivariance-based
inductive biases alongside architectural innovations like graph transformers
and message passing to model atomic interactions. However, thorough evaluations
of deploying these EGraFFs for the downstream task of real-world atomistic
simulations are lacking. To this end, here we perform a systematic benchmarking
of 6 EGraFF algorithms (NequIP, Allegro, BOTNet, MACE, Equiformer, TorchMDNet),
with the aim of understanding their capabilities and limitations for realistic
atomistic simulations. In addition to our thorough evaluation and analysis on
eight existing datasets based on the benchmarking literature, we release two
new benchmark datasets, propose four new metrics, and three challenging tasks.
The new datasets and tasks evaluate the performance of EGraFFs on
out-of-distribution data, in terms of different crystal structures,
temperatures, and new molecules. Interestingly, evaluation of the EGraFF models
based on dynamic simulations reveals that having a lower error on energy or
force does not guarantee stable or reliable simulation or faithful replication
of the atomic structures. Moreover, we find that no model clearly outperforms
other models on all datasets and tasks. Importantly, we show that the
performance of all the models on out-of-distribution datasets is unreliable,
pointing to the need for the development of a foundation model for force fields
that can be used in real-world simulations. In summary, this work establishes a
rigorous framework for evaluating machine learning force fields in the context
of atomic simulations and points to open research challenges within this
domain. | Vaibhav Bihani, Utkarsh Pratiush, Sajid Mannan, Tao Du, Zhimin Chen, Santiago Miret, Matthieu Micoulaut, Morten M Smedskjaer, Sayan Ranu, N M Anoop Krishnan | 2023-10-03T20:49:00Z | http://arxiv.org/abs/2310.02428v2 | # EGraFFBench: Evaluation of Equivariant Graph Neural Network Force Fields for Atomistic Simulations
###### Abstract
Equivariant graph neural network force fields (EGraFFs) have shown great promise in modeling complex interactions in atomic systems by exploiting the graphs' inherent symmetries. Recent works have led to a surge in the development of novel architectures that incorporate equivariance-based inductive biases alongside architectural innovations like graph transformers and message passing to model atomic interactions. However, thorough evaluations of these EGraFFs when deployed for the downstream task of real-world atomistic simulations are lacking. To this end, here we perform a systematic benchmarking of 6 EGraFF algorithms (NequIP, Allegro, BOTNet, MACE, Equiformer, TorchMDNet) to understand their capabilities and limitations for realistic atomistic simulations. In addition to our thorough evaluation and analysis of eight existing datasets based on the benchmarking literature, we release two new benchmark datasets, propose four new metrics, and define three challenging tasks. The new datasets and tasks evaluate the performance of EGraFFs on out-of-distribution data, in terms of different crystal structures, temperatures, and new molecules. Interestingly, evaluation of the EGraFF models based on dynamic simulations reveals that having a
lower error on energy or force does not guarantee stable or reliable simulation or faithful replication of the atomic structures. Moreover, no model clearly outperforms other models on all datasets and tasks. Importantly, we show that the performance of all the models on out-of-distribution datasets is unreliable, pointing to the need to develop a foundation model for force fields that can be used in real-world simulations. In summary, this work establishes a rigorous framework for evaluating machine learning force fields in the context of atomic simulations and points to open research challenges within this domain.
## 1 Introduction
Graph neural networks (GNNs) have emerged as powerful tools for learning representations of graph-structured data, enabling breakthroughs in various domains such as social networks, mechanics, drug discovery, and natural language processing (Perozzi et al., 2014; Wu et al., 2020; Zhang and Chen, 2018; Stokes et al., 2020; Zhou et al., 2020; Miret et al., 2023; Lee et al., 2023). In the field of atomistic simulations, GNN force fields have shown significant promise in capturing complex interatomic interactions and accurately predicting the potential energy surfaces of atomic systems (Park et al., 2021; Sanchez-Gonzalez et al., 2020; Schutt et al., 2021; Qiao et al., 2021). These force fields can, in turn, be used to study the dynamics of atomic systems--that is, how the atomic systems evolve with respect to time--enabling several downstream applications such as drug discovery, protein folding, stable structures of materials, and battery materials with targeted diffusion properties.
Recent work has shown that GNN force fields can be further enhanced and made data-efficient by enforcing additional inductive biases, in terms of equivariance, leveraging the underlying symmetry of the atomic structures. This family of GNNs, hereafter referred to as equivariant graph neural network force fields (EGraFFs), has demonstrated its capability to model symmetries inherent in atomic systems, resulting in superior performance in comparison to other machine-learned force fields. This is achieved by explicitly accounting for symmetry operations, such as rotations and translations, and ensuring that the learned representations in EGraFFs are consistent under these transformations.
Traditionally, EGraFFs are trained on the forces and energies from first-principles simulation data, such as density functional theory. Recent work has shown that low training or test error does not guarantee the performance of the EGraFFs for the downstream task involving atomistic or molecular dynamics (MD) simulations (Fu et al., 2023). Specifically, EGraFFs can suffer from several major issues such as (i) unstable trajectories (the simulation suddenly explodes/becomes unstable due to high local forces), (ii) poor structure (the structure of the atomic system, including the coordination, bond angles, and bond lengths, is not captured properly), (iii) poor generalization to out-of-distribution datasets, including simulations at different temperatures or pressures of the same system, simulations of different structures having the same chemical composition--for example, crystalline (ordered) and glassy (disordered) states of the same system, or simulations of different compositions having the same chemical components--for example, Li\({}_{4}\)P\({}_{2}\)S\({}_{6}\) and Li\({}_{7}\)P\({}_{3}\)S\({}_{11}\). Note that these are realistic tasks that test whether a force field well-trained on one system can generalize to other similar systems. As such, an extensive evaluation and comparison of EGraFFs is needed, which requires standardized datasets, well-defined metrics, and comprehensive benchmarking that captures the diversity and complexity of atomic systems.
An initial effort to capture the performance of machine-learned force fields was carried out (Fu et al., 2023). In this work, the authors focused on existing datasets and some metrics, such as radial distribution functions and diffusion constants of atomic systems. However, the work did not cover the wide range of EGraFFs that have been newly proposed, many of which have shown superior performance on common tasks. Moreover, the metrics in Fu et al. (2023) were limited to stability, mean absolute error of forces, radial distribution function, and diffusivity. While useful, these metrics either do not capture the variations during the dynamic simulation (e.g., how the force or energy error evolves during simulation) or require long simulations (such as diffusion constants, which require many steps to reach the diffusive regime). Further, the work does not propose any novel tasks that can serve as a benchmark for the community developing new force fields.
With the increasing interest in EGraFFs for atomic simulations, we aim to address the gap in benchmarking by performing a rigorous evaluation of the quality of simulations obtained using modern EGraFF force fields. To this end, we evaluate 6 EGraFFs on 10 datasets, including two new challenging datasets that we contribute, and propose new metrics based on real-world simulations. By employing a diverse set of atomic systems and benchmarking metrics, we aim to objectively and rigorously assess the capabilities and limitations of EGraFFs. The main contributions of this research paper are as follows:
* **EGraFFs:** We present a benchmarking package to evaluate \(6\) EGraFFs for atomistic simulations. As a byproduct of this benchmarking study, we release a well-curated codebase of the prominent equivariant GNN force fields in the literature, enabling easier and streamlined access to relevant modeling pipelines.
* **Challenging benchmark datasets:** We present \(10\) datasets, including two new datasets, namely GeTe and LiPS20. The datasets cover a wide range of atomic systems, from small molecules to bulk systems. The datasets capture several scenarios, such as compounds with the same elements but different chemical compositions, the same composition with different crystal structures, and the same structure at different temperatures. This includes complex scenarios such as melting trajectories of crystals.
* **Challenging downstream tasks:** We propose several challenging downstream tasks that evaluate the ability of EGraFFs to model the out-of-distribution datasets described earlier.
* **Improved metrics:** We propose additional metrics that evaluate the quality of the atomistic simulations regarding the structure and dynamics with respect to the ground truth.
## 2 Preliminaries
Every material consists of atoms that interact with each other based on the different types of bonding (e.g., covalent and ionic). These bonds are approximated by force fields that model the atomic interactions. Here, we briefly describe atomistic simulations and the equivariant GNNs used for modeling these systems.
### Atomistic simulation
Consider a set of \(N\) atoms represented by a point cloud corresponding to their position vectors \((r_{1},r_{2},\ldots,r_{N})\) and their types \(\omega_{i}\). The potential energy of such a system can be written as the summation of one-body \(U(r_{i})\), two-body \(U(r_{i},r_{j})\), three-body \(U(r_{i},r_{j},r_{k})\), up to \(N\)-body interaction terms as
\[U=\sum_{i=1}^{N}U(r_{i})+\sum_{\begin{subarray}{c}i,j=1;\\ i\neq j\end{subarray}}^{N}U(r_{i},r_{j})+\sum_{\begin{subarray}{c}i,j,k=1;\\ i\neq j\neq k\end{subarray}}^{N}U(r_{i},r_{j},r_{k})+\cdots \tag{1}\]
Since the exact computation of this potential energy is challenging, it is approximated using empirical force fields that learn the effective potential energy surface as a function of two-, three-, or four-body interactions. In atomistic simulations, these force fields are used to obtain the system's energy. The forces on each particle are then obtained as \(F_{i}=-\partial U/\partial r_{i}\). The acceleration of each atom is obtained from these forces as \(F_{i}/m_{i}\), where \(m_{i}\) is the mass of each atom. Accordingly, the updated position is computed by numerically integrating the equations of motion using a symplectic integrator. These steps are repeated to study the dynamics of atomic systems.
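To make this integration loop concrete, below is a minimal numpy sketch of one velocity Verlet step driven by a generic force field; the `force_field` callable and the toy harmonic potential are illustrative assumptions, not part of any benchmarked code.

```python
import numpy as np

def velocity_verlet_step(r, v, m, force_field, dt):
    """One velocity Verlet step; r, v are (N, 3) arrays, m is the (N,) mass vector.

    force_field(r) must return the (N, 3) forces F_i = -dU/dr_i.
    """
    f = force_field(r)
    v_half = v + 0.5 * dt * f / m[:, None]           # first half-kick
    r_new = r + dt * v_half                          # drift
    f_new = force_field(r_new)                       # forces at the new positions
    v_new = v_half + 0.5 * dt * f_new / m[:, None]   # second half-kick
    return r_new, v_new

# Toy harmonic force field (U = 0.5 * |r|^2, so F = -r), purely for illustration.
harmonic = lambda r: -r

r, v, m = np.random.randn(8, 3), np.zeros((8, 3)), np.ones(8)
for _ in range(1000):                                # repeated steps = the dynamics
    r, v = velocity_verlet_step(r, v, m, harmonic, dt=1e-2)
```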
### Equivariant GNN force fields (EGraFF)
GNNs are widely used to model the force field due to the topological similarity with atomic systems. Specifically, nodes are considered atoms, the edges represent interactions/bonds, and the energy or force is predicted as the output at the node or edge levels. Equivariant GNNs employ a message passing scheme that is equivariant to rotations, that is, \(G(Rx)=RG(x)\), where \(R\) is a rotation and \(G\) is an equivariant transformation (see Fig. 1). This enables a rich representation of atomic environments equivariant to rotation. Notably, while the energy of an atomic system is invariant to rotation (that is, a molecule before and after rotation would have the same energy), the force is equivariant to rotation (that is, the forces experienced by the molecules due to the interactions also get rotated when the molecule is rotated).
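The equivariance property \(G(Rx)=RG(x)\) can be verified numerically. Below is a small sketch assuming a toy Lennard-Jones pair force field, which is rotation-equivariant by construction: the forces of a rotated configuration must equal the rotated forces of the original configuration.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pair_forces(r, eps=1.0, sigma=1.0):
    """Forces of a Lennard-Jones pair potential, U = 4*eps*((s/d)^12 - (s/d)^6)."""
    f = np.zeros_like(r)
    for i in range(len(r)):
        for j in range(len(r)):
            if i == j:
                continue
            d_vec = r[i] - r[j]
            d = np.linalg.norm(d_vec)
            du = 4 * eps * (-12 * sigma**12 / d**13 + 6 * sigma**6 / d**7)
            f[i] += -du * d_vec / d          # F = -dU/dd along the pair direction
    return f

r = 2.0 * np.random.randn(5, 3)
R = Rotation.random().as_matrix()
# Equivariance check: G(R x) == R G(x) (positions are rows, hence R.T).
assert np.allclose(pair_forces(r @ R.T), pair_forces(r) @ R.T)
```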
## 3 Models Studied
All EGraFFs employed in this work rely on equivariance in the graph structure. All models use a one-hot encoding of the atomic numbers \(Z_{i}\) as the node input and the position vector \(r_{i}\) as a node or edge input. Equivariance in these models is ensured by the use of spherical harmonics along with radial basis functions. The convolution or message-passing implementation differs from model to model. Further hyperparameter details for all models are tabulated in App. A.10.
**NequIP** [Batzner et al., 2022], based on tensor field networks, employs a series of self-interaction, convolution, and concatenation operations with the neighboring atoms. The convolution filter \(S_{m}^{l}(\vec{r}_{ij})=R(|\vec{r}_{ij}|)\times Y_{m}^{l}(\vec{r}_{ij}/|\vec{r}_{ij}|)\), represented as a product of a radial basis function \(R\) and spherical harmonics \(Y_{m}^{l}\), ensures equivariance. This was the first EGraFF proposed for atomistic simulations based on spherical harmonics.
Figure 1: Equivariant transformation \(G\) on a molecule under rotation \(R\).
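As a concrete illustration of the NequIP-style filter above, the sketch below evaluates \(S_{m}^{l}\) for \(l\le 1\) using explicit real spherical harmonics and a Gaussian radial basis; both basis choices are illustrative assumptions (NequIP itself uses trainable radial functions and supports higher \(l\)).

```python
import numpy as np

def real_sph_harm_l01(unit):
    """Real spherical harmonics for l = 0 and l = 1 (order m = -1, 0, 1)."""
    x, y, z = unit.T
    y00 = np.full_like(x, 0.5 / np.sqrt(np.pi))
    c = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([y00, c * y, c * z, c * x], axis=-1)      # (N, 4)

def conv_filter(r_ij, centers, gamma=10.0):
    """S_m^l(r_ij) = R(|r_ij|) * Y_m^l(r_ij / |r_ij|), Gaussian radial basis R."""
    d = np.linalg.norm(r_ij, axis=-1, keepdims=True)          # (N, 1) edge lengths
    radial = np.exp(-gamma * (d - centers[None, :]) ** 2)     # (N, K)
    angular = real_sph_harm_l01(r_ij / d)                     # (N, 4)
    return radial[:, :, None] * angular[:, None, :]           # (N, K, 4)

edges = np.random.randn(16, 3)                                # relative positions r_ij
S = conv_filter(edges, centers=np.linspace(0.5, 3.0, 8))
```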
**Allegro** [Musaelian et al., 2022] merges the precision of recent equivariant GNNs with stringent locality, without message passing. Its inherent local characteristic enhances its scalability to potentially more extensive systems. In contrast to other models, Allegro predicts the energy as a function of the final edge embedding rather than the node embeddings. All the pairwise energies are summed to obtain the total energy of the system. Allegro features remarkable adaptability to data outside the training distribution, consistently surpassing other force fields in this aspect, especially those employing body-ordered strategies.
**BOTNet** [Batatia et al., 2022a] is a refined body-ordered adaptation of NequIP. While maintaining the two-body interactions of NequIP in each layer, it increments the body order by one with every iteration of message passing. Unlike NequIP, BOTNet uses non-linearities in the update step.
**MACE** [Batatia et al., 2022b] offers efficient equivariant messages with high body-order computation. Due to the augmented body order of the messages, merely two message-passing iterations suffice to attain notable accuracy. This contrasts with the usual five or six iterations observed in other GNNs, rendering MACE both scalable and amenable to parallelization.
**TorchMDNet** [Tholke and Fabritiis, 2022] introduces a transformer-based GNN architecture, utilizing a modified multi-head attention mechanism. This modification expands the traditional dot-product attention to integrate edge data, which can enhance the learning of interatomic interactions.
**Equiformer** [Liao and Smidt, 2023] is a transformer-based GNN architecture, introducing a new attention mechanism named 'equivariant graph attention'. This mechanism equips the conventional attention used in transformers with equivariance.
**PaiNN** [Schutt et al., 2021] is a polarizable atom interaction neural network consisting of an equivariant message-passing architecture that takes into account the varying polarizability of atoms in different chemical environments, allowing for a more realistic representation of molecular behavior.
**DimeNet++** [Gasteiger et al., 2020] is a directional message-passing neural network where each rotationally equivariant message is associated with a direction in coordinate space.
## 4 Benchmarking Evaluation
In this section, we benchmark the above-mentioned architectures and distill the insights generated. The evaluation environment is detailed in App. A.8.
### Datasets
Since the present work focuses on evaluating EGraFFs for molecular dynamics (MD) simulations, we consider only datasets with included time dynamics--i.e., all the datasets represent the dynamics of atoms (see Fig. 2). We consider a total of 10 datasets (see Tab. 10 and App. A.1).
**MD17** is a dataset widely used for benchmarking ML force fields [Batzner et al., 2022; Liao and Smidt, 2023; Batatia et al., 2022a,b; Tholke and Fabritiis, 2022; Fu et al., 2023]. It was proposed by Chmiela et al. [2017] and constitutes a set of small organic molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin, with energy and forces generated by ab initio MD simulations (AIMD). Here, we select four molecules, namely aspirin, ethanol, naphthalene, and salicylic acid, to cover a range of chemical structures and topology. Further, zero-shot evaluation is performed on benzene. We train the models on 950 configurations and validate them on 50.
**3BPA** contains a large flexible drug-like organic molecule, 3-(benzyloxy)pyridin-2-amine (3BPA), sampled from MD trajectories at different temperatures [Kovacs et al., 2021]. It has three consecutive rotatable bonds, leading to a complex dihedral potential energy surface with many local minima, which is challenging to approximate using classical or ML force fields. The models can be trained either on 300 K snapshots or on mixed-temperature snapshots sampled from 300 K, 600 K, and 1200 K. In the following experiments, we train models on 500 configurations sampled at 300 K and test on 1669 configurations sampled at 600 K.

Figure 2: Visualisation of datasets. (a) GeTe\({}_{4}\), (b) LiPS20, (c) 3BPA, (d) Acetylacetone, (e) MD17.
**LiPS** consists of lithium, phosphorus, and sulfur (\(Li_{6.75}P_{3}S_{11}\)) and has been used in similar benchmarking analyses (Fu et al., 2023) as a representative system for MD simulations that study kinetic properties in materials. Note that LiPS is a crystalline (ordered) structure that can potentially be used in battery development. We have adopted this dataset from Batzner et al. (2022) and benchmarked all models for their force and energy errors. The training and testing datasets have 19000 and 1000 configurations, respectively.
**Acetylacetone (AcAc)** was generated by conducting MD simulations at both 300 K and 600 K using a Langevin thermostat (Batatia et al., 2022). The uniqueness of this dataset stems from the varying simulation temperatures and the range of sampled dihedral angles. While the training set restricts sampling to dihedral angles below 30\({}^{\circ}\), our models are tested on angles extending up to 180\({}^{\circ}\). The model must effectively extrapolate on the potential energy surface (PES) to remain accurate at these higher angles. This challenge presents an excellent opportunity for benchmarking GNNs. The dataset consists of 500 training configurations and 650 testing configurations.
**GeTe** is a new dataset generated by Car-Parrinello MD (CPMD) simulations (Hutter, 2012) of Ge and Te atoms, which build on a density functional theory (DFT) calculation of the interatomic forces prior to a classical integration of the equations of motion. It consists of 200 atoms, of which 40 are Ge and 160 are Te, corresponding to the composition GeTe\({}_{4}\), whose structural properties have been investigated in detail and reproduce a number of experimental data in the liquid and amorphous phases from neutron/X-ray scattering (Micoulaut et al., 2014; Guanasekera et al., 2014) and Mössbauer spectroscopy (Micoulaut et al., 2014). As GeTe belongs to the promising class of phase-change materials (Zhang et al., 2019), simulations require access to time and size scales that are challenging for classical force fields. Thus, an accurate force field is essential to understand the structural changes in GeTe during the crystalline-to-disordered phase transitions. Here, our dataset consists of 1,500 structures in training, 300 in test, and 300 in validation.
**LiPS20** is a new dataset generated from AIMD simulations of a series of systems containing Li, P, and S elements, including both the crystalline and disordered structures of elementary substances and compounds, such as Li, P, S, Li\({}_{2}\)P\({}_{2}\)S\({}_{6}\), \(\beta\)-Li\({}_{3}\)PS\({}_{4}\), \(\gamma\)-Li\({}_{3}\)PS\({}_{4}\), and \(x\)Li\({}_{2}\)S-(\(100-x\))P\({}_{2}\)S\({}_{5}\) (\(x=67,70,75,\) and \(80\)) glasses, using the CP2K package (Kühne et al., 2020). Details of dataset generation, structures, and compositions in this dataset are given in App. A.2.
### Evaluation metrics
Ideally, once trained, the forward simulations by EGraFFs should be close to the ground truth (first-principles simulations), both in terms of the atomic structure and dynamics. To this end, we propose four metrics. Note that these metrics are evaluated on a forward simulation of \(n\) steps employing the force fields, starting from an arbitrary structure, a task for which they are not explicitly trained. All the forward simulations were performed using the Atomic Simulation Environment (ASE) package (Larsen et al., 2017). The simulations were conducted in the canonical (\(NVT\)) ensemble, where the temperature and timesteps were set in accordance with the sampling conditions specified in the respective datasets. See details in App. A.3.
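For concreteness, a minimal ASE sketch of such a forward \(NVT\) run is given below. The Cu structure, the EMT calculator, and the Langevin thermostat settings are illustrative placeholders: in the actual benchmarks, the calculator would wrap a trained EGraFF, and the temperature and timestep follow each dataset's sampling conditions.

```python
from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT   # placeholder for a trained EGraFF calculator
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

atoms = bulk("Cu", cubic=True).repeat((3, 3, 3))
atoms.calc = EMT()
MaxwellBoltzmannDistribution(atoms, temperature_K=300)

dyn = Langevin(atoms, timestep=1.0 * units.fs, temperature_K=300, friction=0.002)

trajectory = []
def record():
    # Log positions, energy, and forces for the structure and dynamics metrics.
    trajectory.append((atoms.get_positions().copy(),
                       atoms.get_potential_energy(),
                       atoms.get_forces().copy()))

dyn.attach(record, interval=1)
dyn.run(1000)   # a 1000-timestep forward simulation
```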
#### 4.2.1 Structure metrics
We propose two metrics to evaluate the proximity of structures predicted by the EGraFF to the ground truth.
**Wright's factor (WF)**, \(R_{\chi}\) (Grimley et al., 1990), represents the relative difference between the radial distribution function (RDF) of the ground truth atomic structure (\(g_{\mathrm{ref}}(r)\)) and the structure obtained from the atomistic simulations employing the EGraFFs (\(g(r)\)) as
\[R_{\chi}=\left[\frac{\sum_{i=1}^{n}\left(g(r_{i})-g_{\mathrm{ref}}(r_{i})\right)^{2}}{\sum_{i=1}^{n}\left(g_{\mathrm{ref}}(r_{i})\right)^{2}}\right]^{1/2} \tag{2}\]
The RDF essentially represents the local time-averaged density of atoms at a distance \(r\) from a central atom (see App. A.4). Hence, it captures the structure simulated by a force field concisely and one-dimensionally. A force field is considered acceptable if it can provide a WF of less than 9% for bulk systems (Bauchy, 2014).
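A minimal sketch of both ingredients, an \(O(N^{2})\) RDF estimate for a cubic periodic box and the Wright factor of Eq. 2, is given below; the minimum-image convention and the binning are illustrative assumptions.

```python
import numpy as np

def radial_distribution(positions, box_length, r_max, n_bins=100):
    """Estimate g(r) for a cubic periodic box (valid for r_max <= box_length / 2)."""
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]
    diffs -= box_length * np.round(diffs / box_length)        # minimum image
    dist = np.linalg.norm(diffs, axis=-1)[~np.eye(n, dtype=bool)]
    counts, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * r**2 * (edges[1] - edges[0])        # shell volumes
    rho = n / box_length**3                                   # number density
    return r, counts / (shell * rho * n)

def wrights_factor(g, g_ref):
    """Wright's factor R_chi of Eq. 2 between predicted and reference RDFs."""
    return np.sqrt(np.sum((g - g_ref) ** 2) / np.sum(g_ref**2))
```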
**Jensen-Shannon divergence (JSD) of the radial distribution function:** The Jensen-Shannon divergence (JSD) (Cover and Thomas, 1991; Shannon, 1948) is a useful tool for quantifying the difference or similarity between two probability distributions in a way that overcomes some of the limitations of the KL divergence (Kullback and Leibler, 1951). Since the RDF is essentially a distribution of the atomic density, the JSD between the predicted RDF and the ground truth RDF can be computed as:
\[\text{JSD}(g(r)\parallel g_{ref}(r))=\frac{1}{2}\left(\text{KL}(g(r)\parallel \hat{g}(r))+\text{KL}(g_{ref}(r)\parallel\hat{g}(r))\right) \tag{3}\]
where \(\hat{g}(r)=1/2(g(r)+g_{ref}(r))\) is the mean of the predicted and ground-truth RDFs. (see App. A.4)
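The sketch below computes Eq. 3 from two binned RDFs; treating each RDF as a discrete probability distribution by normalizing it to sum to one is an assumption of this illustration.

```python
import numpy as np

def jsd_rdf(g, g_ref, eps=1e-12):
    """Jensen-Shannon divergence of Eq. 3 between two binned RDFs."""
    p = g / (g.sum() + eps)                  # normalize to probability distributions
    q = g_ref / (g_ref.sum() + eps)
    m = 0.5 * (p + q)                        # mean distribution g_hat
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * (kl(p, m) + kl(q, m))
```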
#### 4.2.2 Dynamics metrics
We monitor the energy and force error over the forward simulation trajectory to evaluate how close the predicted dynamics are to the ground truth. Specifically, we use the following metrics, namely, **energy violation error, EV(t)**, and **force violation error, FV(t)**, defined as:
\[EV(t)=\frac{(\hat{E}(t)-E(t))^{2}}{\hat{E}(t)^{2}+E(t)^{2}},\quad\text{and}\quad FV(t)=\frac{\|\hat{\mathcal{F}}(t)-\mathcal{F}(t)\|_{2}}{\|\hat{\mathcal{F}}(t)\|_{2}+\|\mathcal{F}(t)\|_{2}} \tag{4}\]
where \(\hat{E}(t)\) and \(E(t)\) are the predicted and ground truth energies, respectively, and \(\hat{\mathcal{F}}(t)\) and \(\mathcal{F}(t)\) are the predicted and ground truth forces. Note that this metric ensures that the energy and the force violation errors are bounded between 0 and 1, with 0 representing exact agreement with the ground truth and 1 representing no agreement. Further, we compute the geometric mean of \(EV(t)\) and \(FV(t)\) over the trajectory to represent the cumulative \(EV\) and \(FV\).
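These metrics translate directly into code; the sketch below implements \(EV(t)\), \(FV(t)\), and their geometric mean over a trajectory, following Eq. 4.

```python
import numpy as np

def energy_violation(e_pred, e_true):
    """EV(t) of Eq. 4 for scalar energies."""
    return (e_pred - e_true) ** 2 / (e_pred**2 + e_true**2)

def force_violation(f_pred, f_true):
    """FV(t) of Eq. 4; f_pred and f_true are (N, 3) force arrays."""
    return np.linalg.norm(f_pred - f_true) / (
        np.linalg.norm(f_pred) + np.linalg.norm(f_true))

def cumulative(per_step_values):
    """Geometric mean of the per-step violations over the whole trajectory."""
    v = np.asarray(per_step_values)
    return float(np.exp(np.mean(np.log(v + 1e-300))))
```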
### Results
#### 4.3.1 Energy and Forces
To evaluate the performance of the trained models on different datasets, we first compute the mean absolute error in predicting the energy and force (see Table 1). First, we observe that no single model consistently outperforms others for all datasets, highlighting the dataset-specific nature of the models. The TorchMDNet model has a notably lower energy error than other models for most datasets, while NequIP has the minimum force error on the majority of datasets along with a low energy error. On bulk systems such as LiPS and LiPS20, MACE and BOTNet show the lowest energy error. Interestingly, GeTe, the largest dataset in terms of the number of atoms, exhibits significant energy errors across all models, with Equiformer having the lowest energy error. Equiformer also exhibits lower force error for datasets like Acetylacetone, 3BPA, and MD17, but suffers high force error on GeTe, LiPS, and LiPS20. Overall, NequIP seems to perform well in terms of both energy and force errors for several datasets. It is also interesting to note that the models exhibiting low energy error often exhibit high force error, suggesting that the gradient of the energy is not captured well by these models. This will potentially lead to poor simulations, as the updated positions are computed directly from the forces.
#### 4.3.2 Forward simulations
To evaluate the ability of the trained models to simulate realistic structures and dynamics, we perform MD simulations using the trained models, which are compared with ground truth simulations, both employing the same initial configuration. For each model, five forward simulations of 1000 timesteps are performed on each dataset. Tables 2 and 3 show the JSD and WF, and the EV and FV, respectively, of the trained models on the datasets (see App. A.5, A.6 and A.7 for figures). Both in terms of JSD and WF, we observe that NequIP performs better on most datasets. Interestingly, even on datasets where other models have lower MAE on energy and force, NequIP performs better in capturing the atomic structure. Altogether, we observe that NequIP, followed by TorchMDNet, performs best in capturing the atomic structure for most datasets. We now evaluate the models' EV and FV during the forward simulation. Interestingly, we observe that NequIP and Allegro exhibit the least FV for most datasets, while MACE and BOTNet perform better in terms of EV. Interestingly, TorchMDNet, despite having the lowest MAE on energy for most datasets, does not exhibit low EV, indicating that having low MAE during model development does not guarantee low energy error during MD simulation.
Table 1: Energy (E) and force (F) mean absolute error in meV and meV/Å, respectively, for the trained models (NequIP, Allegro, BOTNet, MACE, Equiformer, TorchMDNet, PaiNN, DimeNet++) on different datasets. Darker colors represent the better-performing models; we use shades of green and blue for energy and force, respectively.
#### 4.3.3 Training and inference time
Table 4 shows the different models' training and inference times. MACE and TorchMDNet have the lowest per-epoch training time. The total training time is higher for the transformer models TorchMDNet and Equiformer because of the larger number of epochs required for training. Although NequIP and Allegro require more time per epoch, they train in fewer epochs. The LiPS dataset, having the largest training set of around 20,000 configurations, has the largest per-epoch training time. Since MD simulations are generally performed on CPUs, we report the inference time as a mean over five simulations of 1000 steps performed on a CPU. TorchMDNet is significantly faster on all the datasets, while Allegro and MACE show competitive performance. A visual analysis of the models on these metrics is given in App. A.7.
### Challenging tasks on EGraFF
#### 4.4.1 Generalizability to unseen structures
The first task focuses on evaluating the models on an unseen small-molecule structure. To this end, we test the models, trained on four molecules of the MD17 dataset (aspirin, ethanol, naphthalene, and salicylic acid), on the benzene molecule, an unseen molecule from the MD17 dataset. Note that the benzene molecule has a cyclic ring structure. Aspirin and salicylic acid contain one ring, naphthalene is polycyclic with two rings, while ethanol has a chain structure with no rings. Table 5 shows the EV and FV, and Table 6 shows the corresponding JSD and WF. We observe that all the models suffer very high errors in force and energy. Equiformer trained on ethanol and salicylic acid exhibits unstable simulation after the first few steps. Interestingly, models trained on non-cyclic ethanol perform better than those trained on aspirin and salicylic acid, although the latter structures are more similar to benzene. Similarly, the model trained on polycyclic naphthalene performs better than other models. Altogether, we observe that despite having the same chemical elements, models trained on one small molecule do not generalize to an unseen molecule with a different structure.
#### 4.4.2 Generalizability to higher temperatures
At higher temperatures, the sampling region in the energy landscape widens; hence, the configurations obtained at higher temperatures come from a broader distribution of structural configurations. In the 3BPA molecule, at 300 K, only the stable dihedral angle configurations are present, while at 600 K, all configurations are sampled.
Table 2: JSD and WF for six EGraFFs on all the datasets. The values are computed as the average of five forward simulations of 1000 timesteps on each dataset with different initial conditions.
Table 3: EV and FV of the trained models on all the datasets.
Here, we evaluate the model trained at the lower temperature for simulations at higher temperatures. Table 7 shows the obtained mean energy and force violations over the forward simulation trajectory, and Table 8 shows the corresponding JSD and WF. We observe that the models can reasonably capture the behavior, both structure and dynamics, at higher temperatures.
#### 4.4.3 Out-of-distribution tasks on the LiPS20 dataset
**Unseen crystalline structures:** Crystal structures are stable low-energy structures with inherent symmetries and periodicity. Predicting their energy accurately is an extremely challenging task and a cornerstone in materials discovery. Here, we train the models on liquid (disordered) structures and test them on the out-of-distribution crystalline structures to evaluate their generalization capabilities. Table 9 shows that BOTNet performs appreciably well, with almost the same energy and force error on crystal structures as the obtained training error. Both transformer models perform poorly on the LiPS20 system, in terms of both the training and testing datasets. TorchMDNet has significantly high energy and force errors, whereas Equiformer exhibits instability during the forward simulation.
**Generalizability to unseen composition:** The LiPS20 dataset consists of 20 different compositions with varying system sizes and cell geometries (see App. A.2). In Tables 9(a) and 9(b), we show the results on the test structures that are not present in the training datasets. The test dataset consists of system sizes of up to 260 atoms, while the models were trained on system sizes with \(<\) 100 atoms. This tests the models' generalization as well as inductive capability. We observe that MACE and BOTNet have the lowest mean energy and force violations and a low WF. NequIP and Allegro have significantly higher test errors.
## 5 Concluding Insights
In this work, we present EGraFFBench, a benchmarking suite for evaluating machine-learned force fields. The key insights drawn from the extensive evaluation are as follows.
Table 6: JSD and WF over the simulation trajectory of the benzene molecule using models trained on aspirin, ethanol, naphthalene, and salicylic acid.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{NequIP} & \multicolumn{2}{c}{Allegro} & \multicolumn{2}{c}{BOTNet} & \multicolumn{2}{c}{MACE} & \multicolumn{2}{c}{Equiformer} & \multicolumn{2}{c}{TorchMDNet} \\ & **T** & **I** & **T** & **I** & **T** & **I** & **T** & **I** & **T** & **I** & **T** & **I** \\ \hline
**Acetylacetone** & 0.66 & 3.18 & 0.17 & 1.94 & 0.11 & 1.90 & **0.04** & 2.66 & 0.52 & 9.98 & **0.11** & **1.79** \\
**3BPA** & 1.07 & 7.07 & 1.80 & 4.92 & 0.12 & 4.46 & **0.06** & **4.18** & 0.68 & 19.25 & 0.13 & 4.83 \\
**Aspirin** & 5.23 & 2.93 & 1.61 & 1.68 & 0.21 & 1.76 & 0.14 & 2.45 & 0.85 & 13.04 & 0.15 & **1.41** \\
**Ethanol** & 5.49 & 2.05 & 1.62 & 0.68 & 5.03 & 1.07 & 1.15 & 1.28 & 0.81 & 5.70 & **0.14** & 0.80 \\
**Naphthalene** & 5.26 & 3.75 & 2.11 & 1.07 & 13.47 & 12.27 & 4.72 & 2.28 & 2.08 & 14.67 & **0.14** & 1.37 \\
**Salicylic Acid** & 5.24 & 3.30 & 1.61 & 0.87 & 11.68 & 1.26 & 3.858 & 2.29 & 0.82 & 9.79 & **0.14** & 1.17 \\
**LiPS** & 89.91 & 35.83 & 20.89 & 13.91 & 4.82 & 10.29 & **3.61** & **6.52** & 18.51 & 46.34 & 3.18 & 6.95 \\
**LiPS20** & 2.78 & 25.51 & 0.76 & 11.42 & 0.36 & 15.187 & 0.18 & 6.75 & 1.86 & 56.59 & 0.21 & **5.12** \\
**GeTe** & 7.22 & 105.62 & **4.49** & 220.43 & 2.07 & 78.2 & 0.58 & 26.75 & 9.33 & 143.91 & **1.55** & 21.67 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Training time (T) per epoch and inference time (I) in \(minutes/epoch\) and \(minutes\), respectively, for the trained models on all the datasets. Inference time is the mean over 5 forward simulations of 1000 steps on the CPU.
Table 5: EV (E) and FV (F) on the forward simulation of the benzene molecule by the models trained on aspirin, ethanol, naphthalene, and salicylic acid.
1. **Dataset matters:** No single model performed best on all the datasets and all the metrics. Thus, the selection of the model depends highly on the nature of the atomic system, for instance, whether it is a small molecule or a bulk system.
2. **Structure is important:** Low force or energy error during model development does not guarantee faithful reproduction of the atomic structure. Conversely, models with higher energy or force error may provide reasonable structures. Accordingly, downstream evaluation of atomic structures using structural metrics is important in choosing the appropriate model.
3. **Stability during dynamics:** Low energy or force errors during model development on static configurations do not guarantee low errors during forward simulation. Thus, the energy and force violations during molecular dynamics should be evaluated separately to understand the stability of the simulation.
4. **Out-of-distribution is still challenging:** Discovery of novel materials relies on identifying hitherto unknown configurations with low energy. We observe that the models still do not perform reliably on out-of-distribution datasets, leaving an open challenge in materials modeling.
5. **Fast to train and fast on inference:** We observe that some models are fast to train, while others are fast at inference. For instance, TorchMDNet is slow to train but fast at inference. While MACE is fast both in training and inference, it does not give the best results in terms of structure or dynamics. Thus, in cases where larger simulations are required, an appropriate model that balances the training/inference time and accuracy may be chosen.
**Limitations and future work:** Our research clearly points to the need for a foundation model trained on large datasets. Further, improved training strategies that (i) ensure the learning of gradients of energies and forces, (ii) take into account the dynamics during simulations, and (iii) reproduce the structure faithfully need to be developed. This suggests moving away from the traditional training approach based only on energies and forces and rather focusing on the system's dynamics. Further, strategies combining experimentally observed structures and simulated dynamics can be devised through experiment-simulation fusion to develop reliable force fields that are faithful to both experiments and simulations. Another interesting aspect is the empirical evaluation of which particular architectural feature of a model helps in giving a superior performance for a given dataset or system (defined by the type of bonding, number of atoms, crystalline vs. disordered, etc.). Such a detailed analysis can guide the design of improved architectures while also providing rules of thumb for choosing an appropriate architecture for a given system.
Table 7: Geometric mean of energy (\(\times 10^{-5}\)) and force violation at 300 K and 600 K using the model trained at 300 K for the acetylacetone and 3BPA datasets.
Table 8: JSD and WF at 300 K and 600 K using the model trained at 300 K for acetylacetone and 3BPA.
Table 9: LiPS20 test on train structures, unseen crystalline structures, and test structures: (a) Energy and Force violation and (b) JSD and WF. |
2309.01724 | Neural network-based emulation of interstellar medium models | The interpretation of observations of atomic and molecular tracers in the
galactic and extragalactic interstellar medium (ISM) requires comparisons with
state-of-the-art astrophysical models to infer some physical conditions.
Usually, ISM models are too time-consuming for such inference procedures, as
they call for numerous model evaluations. As a result, they are often replaced
by an interpolation of a grid of precomputed models.
We propose a new general method to derive faster, lighter, and more accurate
approximations of the model from a grid of precomputed models.
These emulators are defined with artificial neural networks (ANNs) designed
and trained to address the specificities inherent in ISM models. Indeed, such
models often predict many observables (e.g., line intensities) from just a few
input physical parameters and can yield outliers due to numerical instabilities
or physical bistabilities. We propose applying five strategies to address these
characteristics: 1) an outlier removal procedure; 2) a clustering method that
yields homogeneous subsets of lines that are simpler to predict with different
ANNs; 3) a dimension reduction technique that enables to adequately size the
network architecture; 4) the physical inputs are augmented with a polynomial
transform to ease the learning of nonlinearities; and 5) a dense architecture
to ease the learning of simple relations.
We compare the proposed ANNs with standard classes of interpolation methods
to emulate the Meudon PDR code, a representative ISM numerical model.
Combinations of the proposed strategies outperform all interpolation methods by
a factor of 2 on the average error, reaching 4.5% on the Meudon PDR code. These
networks are also 1000 times faster than accurate interpolation methods and
require ten to forty times less memory.
This work will enable efficient inferences on wide-field multiline
observations of the ISM. | Pierre Palud, Lucas Einig, Franck Le Petit, Emeric Bron, Pierre Chainais, Jocelyn Chanussot, Jérôme Pety, Pierre-Antoine Thouvenin, David Languignon, Ivana Bešlić, Miriam G. Santa-Maria, Jan H. Orkisz, Léontine E. Ségal, Antoine Zakardjian, Sébastien Bardeau, Maryvonne Gerin, Javier R. Goicoechea, Pierre Gratier, Viviana V. Guzman, Annie Hughes, François Levrier, Harvey S. Liszt, Jacques Le Bourlot, Antoine Roueff, Albrecht Sievers | 2023-09-04T17:16:30Z | http://arxiv.org/abs/2309.01724v1 | # Neural network-based emulation of interstellar medium models
###### Abstract
Context: The interpretation of observations of atomic and molecular tracers in the galactic and extragalactic interstellar medium (ISM) requires comparisons with state-of-the-art astrophysical models to infer some physical conditions. Usually, ISM models are too time-consuming for such inference procedures, as they call for numerous model evaluations. As a result, they are often replaced by an interpolation of a grid of precomputed models.
Aims: We propose a new general method to derive faster, lighter, and more accurate approximations of the model from a grid of precomputed models for use in inference procedures.
Methods: These emulators are defined with artificial neural networks (ANNs) with adapted architectures and are fitted using regression strategies instead of interpolation methods. The specificities inherent in ISM models need to be addressed to design and train adequate ANNs. Indeed, such models often predict numerous observables (e.g., line intensities) from just a few input physical parameters and can yield outliers due to numerical instabilities or physical bistabilities and multistabilities. We propose applying five strategies to address these characteristics: 1) an outlier removal procedure; 2) a clustering method that yields homogeneous subsets of lines that are simpler to predict with different ANNs; 3) a dimension reduction technique that enables us to adequately size the network architecture; 4) the physical inputs are augmented with a polynomial transform to ease the learning of nonlinearities; and 5) a dense architecture to ease the learning of simpler relations between line intensities and physical parameters.
Results: We compare the proposed ANNs with four standard classes of interpolation methods, nearest-neighbor, linear, spline, and radial basis function (RBF), to emulate a representative ISM numerical model known as the Meudon PDR code. Combinations of the proposed strategies produce networks that outperform all interpolation methods in terms of accuracy by a factor of 2 in terms of the average error (reaching 4.5% on the Meudon PDR code) and a factor of 3 for the worst-case errors (33%). These networks are also 1 000 times faster than accurate interpolation methods and require ten to forty times less memory.
Conclusions: This work will enable efficient inferences on wide-field multiline observations of the ISM.
## 1 Introduction
Many aspects of star and planet formation are still only partially understood. Studies around the efficiency of star formation require a better understanding of the effects of feedback mechanisms and of gas dynamics, both in the Milky Way and other galaxies. In addition, understanding the evolution of interstellar matter from diffuse clouds to planet-forming disks requires investigations of the interstellar chemistry, for instance, examining the development of the chemical complexity or the fractionation of isotopologues. New and large hyperspectral surveys in radioastronomy stand as a game-changer for the study of these processes, as they enable observing full molecular clouds (\(\sim\) 10 pc in size) at a dense-core-scale (\(<\) 0.1 pc) spatial resolution. For instance, the "Orion B" IRAM-30m Large Program (Pety et al. 2017) covers about 250 pc\({}^{2}\) of the Orion B giant molecular cloud. It has produced a hyperspectral image of one million pixels and 200 000 spectral channels, allowing for the emission of dozens of molecules to be mapped over the whole cloud. More generally, instruments with multispectral or hyperspectral capabilities such as the IRAM-30m, ALMA, NOEMA, and the James Webb Space Telescope (JWST) are now poised to provide observation maps with hundreds or thousands of pixels in multiple emission lines.
Astrophysical codes for interstellar medium (ISM) environments are able to model observed regions and link numerous observables (e.g., line intensities) to a few local physical conditions (e.g., the gas density or thermal pressure). For instance, radiative transfer and excitation codes can be used to relate gas density, temperature, and column densities of detected species to their observable line intensities. Such codes include RADEX (van der Tak et al. 2007), RADMC-3D (Dullemond et al. 2012), LIME (Brinch & Hogerheijde 2010), MCFOST (Pinte et al. 2022), and MOLPOP-CEP (Asensio Ramos & Elitzur 2018). Some other codes adopt a more holistic approach and take multiple physical phenomena into account as well as their coupling, for instance, large chemical networks, thermal balance, and radiative transfer. Furthermore, H II region models such as Cloudy (Ferland et al. 2017) reconstruct the chemical structure of ionized regions. They evaluate line intensities from input parameters including illuminating star properties, the medium density, metallicity, and elementary abundances. Shock models such as the Paris-Durham code (Godard et al. 2019) and the
MAPPINGS code (Sutherland et al., 2018) compute the chemical structure of interstellar shocks and observables such as line intensities. Here, the main input parameters are the shock velocity, pre-shock densities, and the intensity of the magnetic field. Finally, photodissociation region (PDR) models such as the Meudon PDR code (Le Petit et al., 2006) describe the ultraviolet (UV) irradiated medium at the edge of molecular clouds in star-forming regions or diffuse interstellar clouds. They compute the thermal and chemical structure of these objects as well as observables such as the atomic and molecular line intensities. The input parameters mainly include the intensity of the incident stellar UV radiation field, the gas density or thermal pressure, the visual extinction, the metallicity and the cosmic ray ionization rate. In the following, we use the term "physical parameters" to refer to a subset of interest of the input parameters that a code uses to compute observables.
For each of these models, small changes in the physical parameters can lead to very different predicted observables. The adjustment of the physical parameters to allow the predicted observables to match the actual observations can therefore be used to estimate these physical parameters. Codes that model the observed environment more realistically lead to more meaningful estimations. However, the complexity of the physics considered in a code directly impacts its evaluation time and, hence, its applicability.
On the one hand, a simple 0D code such as RADEX can run in just a few seconds. Such fast codes can be used directly for inference in minimization-based or Bayesian Markov chain Monte Carlo (MCMC) sampling approaches (Robert & Casella, 2004, chapter 7), which require numerous iterative evaluations. For instance, RADEX and UCLCHEM (Holdship et al., 2017) have already been used as is in inference with Bayesian methods in low-dimensional cases (Makrymallis & Viti, 2014; Holdship et al., 2018; Keil et al., 2022; Behrens et al., 2022; Gratier et al., 2016; Maffucci et al., 2018).
On the other hand, a more comprehensive model such as the Meudon PDR code, which handles multiple physical processes on a 1D spatial grid, typically requires several hours of computations. These durations are prohibitively long for inferring the physical parameters on large observation maps. Such cases can be addressed by deriving a faster emulator either of the numerical model or of the full likelihood function, which includes both the numerical model and a noise model for observations. For instance, the Bayesian algorithm BAMBI (Graff et al., 2012), used for instance in Johannesson et al. (2016), relies on the SkyNet neural network (Graff et al., 2014) to emulate the full likelihood function. Emulating the full likelihood requires the assumption of a noise model and it is therefore either observation-specific or generic. For instance, SkyNet assumes a Gaussian likelihood with a fixed variance for continuous variable inference. In this work, we focus on full numerical code emulation to be able to apply the obtained emulator to any observation from any telescope, with any noise model and any set of lines.
In practice, the emulation of a numerical model is based on a grid of precomputed models that spans the relevant parameter space, generated prior to any comparison with observations. A search for the point in the grid that best reproduces the observations is sometimes performed (Sheffer et al., 2011; Sheffer & Wolfire, 2013; Joblin et al., 2018). A better and more common way of exploiting the grid is to approximate the numerical model using interpolation methods, which permits the observables for new points to be predicted with a lower evaluation time (Wu et al., 2018; Ramambason et al., 2022). In the following, a numerical code emulator defined from a grid of precomputed models using, for example, an interpolation method, is called a "surrogate model." A grid of precomputed models is called a "dataset."
Interpolation methods have become the main approach to build surrogates of comprehensive ISM models over the last years thanks to their conceptual and implementation simplicity (e.g., Wu et al., 2018; Ramambason et al., 2022). Nearest-neighbor interpolation, linear interpolation, spline interpolation, and radial basis function (RBF) interpolation are the four most commonly used families of methods. By definition, a surrogate model defined with an interpolation method passes exactly through the points of the dataset. This constraint does not guarantee a good level of accuracy with respect to new points. Besides, a surrogate model defined with an interpolation method requires the whole dataset to be stored, which can be very heavy if it contains many precomputed models or many quantities associated to each model. Finally, although they are generally faster than the original numerical codes, interpolation methods handle outputs (i.e., observables) independently. Thus, they are quite slow when the number of outputs is large.
In this work, we aim to derive accurate, fast, and light surrogate models. To do so, we relaxed the constraint of having the model pass through the points of the dataset. In this case, deriving a surrogate model thus becomes a regression problem, which benefits from many recent advances in numerical optimization developed for machine learning. Such approaches have already been applied in ISM studies. For instance, in Smirnov-Pinchukov et al. (2022), a \(k\)-nearest-neighbor regression algorithm was used to emulate a protoplanetary disk model, while in Bron et al. (2021), a random forest was trained to emulate a chemistry model. However, most often, the versatile class of artificial neural networks (ANNs) is preferred to address the complexity of comprehensive ISM models. For instance, ANN emulators of astrochemical models were derived in de Mijolla et al. (2019), Holdship et al. (2021), and Grassi et al. (2022). In addition, in Grassi et al. (2011), the authors derived a new simulation code and an associated ANN emulator. Here, we emulate a state-of-the-art ISM code, namely, the Meudon PDR code (Le Petit et al., 2006). Such a code sometimes produces outliers due to potential numerical instabilities or physical bistabilities or multistabilities. It also predicts several thousands of observables from a handful of parameters, which is unusual in the machine learning community, except for networks that generate structured data such as images, text, or time series. In observations, only a fraction of these observables are measured. However, different observations can detect very different subsets of observables. To avoid having to derive one surrogate model per subset of observables, we chose to emulate the full code at once.
We present five main strategies to derive high-performance surrogate models in these conditions. First, a robust regression framework (Rousseeuw & Leroy, 1987) was used to identify and remove outliers. Second, we applied a clustering method to derive homogeneous subsets of lines that are simpler to emulate, so that we could define and train one network per cluster. Third, we chose an adequate layer size in the network architecture thanks to a dimension reduction technique. Fourth, a polynomial transform of the input is applied to ease the learning of nonlinearities. Finally, we limited redundant computations with the recent dense architecture (Huang et al., 2017). All obtained ANNs were then compared with interpolation methods with respect to speed, memory requirements, and accuracy. The best obtained surrogate model will be exploited to perform inference of physical parameters from observations in Palud et al. (in prep). We note that ANNs come with the ability to automatically and efficiently compute derivatives such as the gradient and the Hessian matrix, which
enables using faster and more accurate inference methods. All proposed ANNs were implemented using the PyTorch Python library1. The most accurate ANN obtained in this work has been made publicly available2.
Footnote 1: The code used to build the proposed ANNs can be found at [https://github.com/einjol/sim-model-nn-approximation](https://github.com/einjol/sim-model-nn-approximation)
Footnote 2: [https://ism.obspm.fr/files/ArticleData/2023_Palu_Einig/2023_Palu_Einig/2023_Palu_Einig_data.zip](https://ism.obspm.fr/files/ArticleData/2023_Palu_Einig/2023_Palu_Einig/2023_Palu_Einig_data.zip)
The paper is structured as follows. Section 2 describes the emulation methods to be compared. Section 3 describes the Meudon PDR code and the dataset of precomputed models. It also introduces the framework used to compare surrogate models. In Section 4, we describe our design of ANNs that address the ISM numerical codes specificities. In Section 5, we compare these ANNs with classic interpolation methods with respect to speed, memory requirements, and accuracy. Section 6 provides our concluding remarks.
## 2 Interpolation and regression methods
Some notations used throughout this paper are first introduced. Four families of interpolation methods are then presented, and feedforward ANNs are succinctly described. The regression paradigm is finally described, along with some ANN specificities. For a more detailed introduction on ANNs, we refer to Shalev-Shwartz & Ben-David (2014, chapter 20)3.
Footnote 3: Accessible at [https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/index.html](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/index.html).
### 2.1 Notation
Throughout this paper, scalars are denoted with regular letters, such as indices \(j\) or vector dimensions \(D\) and \(L\). Vectors are denoted using bold lowercase letters, such as vectors of \(D\) physical parameters \(\mathbf{x}\in\mathbb{R}^{D}\), or vectors of \(L\) line intensities \(\mathbf{y}\in\mathbb{R}^{L}\). Vectors are considered as collections of scalars: \(\mathbf{y}=(y_{\ell})_{\ell=1}^{L}\) with, for example, \(y_{\ell}\) as the intensity of a line \(\ell\). Matrices are written with bold uppercase letters. The notation for functions is set accordingly, such as \(\mathbf{f(x)}=(f_{\ell}(\mathbf{x}))_{\ell=1}^{L}\), with \(f_{\ell}(\mathbf{x})\) as the function that links an input \(\mathbf{x}\) to the intensity of a line \(\ell\).
### 2.2 Interpolation methods in ISM studies
Interpolation methods yield functions that pass exactly through the points of a dataset of precomputed models. In this paper, four common families of interpolation methods are studied: nearest-neighbor interpolation, linear interpolation, spline interpolation (Bojanov et al., 1993), and RBF interpolation (Fasshauer, 2007, chapter 6). The nearest-neighbor interpolation assigns the value of the closest point in the dataset to a new point. It is fast but generally performs poorly in terms of accuracy. It is somewhat equivalent to a search for the closest point in a grid, which is common in ISM studies (e.g., Sheffer et al., 2011; Sheffer & Wolfire, 2013; Joblin et al., 2018). The piece-wise linear interpolation generally performs better, while remaining quite fast. It first triangulates the dataset, so that a new point is associated to a cell of the triangulation. It then returns a weighted average of the cell point values. It was used in some ISM studies, such as in Ramambason et al. (2022). Spline interpolation methods are based on piece-wise polynomials, yielding an even more accurate and still fast surrogate model. Finally, the RBF interpolation, used, for example, in Wu et al. (2018) to study a PDR, exploits the full dataset for each evaluation. For a new point, it returns a weighted sum of the values of all the dataset points, where the weights depend on the distance to this new point. Surrogate models defined with RBF interpolation are generally very accurate but slower than other interpolation methods.
In ISM studies, datasets of precomputed models are often structured as uniform grids (i.e., as lattices, Joblin et al., 2018; Wu et al., 2018). This structure is not necessary for RBF interpolation methods or in regression approaches, and other structures that can be obtained with, for example, Latin hypercube sampling (McKay et al., 1979), Stratified Monte Carlo (Haber, 1966), or the low-discrepancy sequences used in Quasi-Monte Carlo methods (Asmussen & Glynn, 2007, chapter 9), might yield more accurate surrogate models. However, a uniform grid structure has many advantages. First, it is often more convenient to manually inspect a dataset with such a structure. Second, it allows for the use of certain efficient interpolation methods such as splines,
Figure 1: Comparison of the most popular interpolation methods on the log Rosenbrock function with a dataset structured as a coarse regular grid. (a) Log Rosenbrock function \(\log R\) (Eq. 1), i.e., the true function that interpolation methods are to emulate. (b) Coarse uniform grid and corresponding values of the true function. All four interpolation methods are fitted using these values only. The grid is also shown on the remaining figures. (c), (e), (g), and (i) Surrogate model obtained with each interpolation algorithm. (d), (f), (h), and (j) Absolute error (Eq. 2) between the corresponding surrogate model and the true function.
for which a uniform grid structure is mandatory. Also, the regularity of the grid can be exploited to accelerate nearest-neighbor and linear interpolations. In this work, we aim to perform a fair comparison between interpolation and ANN regression methods and, thus, we restrict the structure of the dataset used for fitting to uniform grids.
Figure 1 shows a comparison of the aforementioned interpolation methods on the log Rosenbrock function:
\[\log R:\mathbf{x}\in\mathbb{R}^{2}\mapsto\log\left[1+(1-x_{1})^{2}+100\,\left( x_{2}-x_{1}^{2}\right)^{2}\right], \tag{1}\]
which is positive and admits a minimum at \((1,1)\) such that \(\log R(1,1)=0\). The interpolation methods are fitted on a \(7\times 7\) coarse regular grid on the square \([-1,1]\times[-0.5,1.5]\). They are then evaluated on a much finer \(101\times 101\) grid on the same square. The accuracy of each method is evaluated using the absolute error (AE) that quantifies how distant the prediction \(f_{\ell}(\mathbf{x})\) is from the corresponding true value \(y_{\ell}\):
\[\mathrm{AE}\left(\mathbf{f}\,;(\mathbf{x},y_{\ell})\right)=\left|f_{\ell}(\mathbf{x})-y_{\ell}\right|. \tag{2}\]
The absolute error is chosen in this example because it is more intuitive to interpret than other error functions such as the squared error (see Sect. 2.3.2) or the Cauchy error (see Sect. 4.1). In general, the results of such a comparison depend on the choice of the error function.
The fitted models and the associated errors are shown in Fig. 1. The set of absolute errors is summarized with its mean and maximum values. The figure reveals general properties of the considered interpolation methods. The nearest-neighbor interpolation provides a piece-wise constant surrogate model with high errors. The piece-wise linear and cubic spline interpolations yield better accuracies. RBF interpolation performs best on this synthetic case with respect to both the mean and max absolute error, but can be outperformed on other examples, mostly by spline interpolation. As the grid is coarse, all four methods struggle to reproduce the banana shape of the Rosenbrock function. In ISM models, such strong and fast variations can correspond to a change of physical regime and are thus of critical importance.
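For concreteness, this comparison can be reproduced with the SciPy Python package used later in this work. The sketch below is illustrative rather than the exact script behind Fig. 1: the RBF kernel is left at its SciPy default and spline interpolation is omitted. It fits each method on the coarse grid and summarizes the absolute errors (Eq. 2) on the fine grid.

```python
# Illustrative sketch of the Fig. 1 experiment with SciPy.
import numpy as np
from scipy.interpolate import griddata, RBFInterpolator

def log_rosenbrock(x):
    # Log Rosenbrock function of Eq. 1
    return np.log(1 + (1 - x[:, 0])**2 + 100 * (x[:, 1] - x[:, 0]**2)**2)

# Coarse 7x7 fitting grid and fine 101x101 evaluation grid on [-1,1]x[-0.5,1.5]
coarse = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-0.5, 1.5, 7))
fine = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-0.5, 1.5, 101))
x_fit = np.column_stack([a.ravel() for a in coarse])
x_eval = np.column_stack([a.ravel() for a in fine])
y_fit, y_true = log_rosenbrock(x_fit), log_rosenbrock(x_eval)

for method in ["nearest", "linear"]:
    ae = np.abs(griddata(x_fit, y_fit, x_eval, method=method) - y_true)
    print(method, np.nanmean(ae), np.nanmax(ae))  # mean and max absolute errors

ae = np.abs(RBFInterpolator(x_fit, y_fit)(x_eval) - y_true)
print("rbf", ae.mean(), ae.max())
```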
### 2.3 Performing regression with neural networks
By relaxing the constraint of passing exactly through the points of the dataset of precomputed models, the derivation of a surrogate model becomes a regression problem. In machine learning, a regression problem consists in estimating the function \(\widehat{\mathbf{f}}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{L}\) that best maps input vectors \(\mathbf{x}\) to output vectors \(\mathbf{y}\). This function \(\widehat{\mathbf{f}}\) is learned from a dataset of precomputed models \(\mathcal{D}=\{(\mathbf{x}_{n},\mathbf{y}_{n})\in\mathbb{R}^{D}\times\mathbb{R}^{L},\ n=1,\ldots,N\}\). In this work, the input vector \(\mathbf{x}\) corresponds to a vector of physical parameters (e.g., temperature, thermal pressure, volume density) and the output vector \(\mathbf{y}\) to observables computed by a numerical code (e.g., intensities of specific lines). To perform this estimation, functions \(\mathbf{f}\) are parametrized with vectors \(\boldsymbol{\theta}\). This parametrization restricts the search to a class of functions. In the following, functions are sometimes denoted \(\mathbf{f}_{\boldsymbol{\theta}}\) to emphasize this association. For instance, in linear regression, an affine function \(\mathbf{x}\mapsto\mathbf{W}\mathbf{x}+\mathbf{b}\) is uniquely described by \(\boldsymbol{\theta}=(\mathbf{W},\mathbf{b})\). Given the complexity of ISM numerical models, this class is too restrictive to produce accurate surrogate models, and richer classes are required.
Multiple classes of functions and the associated regression algorithms enable the emulation of complex nonlinear functions from datasets of precomputed models, such as polynomial functions, \(k\)-nearest-neighbor regression (used, e.g., in Smirnov-Pinchukov et al. 2022), Gaussian process regression (Rasmussen and Williams, 2006), decision trees and the associated ensemble methods such as random forests (used, e.g., in Bron et al. 2021) or XGBoost (Chen and Guestrin, 2016), ANNs, and others. All methods based on decision trees or nearest neighbors yield piece-wise functions, which prevents a desirable regularity property to be enforced in the surrogate model (e.g., continuity or differentiability). Besides, all the listed algorithms, except ANNs and nearest-neighbor interpolation, can only handle multiple outputs independently, which slows down predictions when the number of outputs is high. An ANN predicts all outputs at once using a sequence of intermediate computations, which is considerably faster. In addition, ANNs are known to yield very accurate surrogate models both in theory and in practice. Finally, an ANN comes with the ability to automatically and efficiently compute the derivative of its outputs with respect to its inputs, using automatic differentiation (Paszke et al., 2017). Overall, to address the complexity of ISM numerical models, exploit prior knowledge on the regularity of the function to approximate, and efficiently predict all outputs simultaneously, we adopted the rich and versatile class of ANNs. Below, we introduce this class of functions and then describe the approach used to fit an ANN to a dataset.
#### 2.3.1 Generalities on neural networks
The class of mathematical models known as ANNs are inspired by biological neural systems. The first ANN was proposed in McCulloch and Pitts (1943) to perform logical operations. Since then, multiple hardware (e.g., GPU computing) and algorithmic developments (e.g., backpropagation as in Rumelhart et al., 1986) endowed them with the capacity to learn more complex patterns and relationships among the data. They enjoy fundamental theoretical results. For different sets of assumptions on the architecture, universal approximation theorems establish that ANNs can approximate almost any continuous function with an arbitrary high level of precision (Hornik et al., 1989; Leshno et al., 1993). This class gained widespread popularity after the 2012 ImageNet Challenge, an image classification competition in which an ANN significantly outperformed rival methods (Krizhevsky et al., 2017). Nowadays, they are considered a state-of-the-art method for a variety of tasks in vector, image, sound, or text processing across multiple scientific
Figure 2: Structure of a simple feedforward ANN with \(H=2\) hidden layers and a linear layer graph, shown on the left.
or industrial fields, including astrophysics. For instance, ANNs have been successfully applied in exoplanet detection (Shallue & Vanderburg, 2018), galaxy morphology classification (Huertas-Company et al., 2015), ISM magnetohydrodynamic turbulence classification (Peek and Burkhart, 2019), and to approximate ISM numerical models (Grassi et al., 2011; de Mijolla et al., 2019; Holdship et al., 2021; Grassi et al., 2022). For a more general review of applications of machine learning in astronomy, see Fluke and Jacobs (2020).
Throughout this work, an ANN is considered as a function \(\mathbf{f}:\mathbf{x}\in\mathbb{R}^{D}\mapsto\mathbf{y}\in\mathbb{R}^{L}\), where \(D\) and \(L\) are input and output dimensions, respectively. For a numerical model, \(D\) is the number of considered physical parameters, such as thermal pressure or visual extinction, and \(L\) is the number of predicted observables, for instance, line intensities. An ANN is made of \(H+1\) intermediate functions \(\mathbf{f}^{(j)}\), called "layers". Intermediate layers \(1\leq j\leq H\) are called the "hidden layers" and the final layer is the "output layer". The \(j^{\text{th}}\) layer takes an intermediate vector \(\mathbf{x}^{(j)}\in\mathbb{R}^{i_{j}}\) as input and computes an intermediate output \(\mathbf{y}^{(j)}\in\mathbb{R}^{o_{j}}\). The intermediate dimensions \(i_{j}\) and \(o_{j}\) can be chosen arbitrarily, except for \(i_{1}=D\) and \(o_{H+1}=L\). In a feedforward ANN, connections between layers form an acyclic graph. The output of a layer \(j\) feeds one or more of the next layers \(j^{\prime}>j\), hence the notion of direction in a feedforward ANN.
Figure 2 shows the structure of a simple ANN that contains \(H=2\) hidden layers and one output layer. This ANN takes in input \(D=2\) physical parameters and predicts \(L=10\) observables. It is indeed a feedforward ANN, as its layer graph is linear, as shown on the left. The output of one of its layers \(j\) is thus the input of the next layer \(j+1\), that is to say \(\mathbf{x}^{(j+1)}=\mathbf{y}^{(j)}\) and \(i_{j+1}=o_{j}\). Alternative feedforward architectures with nonlinear layer graphs exist, such as residual networks (He et al., 2016) and dense networks (Huang et al., 2017). These architectures include skip connections between layers that bypass the activation function to preserve original input information and intermediate computations. However, linear layer graphs are widespread and remain the simplest multilayer architectures for vector classification or regression tasks. In the rest of this paper, all the ANNs exhibit such architectures, unless otherwise noted.
A hidden layer combines an affine transformation and a nonlinear scalar function \(g^{(j)}\) applied element-wise as follows:
\[\mathbf{f}^{(j)}:\mathbf{x}^{(j)}\mapsto\mathbf{y}^{(j)}=g^{(j)}(\mathbf{W}^ {(j)}\mathbf{x}^{(j)}+\mathbf{b}^{(j)}), \tag{3}\]
with \(\mathbf{W}^{(j)}\in\mathbb{R}^{o_{j}\times i_{j}}\) and \(\mathbf{b}^{(j)}\in\mathbb{R}^{o_{j}}\) the weight matrix and bias vector of the affine transformation, respectively. The nonlinear scalar function \(g^{(j)}\) is called an activation function. Common activation functions include the sigmoid, hyperbolic tangent, rectified linear units (ReLU), and multiple variants (Nwankpa et al., 2021). Choosing different activation functions \(g^{(j)}\) for the \(H\) hidden layers might lead to better performance but would require training many ANNs. A unique \(g\) is therefore generally set for all hidden layers.
The output layer transforms the outputs of one or more hidden layers into the desired prediction using an affine transformation and an output activation function. This output activation function depends on the considered problem. The sigmoid and the softmax functions are usually employed to return probabilities in binary and multiclass classification, respectively. In regression tasks, the identity function is generally used.
Overall, in a regression context, the architecture of an ANN is uniquely defined by its layer graph, an activation function \(g\), a number of hidden layers \(H\geq 0\), and a sequence of sizes representing its layers \((i_{j},o_{j})_{j=1}^{H+1}\). The corresponding class of ANNs is parametrized with a vector \(\boldsymbol{\theta}=(\mathbf{W}^{(j)},\mathbf{b}^{(j)})_{j=1}^{H+1}\) that can be very high-dimensional, depending on the number of hidden layers \(H\) and their sizes \((i_{j},o_{j})_{j=1}^{H+1}\). We note that if \(g\) is differentiable, so is the full ANN \(\mathbf{f}\). The gradient \(\mathbf{\nabla_{\mathbf{x}}}\mathbf{f}\) can then be efficiently evaluated with automatic differentiation techniques (Paszke et al., 2017).
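As an illustration, such an architecture takes a few lines with the PyTorch library used in this work. The sketch below builds the linear layer graph of Fig. 2 from a list of layer sizes; the sizes and the ELU activation (the one adopted in Sect. 4) are placeholder choices, not the final settings.

```python
# Minimal sketch of a feedforward ANN with a linear layer graph (Fig. 2).
import torch
import torch.nn as nn

def feedforward_ann(sizes):
    """sizes = [D, o_1, ..., o_H, L]; identity output activation (regression)."""
    layers = []
    for j in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[j], sizes[j + 1]), nn.ELU()]  # hidden layer, Eq. 3
    layers.append(nn.Linear(sizes[-2], sizes[-1]))               # affine output layer
    return nn.Sequential(*layers)

f = feedforward_ann([2, 8, 8, 10])      # D=2 inputs, H=2 hidden layers, L=10 outputs
x = torch.randn(5, 2, requires_grad=True)
f(x).sum().backward()                   # gradients via automatic differentiation
print(x.grad.shape)                     # torch.Size([5, 2])
```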
#### 2.3.2 Fitting a neural network to a dataset
In regression, once the class of function is set (here with an ANN architecture), the parameter \(\boldsymbol{\theta}\) is adjusted so that \(\mathbf{f}_{\boldsymbol{\theta}}\) fits the dataset \(\mathcal{D}\) of precomputed models. A loss function \(\mathcal{L}(\mathbf{f};\mathcal{D})\) quantifies the distance between predictions \(f_{\ell}(\mathbf{x}_{n})\) and the corresponding true values \(y_{n\ell}\). It is based on an error function, such as the absolute error (AE, Eq. 2) or the squared error (SE), as follows:
\[\mathrm{SE}\left(\mathbf{f}\,;(\mathbf{x},y_{\ell})\right)=\left(f_{\ell}(\mathbf{x})-y_{\ell}\right)^{2}. \tag{4}\]
The loss function summarizes the set of \(N\times L\) errors obtained on the dataset \(\mathcal{D}\). The mean is often used for computational efficiency of evaluation and differentiation, yielding, for example, the mean squared error (MSE) or the mean absolute error (MAE). Obtaining the best function \(\widehat{\mathbf{f}}\) boils down to minimizing the loss function with respect to the parameter \(\boldsymbol{\theta}\)
\[\widehat{\mathbf{f}}\in\arg\min_{\boldsymbol{\theta}}\mathcal{L}(\mathbf{f}_ {\boldsymbol{\theta}};\mathcal{D})\,. \tag{5}\]
Problems of the form of Eq. 5 rarely admit a closed-form solution. Furthermore, with ANNs, the loss function \(\mathcal{L}(\mathbf{f}_{\boldsymbol{\theta}};\mathcal{D})\) is generally not convex and contains multiple saddle points and local minima (Shalev-Shwartz and Ben-David, 2014, chapter 20). Such problems can be solved approximately using a metaheuristic (e.g., genetic algorithms, particle swarm, simulated annealing) when \(\boldsymbol{\theta}\) is low-dimensional. As ANNs typically contain at least hundreds of parameters to tune, these methods are prohibitively slow. In contrast, gradient descent methods are computationally very efficient. They rely on automatic differentiation to efficiently evaluate the gradient of the loss function \(\nabla_{\boldsymbol{\theta}}\mathcal{L}\) and on backpropagation (Rumelhart et al., 1986) to efficiently update \(\boldsymbol{\theta}\). The stochastic gradient descent algorithm (see, e.g., Shalev-Shwartz and Ben-David, 2014, chapter 14) accelerates the search by using "batches" instead of the full dataset in gradient evaluations. Preconditioned variants such as RMSProp (Tieleman and Hinton, 2012) or Adam (Kingma and Ba, 2017) exploit the local geometry of the loss function to escape from saddle points and further accelerate convergence to a good local minimum. This optimization procedure is often called "training phase" or "learning phase" with ANNs, because the network progressively learns from data as the loss function decreases.
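For illustration, such a training phase reduces to a short loop in PyTorch. In the sketch below, the batch size, learning rate, number of epochs, and random tensors are placeholders standing in for a preprocessed training set and a tuned configuration.

```python
# Schematic training loop: stochastic gradient descent with Adam and the MSE.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

x_train, y_train = torch.randn(1000, 2), torch.randn(1000, 10)  # placeholder data
f = nn.Sequential(nn.Linear(2, 8), nn.ELU(),
                  nn.Linear(8, 8), nn.ELU(), nn.Linear(8, 10))

loader = DataLoader(TensorDataset(x_train, y_train), batch_size=128, shuffle=True)
optimizer = torch.optim.Adam(f.parameters(), lr=1e-3)
mse = nn.MSELoss()

for epoch in range(100):
    for x_batch, y_batch in loader:      # "batches" of stochastic gradient descent
        optimizer.zero_grad()
        loss = mse(f(x_batch), y_batch)  # loss on the batch
        loss.backward()                  # backpropagation of the gradient
        optimizer.step()                 # Adam update of theta
```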
## 3 Approximating the Meudon PDR code
The Meudon PDR code, selected as a representative ISM model, is presented below. The datasets used in the comparison between interpolation algorithms and ANNs as well as their preprocessing are described. Finally, the considered comparison metrics are defined.
### 3.1 The Meudon PDR code: A representative ISM model
The Meudon PDR code4(Le Petit et al., 2006) is a 1D stationary code that simulates interstellar gas illuminated with a stellar radiation field. It can simulate the physics and chemistry of a wide
variety of environments, such as diffuse clouds, PDRs, nearby galaxies, damped Lyman alpha systems, circumstellar disks, and so on. It permits the investigation of effects such as the radiative feedback of a newborn star on its parent molecular cloud.
The user specifies physical conditions such as the thermal pressure in the cloud \(P_{\rm th}\), the intensity of the incoming UV radiation field \(G_{\rm UV}\) (scaling factor applied to the Mathis et al. 1983 standard field), and the depth of the slab of gas expressed in visual extinctions, \(A_{V}^{\rm tot}\). The code then iteratively solves large systems of multiphysics equations. First, the code solves the radiative transfer equation at each position on an adaptive spatial grid, considering absorption in the continuum by dust and in the lines of key atoms and molecules such as H and H\({}_{2}\) (Goicoechea & Le Bourlot 2007). Then, from the specific intensity of the radiation field, it computes the gas and grain temperatures by solving the thermal balance. The heating rate takes into account the photoelectric effect on grains as well as cosmic ray heating. The cooling rate is estimated from the nonlocal thermodynamic equilibrium (non-LTE) excitation in the energy levels of the main species by considering radiative and collisional processes as well as chemical formation and destruction. Additional processes can either heat or cool the gas, such as H\({}_{2}\) heating or gas-grain collisions. Finally, the chemistry is solved, providing the densities of about 200 species at each position. About 3 000 reactions are considered, both in the gas phase and on the grains. The chemical reaction network was built combining different sources, including data from the KIDA database5 (Wakelam et al., 2012) and the UMIST database6 (McElroy et al., 2013) as well as data from articles. For key photoreactions, we used cross sections from Heays et al. (2017) as well as from Ewine van Dishoeck's photodissociation and photoionization database7. The successive resolution of these three coupled aspects (radiative transfer, thermal balance, chemistry) is iterated until a global stationary state is reached. A full run is computationally intensive and typically lasts a few hours.
Footnote 5: [https://kida.astrochem-tools.org/](https://kida.astrochem-tools.org/)
Footnote 6: [http://udfa.ajmarkwick.net/](http://udfa.ajmarkwick.net/)
Footnote 7: [https://home.strw.leidenuniv.nl/~ewine/photo/index.html](https://home.strw.leidenuniv.nl/~ewine/photo/index.html)
The code provides density profiles of the chemical species and the temperature profiles of both the grains and the gas. It also outputs the line intensities emerging from the cloud that can be compared to observations. As of version 7 (yet to be released), a total of 5 409 line intensities are predicted from species such as H\({}_{2}\), HD, C\({}^{+}\), C, CO, \({}^{13}\)CO, C\({}^{18}\)O, \({}^{13}\)C\({}^{18}\)O, SO, HCO\({}^{+}\), OH\({}^{+}\), HCN, HNC, CH\({}^{+}\), CN or CS.
We choose to work on the Meudon PDR code because we consider it a representative element of the most complex ISM models. Multiple complex ISM codes compute numerous observables from a few physical parameters (Ferland et al., 2017; Sutherland et al., 2018). The complex physical and chemical processes taken into account in such codes make the relations between the line intensities and the input parameters highly non-linear and thus challenging to emulate. Often, ISM numerical models focusing on a subset of physical processes included in the Meudon PDR code, such as radiative transfer and excitation codes, yield simpler relations between observables and input parameters and might thus be simpler to emulate.
### 3.2 Dataset generation
In this work, we restrict ourselves to constant pressure models as they appear to better reconstruct observations for typical PDRs (Marconi et al., 1998; Lemaire et al., 1999; Allers et al., 2005; Goicoechea et al., 2016; Joblin et al., 2018; Wu et al., 2018). We approximated the code with respect to the \(D=4\) input parameters that are most relevant for inference (Wu et al., 2018; Palud et al. in prep). The three main ones are the thermal pressure, \(P_{\rm th}\), the scaling factor, \(G_{\rm UV}\), of the interstellar standard radiation field and the size of the slab of gas measured in total visual extinction, \(A_{V}^{\rm tot}\). As in Wu et al. (2018), we consider a wide variety of environments with \(P_{\rm th}\in[10^{5},\,10^{9}]\) K cm\({}^{-3}\), \(G_{\rm UV}\in[1,\,10^{5}]\) and \(A_{V}^{\rm tot}\in[1,\,40]\) mag. The Meudon PDR code computes line intensities for multiple angles \(\alpha\) between the cloud surface and the line of sight. In the Meudon PDR code, this angle \(\alpha\) can cover a \([0,60]\) deg interval. A face-on geometry corresponds to \(\alpha=0\) deg and \(\alpha=60\) deg is the closest to an edge-on geometry. To enable analyses of PDRs with known edge-on geometry such as the Orion Bar (Joblin et al., 2018), this angle is added to the considered physical parameters. Table 1 details the values of the main input parameters and of other parameters, fixed at standard values from the literature.
We generated two datasets of Meudon PDR code evaluations to assess the approximation quality of surrogate models: a training set and a test set8. The training set is used to fit all surrogate models. It contains \(N_{\rm train}=19\,208\) points, structured as a \(14\times 14\times 14\times 7\) uniform grid on \((\log_{10}P_{\rm th},\,\log_{10}G_{\rm UV},\,\log_{10}A_{V}^{\rm tot},\,\alpha)\). This uniform grid structure is chosen to simplify the outlier identification procedure (see Sect. 4.1) and to include spline interpolation in the comparison. We note that all the other considered methods, and ANNs in particular, do not require such a structure for the training dataset. Using other dataset structures generated, for instance, with Latin hypercube sampling (McKay et al., 1979), stratified Monte Carlo (Haber, 1966), or the low-discrepancy sequences used in Quasi-Monte Carlo methods (Asmussen & Glynn, 2007, chapter 9) might improve accuracy.
Table 1: Input parameters in the Meudon PDR code.

Free parameters:

| Parameter | Value | Unit | Grid |
| --- | --- | --- | --- |
| Gas thermal pressure, \(P_{\rm th}\) | \([10^{5},\,10^{9}]\) | K cm\(^{-3}\) | on log. scale |
| UV intensity, \(G_{\rm UV}\) | \([1,\,10^{5}]\) | (1) | on log. scale |
| Visual extinction, \(A_{V}^{\rm tot}\) | \([1,\,40]\) | mag | on log. scale |
| Inclination angle, \(\alpha\) | \([0,\,60]\) | deg | on lin. scale |

Fixed parameters:

| Parameter | Value | Unit | Note |
| --- | --- | --- | --- |
| Cosmic ray ionization rate | \(10^{-16}\) | s\(^{-1}\) per H\(_{2}\) | (2), (3) |
| Dust extinction curve | Galaxy | ... | (4) |
| \(R_{V}\) | 3.1 | ... | (4) |
| \(N_{H}/E(B-V)\) | \(5.8\times 10^{21}\) | cm\(^{-2}\) | (5) |
| Mass grain/mass gas | 0.01 | ... | ... |
| Grain size distribution | \(\propto a^{-3.5}\) | ... | (6) |
| Min grain radius | \(10^{-7}\) | cm | ... |
| Max grain radius | \(3\times 10^{-5}\) | cm | ... |
The Meudon PDR code predicts line intensities that are strictly positive and span multiple decades. To avoid giving more weight in the regression to lines with high intensities and disregarding the faintest ones, in the following, \(\mathbf{y}\in\mathbb{R}^{L}\) denotes the log-intensities. Similarly, \(P_{\text{th}}\), \(G_{\text{UV}}\) and \(A_{\text{V}}^{\text{tot}}\) are considered in log scale. Even in log scale, the parameters of interest cover intervals with quite different sizes. For instance, \(\log_{10}G_{\text{UV}}\in[0,5]\), while \(\log_{10}A_{\text{V}}^{\text{tot}}\in[0,1.602]\). In other words, \(A_{\text{V}}^{\text{tot}}\) covers an interval more than three times smaller than \(G_{\text{UV}}\). Both interpolation methods and ANN-based regression typically suffer from this difference. The \(D\) parameters are thus standardized to have a zero mean and a unit standard deviation. This simple transformation generally improves accuracy for both interpolation methods and ANNs (Shalev-Shwartz & Ben-David 2014, chapter 25).
The test dataset was used to assess the accuracy of surrogate models on data not used in the training step. It contains \(N_{\text{test}}=3\,192\) points. These points were generated with 456 independent random draws from a uniform distribution on the (\(\log_{10}P_{\text{th}}\), \(\log_{10}G_{\text{UV}}\), \(\log_{10}A_{\text{V}}^{\text{tot}}\)) cube and with a uniform grid of 7 values on \(\alpha\). To ensure consistent preprocessing between the two sets, both the input values \(\mathbf{x}\) and output values \(\mathbf{y}\) of the test set undergo the same transformations as for the training set. In particular, the standardization applied to its input values \(\mathbf{x}\) relies on the means and standard deviations obtained on the training set, and its output values \(\mathbf{y}\) are considered in decimal log scale.
Numerical codes may yield numerical instabilities. In its domain of validity, the Meudon PDR code produces few of them. However, the considered complex nonlinear physics can also lead to physical bistabilities or multistabilities. For example, the H\({}_{2}\) heating process can produce bistable solutions (Burton et al. 1990; Rollig & Ossenkopf-Okada 2022). In such a case, profiles computed by the code, for example, of a species density or of the gas temperature, can oscillate between the possible solutions at each position in the modelled cloud. The line integrated intensities computed from these profiles can contain errors of up to a factor of 100 and thus are highly unreliable. The code being deterministic, an input vector \(\mathbf{x}\) consistently leads to a unique output vector \(\mathbf{y}\). However, in the regions of the parameter space with such multistabilities, variations of intensities can be very chaotic and challenging for a surrogate model to reproduce. Such chaotic values thus lead to the deterioration of the accuracy of any surrogate model, interpolation, or ANN, thus they should not be used. Unfortunately, as of today there exists no simple or complete procedure to check the physical validity of a pre-computed model of the Meudon PDR code. With a first scan of the datasets, we remove a few lines that are particularly affected. The total number of considered lines is therefore reduced from 5 409 to \(L=5\,375\). This simple filter leaves other outliers in the training and test datasets. Although we observe that these outliers are rare (i.e., less than 1% expected), we do not have any specific a priori knowledge on their location nor on their exact proportion. To manually check the validity of each value is unrealistic given the sizes of the two datasets. The most informative hypothesis we can make on outliers is that if one line in a pre-computed model is identified as an outlier, then it is likely for this precomputed model to contain other outliers, especially in the lines of the same species or of isotopologues. This hypothesis is exploited in the more thorough outlier detection method using an ANN, which is presented and described in Sect. 4.1.
Overall, the Meudon PDR code version to emulate is a function \(\mathbf{f}:\mathbf{x}\in\mathbb{R}^{D}\mapsto\mathbf{y}\in\mathbb{R}^{L}\), with \(D=4\) and \(L=5\,375\). We assume the predictions of the Meudon PDR code \(\mathbf{f}\) to vary continuously with respect to the inputs, except in the case of outliers that should be disregarded. We also assume \(\mathbf{f}\) to be differentiable. In Sect. 4, we build our emulators such that they satisfy these regularity properties.
### 3.3 Comparison metrics
Interpolation methods and ANNs are compared on evaluation speed, memory requirements, and approximation accuracy. We describe here the metrics used for the comparison, regardless of how the surrogate models are defined or trained.
The evaluation speed is measured on the full set of \(L\) lines for 1 000 random points. The measurements are performed on a personal laptop equipped with a 11th Gen Intel(R) Core(TM) i7-1185G7, with eight logical cores running at 3.00 GHz. The ANNs and interpolation methods are run on CPU to obtain a meaningful comparison. Running ANNs on a GPU could further reduce their evaluation times. The implementations of interpolation methods are from the SciPy Python package, popular in ISM studies (Wu et al. 2018). Nearest-neighbor, linear, and RBF interpolation implementations allow for the evaluation of a vector function at once. Conversely, the spline interpolation implementation requires looping on the \(L\) lines, which is a slow process. To avoid an unfair comparison, the spline interpolation speeds are not evaluated.
The memory requirements are quantified with the number of parameters necessary to fully describe the surrogate model. Interpolation methods, for instance, require storing the full training set. It corresponds to \(N_{\text{train}}(D+L)\simeq 1.03\times 10^{8}\) parameters. In Python, these parameters are stored using 64-bit floating-point numbers. Storing the full grid requires about 1.65 GB.
The accuracies of surrogate models are evaluated on the test set, which contains points that they did not see during training. To quantify accuracies, we define a new metric called the error factor (EF). As line intensities are considered in log-scale, the absolute error (Eq. 2) corresponds to the ratio (in log scale) of predicted and true line intensities. The error factor is this ratio transformed back in linear scale. For a surrogate model \(\mathbf{f}\) on a given tuple \((\mathbf{x}_{\text{n}},\mathbf{y}_{\text{n}})\) and line \(\ell\), it is expressed as:
\[\mathrm{EF}\left(\mathbf{f}\,;(\mathbf{x}_{n},y_{n\ell})\right)=10^{\left|f_{\ell}(\mathbf{x}_{n})-y_{n\ell}\right|}=\max\left\{\frac{10^{f_{\ell}(\mathbf{x}_{n})}}{10^{y_{n\ell}}},\ \frac{10^{y_{n\ell}}}{10^{f_{\ell}(\mathbf{x}_{n})}}\right\}, \tag{6}\]
where both \(y_{n\ell}\) and \(f_{\ell}(\mathbf{x}_{n})\) are line log-intensities. As the absolute value ensures positivity in log scale, an error factor is always greater than or equal to 1. It can be expressed in percents using a \(100\times(\text{EF}-1)\) transformation. For readability, error factors are displayed in percents when \(\text{EF}<2\), that is, below 100%. An error factor that is not in percents is indicated by the multiplication sign. For instance, "\(\times 3\)" corresponds to \(\text{EF}=3\).
The error factor is a symmetrized relative error, as the absolute value also ensures symmetry in log scale. For small errors, namely, \(\text{EF}\simeq 1\), it is similar to the standard relative error. However, for larger errors, the error factor is more relevant in our case. A standard relative error would return 100% for a factor of two too high and 50% for a factor of two too low, while in both cases, \(\text{EF}=2\). In the worst case, a relative error of 100% corresponds to a factor of two too high or a prediction of exactly zero, while \(\text{EF}=2\) in the former case and \(\text{EF}=+\infty\) in the latter. Minimizing a standard relative error would therefore lead to an under-estimation tendency, which is not the case for the proposed error factor.
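In practice, Eq. 6 is a one-line computation. The following NumPy transcription, with made-up log-intensities, also shows the summary statistics used in the rest of this paper.

```python
# Error factor (Eq. 6) computed on log-intensities; values are made up.
import numpy as np

def error_factor(y_pred_log, y_true_log):
    return 10.0 ** np.abs(y_pred_log - y_true_log)

ef = error_factor(np.array([-3.0, -2.7]), np.array([-3.05, -3.0]))
print(ef)                                          # [1.12..., 2.0...]: 12% and "x2"
print(ef.mean(), np.percentile(ef, 99), ef.max())  # mean, 99th percentile, maximum
```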
When applied to the full test set, the error factor yields a distribution of errors. This distribution is summarized by its mean, its 99th percentile, and its maximum. The mean provides an estimation of the average error to expect. The 99th percentile and
maximum provide upper bounds on the error. The maximum is very sensitive to outliers while the 99th percentile is more robust. To illustrate this sensitivity of upper bounds, consider a fictional dataset of error factors including 0.5% of outliers at much higher values. The maximum is affected by the outliers, which induces a pessimistic bias for the corresponding error upper bound estimation. The 99th percentile is not significantly affected by the outliers, and provides a more relevant estimator of the actual upper bound of the error factor for this fictional dataset. This example shows that the choice of percentile is a trade-off based on the expected proportion of outliers. Lower percentiles (e.g., 90 or 95) underestimate the upper bound on the error factor and percentiles higher than 99.5 would, in turn, be sensitive to outliers like the max. The training and test sets generated with the Meudon PDR case are expected to contain less than 1% of outliers. The 99th percentile is therefore expected to be an estimator of the error upper bound that is robust to outliers.
In current IRAM-30m observations, the relative day-to-day calibration accuracy ranges from 3% to 10% (see, e.g., Einig et al. 2023). The absolute flux calibration accuracy for ground-based observations is more difficult to estimate but cannot be better than the relative calibration accuracy. For a surrogate model to be relevant for observation analysis and physical parameter inference, we set the constraint that satisfactory surrogate models must have a mean error factor below 10%.
## 4 Designing and training adapted ANNs
The choices of ANN architecture and training approach are now discussed. In the following, ANNs are trained with the MSE loss function. In addition, we assume the Meudon PDR code to be differentiable. To derive an ANN satisfying this constraint, we set the activation function \(g\) to the exponential linear unit (ELU, Nwankpa et al. 2021). Unless explicitly mentioned, our ANNs have \(H=3\) hidden layers of equal size. This choice may not be optimal. A hyperparameter optimization step could improve the network performance, but would require a validation dataset and the training of many networks. As the results of Sect. 5 will show, this step is not necessary to obtain satisfactory results.
ISM models such as the Meudon PDR code have specificities, namely, the presence of outliers and the unusual dimensions of the problem: very few inputs are used to predict many outputs. To address these specificities, dedicated strategies are required. They are summarized here and described in the subsections that follow: 1) we apply an outlier removal procedure; 2) we cluster lines to obtain homogeneous groups that are simpler to emulate with separate networks; 3) to select an adequate size for hidden layers, we resort to a dimension reduction technique; 4) we apply a polynomial transform to augment the input data and thus ease the learning of nonlinearities; and 5) finally, we replace the standard ANN architecture by a dense architecture that reuses intermediate computations.
### 4.1 Removing outliers from the training set
Outliers that come from either numerical instabilities or physical bistabilities or multistabilities can be found in both the training and test sets, as described in Sect. 3.2. With a loss function such as the MSE, outliers in the training set greatly deteriorate the quality of a fitted neural network. Performing regression in the presence of outliers is thus a crucial topic in machine learning. Multiple methods exist for nonlinear regression (Rousseeuw & Leroy 1987). We resort to the method proposed in Motulsky & Brown (2006). This method fits a statistical model to the training set with a strategy robust to outliers. Then, the training points with the largest errors are reviewed. Identified outliers are removed, and a new statistical model is fitted to the cleaned training set. To avoid any risk of biasing our analysis towards optimistic results, we do not remove any value from the test set.
For this first fit, we resort to an ANN designed as described in the introduction of Sect. 4. The size of hidden layers is fixed with the dimension reduction strategy that will be described in Sect. 4.2.2. We also include the polynomial transform of the input, presented in Sect. 4.3. For this specific outlier removal step, the fit is performed using the Cauchy loss function (CL):
\[\mathrm{CL}\left(\mathbf{f}\,;(\mathbf{x}_{n},y_{n\ell})\right)=\log\left[1+(f_{\ell}(\mathbf{x}_{n})-y_{n\ell})^{2}\right]. \tag{7}\]
Figure 3 shows how the squared error (Eq. 4), the absolute error (Eq. 2), the error factor (Eq. 6), and the Cauchy function penalize errors. The Cauchy function gives less weight to outliers than the other error functions, which makes it more robust to their presence.
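This robust loss is immediate to implement. A minimal PyTorch version of Eq. 7 is given below; averaging over all values of a batch is our assumption here, consistent with the mean-based losses of Sect. 2.3.2.

```python
# Cauchy loss (Eq. 7), averaged over all values of a batch.
import torch

def cauchy_loss(y_pred, y_true):
    return torch.log1p((y_pred - y_true) ** 2).mean()  # log(1 + err^2)
```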
The review of training points with high errors is performed with a manual procedure. An instability in a given model of the grid does not affect all lines, as all lines are not emitted in the same spatial regions of the model. Therefore, we only remove affected lines instead of the full model. To accelerate this procedure, similarities between lines are exploited. For instance, when one water line intensity is identified as an outlier, it is highly likely that most of the water line intensities of the corresponding precomputed model are outliers. We emphasize that outliers are associated to instabilities or multistabilities. Physically consistent intensities that are challenging to reproduce (e.g., due to fast variations in a change of regimes) are not considered as outliers and are kept in the training set. In total, 71 239 values were identified as outliers, making up 0.069% of the training set. We note that this outlier identification step is very informative, as it reveals regions of the parameter space that lead to multistabilities. However, studying these regions is beyond the scope of this paper. A binary mask matrix \(\mathbf{M}=(m_{n\ell})_{n\ell}\) is defined from this review. It makes it possible to disregard only the identified outliers instead of removing all \(L\) lines of precomputed
Figure 3: Graph of different loss functions. As with the line intensities, errors are in decimal log scale. An error of 30 thus corresponds to a factor of \(10^{30}\) between predicted and true intensities. Since some line intensities range from \(10^{-50}\) to \(10^{-2}\), this kind of very high error can occur, especially early in the training phase.
models with at least one outlier. In this binary mask, \(m_{n\ell}=1\) indicates that \(y_{n\ell}\) is an outlier and should not be taken into account, and \(m_{n\ell}=0\) indicates that \(y_{n\ell}\) is not an outlier. Elements of the training set \((\mathbf{x}_{n},\mathbf{y}_{n})\in\mathbb{R}^{D}\times\mathbb{R}^{L}\) are augmented with corresponding binary mask vectors \(\mathbf{m}_{n}\in\{0,1\}^{L}\). On the one hand, ANNs can easily take this mask into account for training by computing the loss function and its gradient on non-masked values only. In the following, a masked version of the MSE relying on the binary mask \(\mathbf{M}\) is used when this outlier removal step is taken into account.
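A minimal sketch of such a masked MSE in PyTorch, with the convention that \(m_{n\ell}=1\) flags an outlier, could read as follows.

```python
# Masked MSE: outliers (mask = 1) contribute neither to the loss nor to its gradient.
import torch

def masked_mse(y_pred, y_true, mask):
    keep = 1.0 - mask.float()           # 1 for valid values, 0 for outliers
    return (keep * (y_pred - y_true) ** 2).sum() / keep.sum()
```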
Existing implementations of interpolation methods, on the other hand, lack the flexibility to handle such a mask during the fit. As some points of the grid are removed for some lines, the spline interpolation cannot be applied on the masked training set. Nearest-neighbor, piece-wise linear, and RBF interpolation methods can be applied but would require line-by-line fits and predictions, as outliers do not occur at the same training points \(\mathbf{x}\) for all lines. Such a line-by-line manipulation would be extremely slow with a Python implementation. To present a somewhat meaningful comparison between ANNs and interpolation methods on the masked dataset, the masked values are imputed. This imputation relies on a line-by-line fit of an RBF interpolator with a linear kernel. Masked values are replaced by interpolations computed from available data points. Interpolation methods are then fitted with this imputed training set.
### 4.2 Exploiting correlations between line intensities
Line intensities computed by the Meudon PDR code come from the radiative de-excitation of energy levels. While non-local effects are accounted for in the radiative transfer, the excitation of many lines is affected to a large extent by local variables such as the gas temperature or density. As a result, high correlations between some lines are expected. Figure 4 shows the \(L\times L\) matrix of absolute Pearson correlation coefficients, with lines grouped by molecule. We indeed find some strong correlations. In particular, lines from the same species are often highly correlated, especially for water isotopologues and molecular hydrogen. However, some species produce lines that are not correlated. For instance, high energy lines of SO have a very small correlation with low energy lines, as the corresponding submatrix has a diagonal shape. Finally, some lines from different species are highly correlated, such as OH\({}^{+}\), SH\({}^{+}\), and H\({}_{2}\). Handling the \(L\) lines independently, as in the interpolation methods, ignores these correlations in the line intensities. We exploit these correlations with two strategies: a line clustering and a dimension reduction.
#### 4.2.1 Line clustering to divide and conquer
Some clusters of highly correlated lines appear in Fig. 4. These clusters are not simply related to the line carrier. We derive clusters of lines automatically from the correlation matrix using the spectral clustering algorithm (Shalev-Shwartz & Ben-David 2014, chapter 22). Spectral clustering defines clusters such that
Figure 4: Meudon PDR code correlations among the \(L=5\,375\) predicted lines from 27 chemical species, shown with the \(L\times L\) matrix of absolute Pearson correlation coefficients. A value of exactly 1 for two lines means that there exists an exact affine relationship between their log-intensities. The black squares on the diagonal group lines from a common chemical species. For readability reasons, only the names of species with more than 100 lines predicted by the Meudon PDR code are displayed.
Figure 5: Description of the four obtained line clusters. Top row: Composition of each cluster. Each bar indicates the proportion of lines of a species in a cluster. The red crosses correspond to exactly zero line. For each cluster, the three species with most lines and the corresponding number of lines are highlighted. Bottom row: Pearson correlation of the most representative line of each cluster with the three main physical parameters. The most representative line of a cluster is defined as the line with the highest average correlation with the other lines. A round marker at a vertex indicates a negative correlation.
lines from the same cluster are as similar as possible and such that lines from different clusters are as different as possible. It relies on similarity measures (such as a Pearson correlation), while most clustering algorithms are distance-based. We set the number of clusters to the value that maximizes the ratio of intra to inter-cluster mean correlations.
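For illustration, this clustering step can be sketched with the scikit-learn implementation of spectral clustering, feeding it the absolute Pearson correlation matrix of Fig. 4 as a precomputed affinity. The random log-intensities below are placeholders for the actual training set.

```python
# Sketch of the line clustering from the absolute Pearson correlation matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
y_log = rng.standard_normal((200, 50))           # placeholder (N_train, L) log-intensities
corr = np.abs(np.corrcoef(y_log, rowvar=False))  # (L, L) similarity matrix

labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(corr)
print(np.bincount(labels))                       # number of lines per cluster
```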
Figure 5 presents the four clusters we obtained. The mean intra- and inter-cluster correlations are 0.895 and 0.462, respectively, while the mean correlation among all lines is 0.73. The obtained clusters contain 3 712, 1 272, 241, and 150 lines, respectively. This imbalance between clusters comes from the imbalance between molecules. For instance, H\({}_{2}\) corresponds to 3 282 lines, that is, 61% of the lines computed by the Meudon PDR code, and these lines are all highly correlated, as shown in Fig. 4. Appendix A provides a more complete description of the content of these four clusters. With this approach, an ANN is trained for each cluster.
#### 4.2.2 Using PCA to set the size of the last hidden layer
A second and complementary approach to exploit these correlations is to hypothesize that a vector \(\mathbf{y}\) with the \(L\) line intensities can be compressed to a vector of size \(\widetilde{L}<L\) with a limited loss of information. Formally, we hypothesize that the line intensities \(\mathbf{y}\) live in a subspace of dimension \(\widetilde{L}<L\), where \(\widetilde{L}\) can be estimated using a dimension reduction algorithm. We resort to a principal component analysis (PCA) (Shalev-Shwartz & Ben-David 2014, chapter 23) on the training set, which performs compressions using only affine transformations. We obtain that the compression of all \(L=5\,375\) lines with only \(\widetilde{L}\simeq 1000\) principal components leads to a decompression with a mean error factor below 0.1% on the training set, which confirms our hypothesis.
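The corresponding selection of \(\widetilde{L}\) can be sketched with the scikit-learn PCA as follows. The linear scan over candidate dimensions is kept for clarity (a bisection would be faster), and the random matrix is a placeholder for the training log-intensities.

```python
# Smallest number of principal components whose reconstruction keeps the
# mean error factor (Eq. 6) below 0.1%, i.e., below 1.001.
import numpy as np
from sklearn.decomposition import PCA

def smallest_dimension(y_log, tol=1.001, step=5):
    for n in range(step, y_log.shape[1] + 1, step):
        pca = PCA(n_components=n).fit(y_log)
        y_rec = pca.inverse_transform(pca.transform(y_log))
        if (10.0 ** np.abs(y_rec - y_log)).mean() < tol:  # mean error factor
            return n

y_log = np.random.default_rng(0).standard_normal((300, 60))  # placeholder data
print(smallest_dimension(y_log))
```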
In an ANN for which \(D\ll L\), most parameters belong to the last hidden layer, as illustrated in Fig. 2. The size of this layer is thus critical to obtain a good accuracy. Too large a layer might lead to overfitting, while too small a layer could not capture the nonlinearities of the dataset. In regression, this last hidden layer applies an affine transformation. We therefore set its size to the estimated dimension \(\widetilde{L}\). The first two hidden layers are set to the same size; they predict the \(\widetilde{L}\) intermediate values of the last hidden layer, which are in turn used to predict all \(L\) line intensities.
For the networks trained on the four clusters of lines obtained in Sect. 4.2.1, the size of the last hidden layer is also set to the minimum number of principal components that ensures a decompression with mean error factor below 0.1% on the training set. The obtained sizes \(\widetilde{L}\) are approximately 500 (about 13% of the \(L=3\,712\) lines of the cluster), 350 (about 28%), 100 (about 41%), and 75 (50%), respectively. As the bigger clusters are the most homogeneous, they have the smallest ratio \(\widetilde{L}/L\) of subspace dimension \(\widetilde{L}\) with the total number of lines \(L\). The number of parameters necessary to describe four small specialized ANNs is thus greatly reduced in comparison to a single larger general network.
### 4.3 A polynomial transform for learning nonlinearities
The nonlinearities in the Meudon PDR code make the approximation task challenging. In an ANN, nonlinearities come from the activation function \(g\). However, learning meaningful and diversified nonlinearities is difficult with few hidden layers. Conversely, an ANN with numerous layers can lead to overfitting and requires more time for evaluations and memory for storage. Preprocessing the physical parameters \(\mathbf{x}\) with a variety of pre-defined nonlinear functions eases this learning task, while maintaining a small network architecture. We chose to apply a polynomial transform \(P_{p}\) which replaces the input vector \(\mathbf{x}\) of a dimension \(D\) with an input vector containing all monomials computed from the \(D\) entries of degree up to \(p\). For instance, for \(D=3\) and \(p=2\), \(\mathbf{x}=(x_{1},x_{2},x_{3})\) is replaced with \(P_{2}(\mathbf{x})=(x_{1},\ x_{2},\ x_{3},\ x_{1}^{2},\ x_{2}^{2},\ x_{3}^{2},\ x_{1}x_{2},\ x_{1}x_{3},\ x_{2}x_{3})\in\mathbb{R}^{9}\). For \(D=4\) and \(p=3\), we have \(P_{3}(\mathbf{x})\in\mathbb{R}^{34}\). This approach is classic in regression (Ostertagova 2012) but less common in ANNs.
It is well known in polynomial regression that a high maximum degree \(p\) can lead to overfitting (Shalev-Shwartz & Ben-David 2014, chapter 11). The analysis of the physical processes indicates that the gas structure and emission properties depend on control quantities combining \(G_{\text{UV}}\), \(n_{H}\) (or \(P_{\text{th}}\)) and \(A_{\text{V}}^{\text{tot}}\). For instance, \(G_{\text{UV}}/n_{H}\) is known to play an important role in PDRs (Sternberg et al. 2014). It is therefore important to consider monomials combining these three physical parameters. In contrast, the angle \(\alpha\) is assumed to have a simpler role in the model. To avoid overfitting, we choose the minimum value that combines the three parameters, \(p=3\), and thus consider the polynomial transforms \(P_{3}\). This transformation is applied to the input variables after the preprocessing step described in Sect. 3.2 (log scale for \(P_{\text{th}}\), \(G_{\text{UV}}\) and \(A_{\text{V}}^{\text{tot}}\), and standardization of the \(D=4\) parameters). It is implemented as an additional first fixed hidden layer. The gradient \(\nabla_{\mathbf{x}}\mathbf{f}\) can thus be efficiently evaluated with automatic differentiation methods.
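A minimal PyTorch implementation of this fixed polynomial layer, written with an explicit monomial enumeration so that it remains compatible with automatic differentiation, could read as follows. The same 34 features for \(D=4\) and \(p=3\) could also be produced during preprocessing with, for example, scikit-learn's PolynomialFeatures (with include_bias=False).

```python
# Fixed polynomial transform P_p: all monomials of degree 1 to p (Sect. 4.3).
import itertools
import torch

def polynomial_transform(x, degree=3):
    """Map a (N, D) batch to its monomial features; D=4, degree=3 gives 34."""
    monomials = []
    for p in range(1, degree + 1):
        for comb in itertools.combinations_with_replacement(range(x.shape[1]), p):
            m = torch.ones_like(x[:, 0])
            for k in comb:
                m = m * x[:, k]          # e.g., x1 * x1 * x3 for comb = (0, 0, 2)
            monomials.append(m)
    return torch.stack(monomials, dim=1)

x = torch.randn(5, 4)
print(polynomial_transform(x).shape)     # torch.Size([5, 34])
```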
### Dense networks to reuse intermediate computations
The fully connected ANN architecture considered so far (shown in Fig. 2) is widely used in the deep learning community. However, this architecture struggles to maintain input information in hidden layers, as it is transformed by nonlinear activation functions. It might therefore fail to reproduce very simple relationships. For instance, the intensity of UV-pumped lines of H\({}_{2}\) is highly correlated with \(G_{\text{UV}}\). Using \(G_{\text{UV}}\) directly to predict the intensities of such lines might thus be more relevant than passing it through nonlinear transformations. This architecture also struggles to pass gradient information all the way back to the first hidden layers. This phenomenon, known as gradient vanishing, can lead to largely suboptimal trained networks. The recent residual (He et al. 2016) and dense (Huang et al. 2017) architectures address these two issues. We used the dense architecture for our regression problem.

Figure 6: Structure of a dense ANN, with \(H=2\) hidden layers and the same sequence of layer input sizes \((i_{j})_{j=1}^{H+1}\) used to illustrate the feedforward architecture in Fig. 2.
A dense architecture is a special type of feedforward architecture where the input of a layer \(j+1\) is the concatenation of the input and output vectors of the previous layer \(j\): \(\mathbf{x}^{(j+1)}=\llbracket\mathbf{x}^{(j)},\mathbf{y}^{(j)}\rrbracket\). This architecture focuses on reusing intermediate values in hidden layers and can thus reduce the number of parameters to train.
Figure 6 illustrates this dense architecture for a simple ANN with \(H=2\) hidden layers and the same sequence of layer input sizes \((i_{j})_{j=1}^{H+1}\) used to illustrate the standard feedforward architecture in Fig. 2. The output sizes \(o_{j}\) of hidden layers are much smaller with the dense architecture, as the input of layer \(j\) concatenates the input and output of layer \(j-1\). The weight matrices \(\mathbf{W}^{(j)}\) of hidden layers are thus much smaller as well, which reduces the total number of parameters to train. By lowering the number of parameters to learn while providing the same number of inputs to the output layer, this architecture limits overfitting risks.
As the number of parameters per layer is reduced, we define ANNs with \(H=9\) hidden layers, which is six more layers than in the proposed networks with the standard architecture, yet still with a similar number of parameters. By definition, the size of the hidden layers in a dense architecture is strictly increasing, as the size \(i_{j+1}\) of a layer input is the sum \(i_{j}+o_{j}\) of the input and output sizes of the previous layer. The network is set so that the input \(i_{j+1}\) of a layer \(j+1\) is 50% larger than the input of the previous layer \(i_{j}\). With this geometric progression and the polynomial transform \(P_{3}\), the input of the output layer contains 1 296 neurons, which is 29.6% larger than the recommendation from PCA obtained in Sect. 4.2.2. However, out of these 1 296 neurons, 34 correspond to the input values, 17 to the output of the first hidden layer, 25 to the output of the second hidden layer, and so on. In other words, though the input of the output layer contains more neurons for the considered dense ANN than the PCA recommendation, a majority of these neurons are the result of fewer transformations.
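The size bookkeeping above can be reproduced directly. The sketch below (our own illustration, assuming tanh activations and Python's default rounding of the 50% growth rule) recovers the 1 296 neurons entering the output layer and runs the corresponding dense forward pass.

```python
import numpy as np

def dense_sizes(i1: int, growth: float, H: int) -> list:
    """Layer input sizes i_1..i_{H+1}; output sizes are o_j = i_{j+1} - i_j."""
    sizes = [i1]
    for _ in range(H):
        sizes.append(round(growth * sizes[-1]))
    return sizes

sizes = dense_sizes(34, 1.5, 9)     # 34 = dim of P_3(x) for D = 4
print(sizes[-1])                    # 1296, input size of the output layer

def dense_forward(x, weights, biases, g=np.tanh):
    """x^{(j+1)} = [x^{(j)}, g(W^{(j)}.T @ x^{(j)} + b^{(j)})]."""
    for W, b in zip(weights, biases):
        x = np.concatenate([x, g(W.T @ x + b)])
    return x

rng = np.random.default_rng(0)
outs = np.diff(sizes)               # hidden-layer output sizes o_j
weights = [rng.normal(size=(i, o)) for i, o in zip(sizes[:-1], outs)]
biases = [rng.normal(size=o) for o in outs]
print(dense_forward(rng.normal(size=sizes[0]), weights, biases).shape)  # (1296,)
```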
When using this dense architecture strategy with the clustering approach, four dense networks with \(H=9\) hidden layers are designed. The size of the last hidden layer is also set to a slightly larger value than the PCA recommendation. The geometric progressions of these four networks are set to 35%, 30%, 15%, and 10%, respectively.
## 5 Results on the Meudon PDR code
Here, we compare the ANNs designed and trained with the proposed strategies against interpolation methods in terms of accuracy, memory, and speed. Table 2 shows the results of the comparison. It is divided into two halves. The first presents models trained on the raw training set, while the second contains models trained on the cleaned training set (using the outlier detection procedure of Sect. 4.1). In each half, the results of interpolation methods are listed first, followed by ANNs combining one or more of the presented strategies.
### Performance analysis
The proposed ANNs outperform all interpolation methods on all aspects by a large margin: they are between 100 and 1 000 times faster than reasonably accurate interpolation methods and between 14 and 38 times lighter in terms of memory. Interpolation methods handle the prediction of \(L\) lines as \(L\) independent operations, while ANNs handle the \(L\) lines at once, which is much faster. Interpolation methods require storing the full training set, which contains 103 million 64-bit floating point numbers, that is to say, 1.65 GB. In contrast, ANNs use shared intermediate values in hidden layers to predict all lines, which limits redundant computations and effectively compresses the dataset. They can thus be fully described with between 2.7 and 7.8 million parameters, that is to say, between 43 MB and 118 MB. Finally, the proposed ANNs are roughly twice as accurate as the best interpolation methods on average and between two and three times as accurate with respect to the 99th percentile. Overall, the proposed ANNs are the only surrogate models that yield a mean error factor lower than 10% and are thus suited to a comparison with actual observations.
### Removing outliers is crucial
When the outlier removal step is not applied, the distribution of errors is highly skewed for all surrogate models. For interpolation methods, the 99th percentile reveals that around 99% of the predictions correspond to errors lower than a factor of two. For the best ANN (R+P), it reveals that 99% of the errors are lower than 49.7%. However, for all methods, the maximum error is at least 80 times higher than the 99th percentile and reaches unacceptable values. An inspection of the highest errors reveals that they are close to training points with outliers, which indicates that these outliers significantly deteriorate the fit.
| Training set | Method | Mean EF | 99th perc. EF | Max EF | Memory (MB) | Speed (ms) |
| --- | --- | --- | --- | --- | --- | --- |
| Raw | nearest neighbor | \(\times 13.1\) | \(\times 11.3\) | \(\times\)3e5 | 1 650 | 62 |
| Raw | linear | 15.7 | \(\times 2.3\) | \(\times 143\) | 1 650 | 1.5e3 |
| Raw | spline (linear) | 15.7 | \(\times 2.3\) | \(\times 144\) | 1 650 | 1.5e3 |
| Raw | spline (cubic) | 11.2 | \(\times 2.2\) | \(\times 122\) | 1 650 | 1.5e3 |
| Raw | spline (quintic) | 19.1 | \(\times 2.9\) | \(\times 304\) | 1 650 | 1.5e3 |
| Raw | RBF (linear) | 10.2 | 90.8 | \(\times 39\) | 1 650 | 1.1e4 |
| Raw | RBF (cubic) | 10.4 | \(\times 2.1\) | \(\times 112\) | 1 650 | 1.1e4 |
| Raw | RBF (quintic) | 10.9 | \(\times 2.1\) | \(\times 118\) | 1 650 | 1.1e4 |
| Raw | ANN, R | 7.3 | 64.8 | \(\times 81\) | **118** | **12** |
| Raw | ANN, R+P | **6.2** | **49.7** | \(\times 84\) | **118** | 13 |
| Cleaned | nearest neighbor | \(\times 13.1\) | \(\times 11.6\) | \(\times\)3e5 | 1 650 | 62 |
| Cleaned | linear | 15.9 | \(\times 2.4\) | \(\times 143\) | 1 650 | 1.5e3 |
| Cleaned | spline (linear) | 15.9 | \(\times 2.4\) | \(\times 144\) | 1 650 | 1.5e3 |
| Cleaned | spline (cubic) | 11.1 | \(\times 2.2\) | \(\times 120\) | 1 650 | 1.5e3 |
| Cleaned | spline (quintic) | 20.0 | \(\times 2.7\) | \(\times 285\) | 1 650 | 1.5e3 |
| Cleaned | RBF (linear) | 10.3 | 97.3 | \(\times 97.5\) | 1 650 | 1.1e4 |
| Cleaned | RBF (cubic) | 10.5 | \(\times 2.0\) | \(\times 106\) | 1 650 | 1.1e4 |
| Cleaned | RBF (quintic) | 10.9 | \(\times 2.0\) | \(\times 114\) | 1 650 | 1.1e4 |
| Cleaned | ANN, R | 5.1 | 42.0 | **\(\times 32.8\)** | 118 | 12 |
| Cleaned | ANN, R+P | 5.5 | 42.3 | \(\times 41\) | 118 | 13 |
| Cleaned | ANN, R+P+C | 4.9 | 44.5 | \(\times 44\) | 51 | 14 |
| Cleaned | ANN, R+P+D | **4.5** | **33.1** | \(\times 33.8\) | 125 | **11** |
| Cleaned | ANN, R+P+C+D | 4.8 | 37.9 | \(\times 37.6\) | **43** | 14 |

Table 2: Performance of interpolation methods and of the proposed ANNs, with and without the removal of outliers from the training set.

**Notes.** Evaluation speeds are measured on the full set of \(L\) lines for 1 000 random points. The measurements are performed on a personal laptop equipped with eight logical cores running at 3.00 GHz. Error factors (EF) are evaluated on the test set; values below a factor of two are given in percent, larger values as multiplicative factors (\(\times\)). For neural network architectures, C stands for a line clustering and specialist networks, D for a dense architecture, P for a polynomial transform, and R for the design of the last hidden layer using PCA.
After removing outliers from the training set, interpolation methods show no improvement in average accuracy. Only slight improvements can be observed on the 99th percentile and maximum EF, especially for the RBF interpolation methods. Replacing outliers with interpolated values is therefore not relevant to derive surrogate models based on interpolation methods in this case. In contrast, the two ANNs trained both with and without the outlier removal step (R and R+P) show substantial improvements. With outlier removal, the mean EF decreased from 7.3% and 6.2% to 5.1% and 5.5%, respectively. Similarly, the 99th percentile dropped from 64.8% and 49.7% to 42% and 42.3%. Finally, the maximum error is reduced by more than a factor of two. These important improvements demonstrate the value of filtering outliers from the training set before training ANNs.
### Combining polynomial transform with dense network or line clustering
The polynomial transform improves the accuracy in the presence of outliers in the training set, but degrades it once the outlier removal step is applied. It provides flexibility to learn the abrupt nonlinearities caused by outliers. However, with outliers removed, the function to learn is smoother. The EF on the masked training set is 1.44% without the polynomial transform (R) and 0.77% with it (R+P), while the EF on the test set is lower without the polynomial transform. This improvement on the training set does not lead to an improvement on the test set, suggesting an overfit. The polynomial transform therefore requires additional strategies to generalize better to data unseen during the training phase.
Both the clustering step and the dense architecture, used with the polynomial transform, lead to better accuracy. The surrogate model that exploits the line clustering but not the dense architecture (R+P+C) improves the mean accuracy by 0.2 percentage points, while requiring 57% fewer parameters than the first two networks (R and R+P). A potential cause of the average error factor improvement is the separate training of each specialized ANN. Since H\({}_{2}\) lines represent 61% of all \(L\) lines, they dominate the loss function and are thus learned in priority. Separating them from the other clusters might have improved performance on those other clusters.
The surrogate model based on a single network with dense architecture (R+P+D) is the most accurate on average and provides the lowest error upper bound for the robust 99th percentile estimator. Even with more trainable parameters than the first two networks, it does not overfit. It is also the fastest model as reusing intermediate values reduces the number of computations.
Finally, combining both line clustering and dense architectures (R+P+C+D) yields the lowest memory usage with only 2.7 million parameters, that is 43.2 MB, which is 38 times lighter than for interpolation methods. It also provides very good accuracy, both on the average and for the upper bounds.
Overall, a dense architecture and the line clustering effectively limit overfitting and thus perform better on data unseen during the training phase. The line clustering leads to the lightest models regarding memory requirements, while the dense architecture leads to the most accurate models.
## 6 Conclusion
The interpretation of observations of atomic and molecular tracers in the galactic and extragalactic ISM requires comparison with state-of-the-art astrophysical models to infer physical conditions. Such an inference procedure requires numerous evaluations of the numerical model, which is particularly the case for Bayesian approaches. Inference on large observation maps, which are becoming more and more common, further relies on many evaluations. The ISM models are often too slow to perform such inference and are generally approximated using interpolation methods run on grids of precomputed models. These interpolation approaches induce errors that are seldom quantified in the literature. Besides, these methods can have high evaluation time and memory costs.
In this work, the general problem of deriving a fast, accurate, and memory-light surrogate model for a time-consuming ISM numerical model has been addressed. The proposed approach has been assessed in the case of the Meudon PDR code, a state-of-the-art ISM code. Four common families of interpolation methods (nearest-neighbor, linear, spline, and RBF) are compared to specifically designed ANNs. We find that ANNs outperform all interpolation methods by a large margin in terms of accuracy, speed, and memory usage.
Attaining this performance level for ISM models requires addressing their specificities. First, ISM models usually predict many observables (e.g., line intensities of many species) from few parameters (e.g., gas density or temperature), which is unusual in ANN applications, except for ANNs that generate structured data such as images, text, or time series. Second, due to numerical instabilities or physical bistabilities or multistabilities, such models sometimes produce outliers that harm the training process. We proposed and combined five strategies to design and train adapted ANNs:
* To identify outliers, we first train an ANN with a loss function robust to large errors. Training points corresponding to large errors are manually reviewed. Identified outliers are removed from the training set.
* Lines are clustered into homogeneous subsets that are simpler to emulate: for each cluster one ANN is defined and trained.
* A dimension reduction technique (PCA) is used to determine an adequate size of hidden layers.
* A polynomial transform of the input physical parameters provides precomputed nonlinearities to the network, which permits the learning of nonlinearities with a limited number of hidden layers.
* A dense architecture reuses intermediate computations and thus limits redundant operations. Using such an architecture instead of the standard feedforward ANN architecture improves speed and avoids overfitting.
With the proposed strategies, ANNs achieve a mean error factor of 4.5%, while the best interpolation method, RBF, attains 10.2%. The upper bound on the errors, quantified using their 99th percentile, reaches 33.1% for our ANNs compared to 97% for the RBF interpolation. Besides, our ANNs are 1 000 times faster than RBF and more than ten times lighter in terms of memory. The most accurate model presented in this article (denoted R+P+D in Table 2) is publicly available9. Footnote 9: [https://ism.obspm.fr/files/ArticleData/2023_Palud_Einig/2023_Palud_Einig_trained_ANN.zip](https://ism.obspm.fr/files/ArticleData/2023_Palud_Einig/2023_Palud_Einig_trained_ANN.zip)
Although this paper focuses on an application to the Meudon PDR code, the proposed strategies are general enough to be applicable to many other ISM models. The fast and accurate ANN emulators obtained in this article enable fully Bayesian inference on observation maps using the Meudon PDR code, a physically comprehensive model. Such an approach will be presented in an upcoming paper (Palud et al., in prep.). It will also permit efficient analyses of the large observation maps produced by today's instruments (e.g., JWST, ALMA), such as the ORION-B dataset observed with the IRAM 30m telescope (Pety et al. 2017).
###### Acknowledgements.
This work was partly supported by the CNRS through the 80Prime project OrionStar, a MITI interdisciplinary program, by the ANR project "Chaire IA Sherlock" ANR-20-CHIA-0031-01 held by P. Chainais, and by national support within the _programme d'investissements d'avenir_ ANR-16-IDEX-0004 ULNE and Région HDF. It also received support from the French Agence Nationale de la Recherche through the DAOISM grant ANR-21-CE31-0010, and from the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP, co-funded by CEA and CNES. JRG and MOST thank the Spanish MICINN for funding support under grant PID2019-106110G-100. Part of the research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). D.C.L. was supported by USRA through a grant for SOFIA Program 09-0015.
|
2308.15478 | An Adaptive Tangent Feature Perspective of Neural Networks | In order to better understand feature learning in neural networks, we propose
a framework for understanding linear models in tangent feature space where the
features are allowed to be transformed during training. We consider linear
transformations of features, resulting in a joint optimization over parameters
and transformations with a bilinear interpolation constraint. We show that this
optimization problem has an equivalent linearly constrained optimization with
structured regularization that encourages approximately low rank solutions.
Specializing to neural network structure, we gain insights into how the
features and thus the kernel function change, providing additional nuance to
the phenomenon of kernel alignment when the target function is poorly
represented using tangent features. We verify our theoretical observations in
the kernel alignment of real neural networks. | Daniel LeJeune, Sina Alemohammad | 2023-08-29T17:57:20Z | http://arxiv.org/abs/2308.15478v3 | # An Adaptive Tangent Feature Perspective of Neural Networks
###### Abstract
In order to better understand feature learning in neural networks, we propose a framework for understanding linear models in tangent feature space where the features are allowed to be transformed during training. We consider linear transformations of features, resulting in a joint optimization over parameters and transformations with a bilinear interpolation constraint. We show that this optimization problem has an equivalent linearly constrained optimization with structured regularization that encourages approximately low rank solutions. Specializing to neural network structure, we gain insights into how the features and thus the kernel function change, providing additional nuance to the phenomenon of kernel alignment when the target function is poorly represented using tangent features. In addition to verifying our theoretical observations in real neural networks on a simple regression problem, we empirically show that an adaptive feature implementation of tangent feature classification has an order of magnitude lower sample complexity than the fixed tangent feature model on MNIST and CIFAR-10.
## 1 Introduction
Tremendous research effort has been expended in developing and understanding neural networks [1, 2, 3, 4]. In terms of development, this effort has been met with commensurate tremendous practical success, dominating the state of the art [5, 6, 7] and establishing a new normal of replacing intricately engineered solutions with the conceptually simpler approach of learning from data [8].
Paradoxically, this simple principle of learning has not made the theoretical understanding of the success of neural networks easier [3, 9]. Neural networks stand in stark contrast to the engineered solutions that they have replaced--instead of leveraging vast amounts of human expertise about a particular problem, for which theoretical guarantees can be specifically tailored, neural networks appear to be universal learning machines, adapting easily to a wide range of tasks given enough data. The tall task of the theoretician is to prove that neural networks efficiently learn essentially any function of interest, while being trained through an opaque non-convex learning process on data with often unknown and mathematically uncharacterizable structure.
One promising theoretical direction for understanding neural networks has been through linearizing networks using the neural tangent kernel (NTK) framework [10, 11]. Given an appropriate initialization, infinitely wide neural networks can be shown to have constant gradients throughout training, such that the function agrees with its first-order Taylor approximation and is therefore a linear model, where the (tangent) features are the gradient of the network at initialization. The NTK framework reduces the complexity of neural networks down to linear (kernel) regression, which is significantly better theoretically understood [9, 12, 13, 14]. However, real neural networks still outperform their NTK approximants [15], and the fundamental assumption of the NTK--that the gradients do not change during training--is typically not satisfied in theory or practice [16, 17].
In this work, we take a step towards understanding the effects of the change of the gradients and how these changes allow the features to adapt to data. Rather than considering neural networks directly, we consider a relaxation of the problem in which the tangent features are allowed to adapt to the
problem alongside the regression coefficients. By studying this underlying fundamental problem, we shed light on effects observed in real neural networks. Our specific contributions are as follows.
* We introduce a framework of linear feature adaptivity enabling two complementary views of the same optimization problem: as regression using adaptive features, and equivalently as structured regression using fixed features.
* We show how restricting the adaptivity imposes specific regularizing structure on the solution, resulting in a group approximate low rank penalty on a neural network based model.
* We consider the resulting adapted kernel and provide new insights on the phenomenon of NTK alignment [18, 19, 20, 21], specifically when the target function is poorly represented using the initial tangent features.
* We empirically evaluate our adaptive feature model in neural networks on MNIST and CIFAR-10, which provides an order of magnitude better sample complexity compared to fixed tangent features.
Our work is quite far from an exact characterization of real neural networks, and our empirical results indicate that adaptivity as we propose cannot fully explain the performance of real neural networks. Nevertheless, our framework extends the class of phenomena that can be explained by linear models built on tangent features, and so we believe it to be a valuable contribution towards understanding neural networks.
**Related work.** The neural tangent kernel has been proposed and studied extensively in the fixed tangent feature regime [10, 11, 15, 22]. Recent works have studied the alignment of the tangent kernel with the target function: Baratin et al. [18] empirically demonstrate tangent kernel alignment, and Atanasov et al. [19] show that linear network tangent kernels align with the target function early during training. Seleznova and Kutyniok [20] demonstrate alignment under certain initialization scalings, and characterize kernel alignment and neural collapse under a block structure assumption on the NTK [21]. In contrast, by characterizing the adaptive feature learning problem instead of neural networks specifically, we are able to gain more nuanced insights about kernel alignment.
The idea of simultaneously learning features and fitting regression models has appeared in the literature in tuning kernel parameters [23], multiple kernel learning [24], and automatic relevance determination (ARD) [25], which has been shown to correspond to a sparsifying iteratively reweighted \(\ell_{1}\) optimization [26]. Other areas in which joint factorized optimization results in structured models include matrix factorization [27] and adaptive dropout [28], which are equivalent to iteratively reweighted \(\ell_{2}\) optimization. Our work provides generic results on optimization of matrix products with rotationally invariant penalties, which complements the existing literature.
All proofs can be found in Appendix A.
## 2 An adaptive feature framework
We first formalize our adaptive feature framework, which enables us to jointly consider feature learning and regression. Our formulation is motivated by highly complex overparameterized models such as neural networks and specializes to the interpolation regime in which the training dataset can be perfectly fit by the model.
**Notation.** Given a vector \(\mathbf{v}_{\mathbf{x}}\in\mathbb{R}^{P}\) parameterized by another vector \(\mathbf{x}\in\mathbb{R}^{Q}\), we use the "denominator" layout of the derivative such that \(\nabla_{\mathbf{x}}\mathbf{v}_{\mathbf{x}}=\partial\mathbf{v}_{\mathbf{x}}/ \partial\mathbf{x}\in\mathbb{R}^{P\times Q}\), and given a scalar \(v_{\mathbf{X}}\in\mathbb{R}\) parameterized by a matrix \(\mathbf{X}\in\mathbb{R}^{P\times Q}\), we orient \(\nabla_{\mathbf{X}}v_{\mathbf{X}}\in\mathbb{R}^{Q\times P}\). The vectorization of a matrix \(\mathbf{X}=[\mathbf{x}_{1}\quad\ldots\quad\mathbf{x}_{Q}]\in\mathbb{R}^{P \times Q}\) is the stacking of columns such that \(\mathrm{vec}(\mathbf{X})^{\mathsf{T}}=\begin{bmatrix}\mathbf{x}_{1}^{\mathsf{ T}}&\ldots&\mathbf{x}_{Q}^{\mathsf{T}}\end{bmatrix}\in\mathbb{R}^{PQ}\). For \(\mathbf{x}\in\mathbb{R}^{\min\{P,Q\}}\), we denote by \(\mathrm{diag}_{P\times Q}(\mathbf{x})\in\mathbb{R}^{P\times Q}\) the (possibly non-square) matrix with \(\mathbf{x}\) along the main diagonal, and we omit the subscript \(P\times Q\) when \(P=Q\). We denote the set \(\{1,\ldots,N\}\) by \([N]\). Given a vector \(\mathbf{x}\in\mathbb{R}^{P}\), we denote by \([\mathbf{x}]_{j}\) for \(j\in[P]\) the \(j\)-th coordinate of \(\mathbf{x}\). Given a square
matrix \(\mathbf{X}\in\mathbb{R}^{P\times P}\), we let \(\lambda_{i}(\mathbf{X})\) denote its \(i\)-th largest eigenvalue. The characteristic function \(\chi_{\mathcal{A}}\) of a set \(\mathcal{A}\) satisfies \(\chi_{\mathcal{A}}(\mathbf{x})=0\) for \(\mathbf{x}\in\mathcal{A}\) and \(\chi_{\mathcal{A}}(\mathbf{x})=\infty\) for \(\mathbf{x}\notin\mathcal{A}\).
### First-order expansion with average gradients
Consider a differentiably parameterized function \(f_{\mathbf{\theta}}\colon\mathbb{R}^{D}\to\mathbb{R}^{C}\) with parameters \(\mathbf{\theta}\in\mathbb{R}^{P}\), such as a neural network. Our goal is to fit this function to data \((\mathbf{x}_{1},\mathbf{y}_{1}),\ldots,(\mathbf{x}_{N},\mathbf{y}_{N})\in \mathbb{R}^{D\times C}\) by solving
\[\operatorname*{minimize}_{\mathbf{\theta}\in\mathbb{R}^{P}}\ \sum_{i=1}^{N}\ell( \mathbf{y}_{i},f_{\mathbf{\theta}}(\mathbf{x}_{i})),\]
where \(\ell\colon\mathbb{R}^{C}\times\mathbb{R}^{C}\to\mathbb{R}\) is some loss function. In order to characterize the solution, we need to understand how \(f_{\mathbf{\theta}}\) changes with \(\mathbf{\theta}\). One way that we can understand \(f_{\mathbf{\theta}}\) is through the fundamental theorem of calculus for line integrals. Letting \(\mathbf{\theta}_{0}\) be some reference parameters, such as random initialization or pretrained parameters,
\[f_{\mathbf{\theta}}(\mathbf{x})-f_{\mathbf{\theta}_{0}}(\mathbf{x})=\int_{\mathbf{\theta} _{0}}^{\mathbf{\theta}}\nabla_{\mathbf{\theta}^{\prime}}f_{\mathbf{\theta}^{\prime}}( \mathbf{x})d\mathbf{\theta}^{\prime}=\underbrace{\Big{(}\int_{0}^{1}\nabla_{\mathbf{ \theta}^{\prime}}f_{\mathbf{\theta}^{\prime}}(\mathbf{x})\big{|}_{\mathbf{\theta}^{ \prime}=(1-t)\mathbf{\theta}_{0}+t\mathbf{\theta}}dt\Big{)}}_{\triangleq\overline{ \nabla f_{\mathbf{\theta}}(\mathbf{x})}}(\mathbf{\theta}-\mathbf{\theta}_{0}).\]
That is, our model is a linear predictor using the average **tangent features**\(\overline{\nabla f_{\mathbf{\theta}}}(\mathbf{x})\) and coefficients \(\mathbf{\theta}-\mathbf{\theta}_{0}\). When \(\mathbf{\theta}\approx\mathbf{\theta}_{0}\), we should expect that \(\overline{\nabla f_{\mathbf{\theta}}}(\mathbf{x})\approx\nabla_{\mathbf{\theta}_{0}}f _{\mathbf{\theta}_{0}}(\mathbf{x})\), which are the tangent features at initialization. In fact, this has been shown to hold for very wide neural networks in the "lazy training" regime even for \(\mathbf{\theta}\) at the end of training, in which case the problem can be understood as kernel regression using the neural tangent kernel [10, 11]. However, outside of those special circumstances, it is likely that \(\overline{\nabla f_{\mathbf{\theta}}}(\mathbf{x})\) will change with \(\mathbf{\theta}\). In general, it is difficult to say anything further about how \(f_{\mathbf{\theta}}\) should change with \(\mathbf{\theta}\), or what properties the optimal \(\mathbf{\theta}\) and \(\overline{\nabla f_{\mathbf{\theta}}}(\mathbf{x})\) for a prediction problem should have.
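As a quick sanity check of this identity, the sketch below estimates the average tangent features of a toy model by midpoint-rule quadrature and confirms the exact linear relation; the model \(f_{\boldsymbol{\theta}}(\mathbf{x})=\sin(\boldsymbol{\theta}^{\mathsf{T}}\mathbf{x})\) is our choice, picked only because its gradient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 5
x = rng.normal(size=P)
theta0, theta = rng.normal(size=P), rng.normal(size=P)

f = lambda th: np.sin(th @ x)            # toy nonlinear model f_theta(x)
grad = lambda th: np.cos(th @ x) * x     # its exact gradient in theta

# average tangent features: midpoint-rule quadrature along the segment
ts = (np.arange(4000) + 0.5) / 4000
avg_grad = np.mean([grad((1 - t) * theta0 + t * theta) for t in ts], axis=0)

lhs = f(theta) - f(theta0)
rhs = avg_grad @ (theta - theta0)        # linear in the averaged features
print(np.isclose(lhs, rhs, atol=1e-6))   # True
```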
### Adaptive feature model assumptions
In order to proceed with understanding the fitting of such a function, we must make additional assumptions. At this point, we do not wish to specialize to neural networks; instead, we make a few heuristic assumptions based on general properties of overparameterized models.
**Interpolation.** We assume that for any \(\widetilde{\mathbf{y}}_{1},\ldots,\widetilde{\mathbf{y}}_{N}\) of interest, there exists \(\widetilde{\mathbf{\theta}}\in\mathbb{R}^{P}\) such that \(\widetilde{\mathbf{y}}_{i}=f_{\widetilde{\mathbf{\theta}}}(\mathbf{x}_{i})\) for all \(i\in[N]\). This assumption is in line with the growing literature on interpolating predictors [9, 29, 30, 31, 32] and typically holds for neural networks, which are universal function approximators given enough parameters [33]. In particular, we require that this assumption holds for
\[\widehat{\mathbf{y}}_{i}\triangleq\operatorname*{arg\,min}_{\widetilde{\mathbf{ y}}\in\mathbb{R}^{C}}\ell(\mathbf{y}_{i},\widetilde{\mathbf{y}}),\]
such that for \(\widehat{\mathbf{\theta}}\in\mathbb{R}^{P}\), we have \(\widehat{\mathbf{y}}_{i}=f_{\widehat{\mathbf{\theta}}}(\mathbf{x}_{i})\) for all \(i\in[N]\), which is typically satisfiable in regression. In classification problems, the minimizers are typically infinite valued and never realized unless there is label noise, in which case we should let \(\widehat{\mathbf{y}}_{i}\triangleq\operatorname*{arg\,min}_{\widetilde{\mathbf{ y}}\in\mathbb{R}^{C}}\sum_{j\colon\mathbf{x}_{j}=\mathbf{x}_{i}}\ell( \mathbf{y}_{j},\widetilde{\mathbf{y}})\).
**Factorized features.** We assume that for each \(\mathbf{\theta}\), there exists a matrix \(\mathbf{M}_{\mathbf{\theta}}\in\mathbb{R}^{P\times P}\) such that for all \(\mathbf{x}\in\mathbb{R}^{D}\), \(\overline{\nabla f_{\mathbf{\theta}}}(\mathbf{x})=\nabla_{\mathbf{\theta}_{0}}f_{\mathbf{ \theta}_{0}}(\mathbf{x})\mathbf{M}_{\mathbf{\theta}}\). That is, the features are a linear transformation of the initial tangent features at the reference parameters \(\mathbf{\theta}_{0}\). This assumption is slightly stronger than the interpolation assumption: it also requires consistency at other values of \(\mathbf{x}\) not in the training data (if the interpolation assumption holds, we should expect \(\mathbf{M}_{\mathbf{\theta}}\) to exist that satisfies this equality for all \(\mathbf{x}_{i}\)). For highly overparameterized models, we believe this assumption roughly approximates reality.
**Independent optimization.** We assume that \(\mathbf{M}_{\mathbf{\theta}}\) can change essentially independently of \(\mathbf{\theta}\), and so we simply write \(\mathbf{M}\) instead of \(\mathbf{M}_{\mathbf{\theta}}\). In this key assumption, we allow ourselves to depart from the rigid parameterization of a particular model (such as a neural network) and ask what a less
restricted model would choose to do. This assumption is motivated by the compositional nature of neural networks and the observation that change in the gradients for a particular parameter is often more heavily affected by changes in parameters in other layers than by changes to the parameter itself. Despite the independence of \(\mathbf{M}\) and \(\boldsymbol{\theta}\), if the change of \(\boldsymbol{\theta}\) is small, we still expect change in \(\mathbf{M}\) to be small as well.
**Local optimization.** We assume that we search for a solution near our reference parameters \(\boldsymbol{\theta}_{0}\). This assumption coincides with the practice of optimization by gradient-based methods. Since we do not want to bias the optimization toward any particular direction, we apply an \(\ell_{2}\) penalty on the deviation of \(\boldsymbol{\theta}\) from \(\boldsymbol{\theta}_{0}\). Since \(\mathbf{M}\) should also not deviate much from its initial value of \(\mathbf{M}=\mathbf{I}_{P}\), we also apply some appropriate regularizer \(\Omega\colon\mathbb{R}^{P\times P}\to\mathbb{R}\), to be chosen later.
## 3 Adaptive feature learning
Combining the above assumptions, we obtain our learning model as the optimization problem
\[\widehat{\mathbf{M}},\,\widehat{\boldsymbol{\theta}}\triangleq\operatorname*{ arg\,min}_{\mathbf{M}\in\mathbb{R}^{P\times P},\,\boldsymbol{\theta}\in \mathbb{R}^{P}}\,\Omega(\mathbf{M})+\|\boldsymbol{\theta}-\boldsymbol{\theta }_{0}\|_{2}^{2}\,\,\,\text{s.t.}\,\,\,\widehat{\mathbf{y}}_{i}-f_{\boldsymbol {\theta}_{0}}(\mathbf{x}_{i})=\nabla_{\boldsymbol{\theta}_{0}}f_{\boldsymbol{ \theta}_{0}}(\mathbf{x}_{i})\mathbf{M}(\boldsymbol{\theta}-\boldsymbol{\theta }_{0})\,\,\forall\,i\in[N]. \tag{1}\]
This learning problem is very similar to regularized linear interpolation, except that it has a bilinear interpolation constraint (linear individually in each \(\mathbf{M}\) and \(\boldsymbol{\theta}\)) rather than a simple linear constraint. This bilinear structure will lead to significant structure in both \(\widehat{\mathbf{M}}\) and \(\widehat{\boldsymbol{\theta}}\). In particular, this structure will vary with choice of \(\Omega\).
Since \(\mathbf{M}_{\boldsymbol{\theta}_{0}}=\mathbf{I}_{P}\), we let \(\Omega\) regularize the deviation of \(\mathbf{M}\) from \(\mathbf{I}_{P}\). In order to capture a wide variety of behaviors, we consider regularizers built from the following class of spectral regularizers for a strictly quasi-convex function \(\omega\colon\mathbb{R}\to\mathbb{R}\) having minimum value \(\omega(1)=0\):
\[\Omega_{\omega}(\mathbf{M})=\sum_{j=1}^{P}\omega(\lambda_{j}(\mathbf{M})).\]
This penalty applies only to the eigenvalues, so the eigenvectors of \(\mathbf{M}\) are free. Simple examples of choices for \(\omega\) are \(\omega(\lambda)=|\lambda-1|^{p}\) for \(p>0\). For any penalty \(\omega\), we also define the **effective penalty**
\[\tilde{\omega}(v)\triangleq\min_{z\geq 1}\omega(z)+\frac{v^{2}}{z^{2}}\quad \text{and}\quad\widetilde{\Omega}_{\omega}(\mathbf{M})\triangleq\Omega_{ \tilde{\omega}}(\mathbf{M}).\]
As we will see, the bilinear constraint results in an equivalent formulation of the optimization problem in eq. (1) as an optimization with a linear constraint with penalty \(\widetilde{\Omega}\).
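These definitions are simple to evaluate numerically. A minimal sketch with the example penalty \(\omega(z)=(z-1)^{2}\), using a grid in place of the exact scalar minimization (both choices are ours):

```python
import numpy as np

omega = lambda z: (z - 1.0) ** 2          # example penalty, omega(1) = 0

def Omega(M):
    """Spectral regularizer: sum of omega over eigenvalues of symmetric M."""
    return np.sum(omega(np.linalg.eigvalsh(M)))

def omega_tilde(v, zs=np.linspace(1.0, 100.0, 100000)):
    """Effective penalty: min over z >= 1 of omega(z) + v**2 / z**2."""
    return np.min(omega(zs) + v ** 2 / zs ** 2)

for v in [0.1, 1.0, 10.0]:
    print(v, omega_tilde(v))
# ~v**2 for small v, but sub-quadratic growth for large v
```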
Without loss of generality, we will consider \(\mathbf{M}\) to be symmetric positive semidefinite, since we can always rotate \(\boldsymbol{\theta}\) accordingly without changing the optimization problem.
When considering solutions, we are also interested in the kernel corresponding to the learned features. Specifically, for \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{D}\) we define the **adapted kernel** as the feature inner products
\[K(\mathbf{x},\mathbf{x}^{\prime})\triangleq\overline{\nabla f_{\widehat{ \boldsymbol{\theta}}}}(\mathbf{x})\overline{\nabla f_{\widehat{\boldsymbol{ \theta}}}}(\mathbf{x}^{\prime})^{\mathsf{T}}=\nabla_{\boldsymbol{\theta}_{0} }f_{\boldsymbol{\theta}_{0}}(\mathbf{x})\widehat{\mathbf{M}}^{2}\nabla_{ \boldsymbol{\theta}_{0}}f_{\boldsymbol{\theta}_{0}}(\mathbf{x}^{\prime})^{ \mathsf{T}}\in\mathbb{R}^{C\times C}.\]
We correspondingly define the initial kernel \(K_{0}(\mathbf{x},\mathbf{x}^{\prime})=\nabla_{\boldsymbol{\theta}_{0}}f_{ \boldsymbol{\theta}_{0}}(\mathbf{x})\nabla_{\boldsymbol{\theta}_{0}}f_{ \boldsymbol{\theta}_{0}}(\mathbf{x}^{\prime})^{\mathsf{T}}\) for \(\mathbf{M}=\mathbf{I}_{P}\), which is equal to the standard neural tangent kernel in neural networks.
### Structureless feature learning
It is instructive to first consider the effect of unrestricted optimization of \(\mathbf{M}\); that is, if the features were allowed to change without any structural constraints on how features and parameters must interact. In this case, we simply solve eq. (1) directly, using \(\Omega=\Omega_{\omega}\) applied to the full \(\mathbf{M}\).
**Theorem 1**.: _There is a solution to eq. (1) with \(\Omega=\Omega_{\omega}\) satisfying_
\[\widehat{\mathbf{M}}=\mathbf{I}_{P}+(s-1)\|\widehat{\boldsymbol{\beta}}\|_{2} ^{-2}\widehat{\boldsymbol{\beta}}\widehat{\boldsymbol{\beta}}^{\mathsf{T}} \quad\text{and}\quad\widehat{\boldsymbol{\theta}}=\boldsymbol{\theta}_{0}+s^{ -1}\widehat{\boldsymbol{\beta}}.\]
_where \(s=\arg\min_{z\geq 1}\omega(z)+\frac{\|\widehat{\mathbf{\beta}}\|_{2}^{2}}{z^{2}}\) and_
\[\widehat{\mathbf{\beta}}=\operatorname*{arg\,min}_{\mathbf{\beta}\in\mathbb{R}^{P}}\| \mathbf{\beta}\|_{2}\ \ \text{s.t.}\ \ \widehat{\mathbf{y}}_{i}-f_{\mathbf{\theta}_{0}}(\mathbf{x}_{i})=\nabla_{\mathbf{\theta}_{0} }f_{\mathbf{\theta}_{0}}(\mathbf{x}_{i})\mathbf{\beta}\ \forall\ i\in[N].\]
_Furthermore, the adapted kernel for this solution is given by_
\[K(\mathbf{x},\mathbf{x}^{\prime})=K_{0}(\mathbf{x},\mathbf{x}^{\prime})+(s^{2 }-1)\|\widehat{\mathbf{\beta}}\|_{2}^{-2}\underbrace{(f_{\widehat{\mathbf{\theta}}}( \mathbf{x})-f_{\mathbf{\theta}_{0}}(\mathbf{x}))(f_{\widehat{\mathbf{\theta}}}( \mathbf{x}^{\prime})-f_{\mathbf{\theta}_{0}}(\mathbf{x}^{\prime}))^{\mathsf{T}}}_ {\triangleq K_{\widehat{\mathbf{y}}}(\mathbf{x},\mathbf{x}^{\prime})}.\]
Note that the equivalent formulation in terms of \(\mathbf{\beta}\) corresponds exactly to standard ridgeless regression [30] using the initial tangent kernel features \(\nabla_{\mathbf{\theta}_{0}}f_{\mathbf{\theta}_{0}}(\mathbf{x})\)--therefore, when no structure is imposed on the adaptive features, the resulting predictions are exactly the same as in NTK regression. However, even though the predictions are no different, we already can see a qualitative difference in the description of the system from NTK analysis. Specifically, this adaptive feature perspective reveals how the adapted kernel itself changes: it is a low rank perturbation to the original kernel that directly captures the model output as the label kernel \(K_{\widehat{\mathbf{y}}}\). This kernel alignment effect has been empirically observed in real neural networks that depart from the lazy-training regime [18, 19, 20].
Moreover, the strength of the kernel alignment is directly related to the difficulty of the regression task as measured by the norm of \(\widehat{\mathbf{\beta}}\). Note that \(s\) takes minimum value at \(\|\widehat{\mathbf{\beta}}\|_{2}=0\) and is increasing otherwise, which means that if the initial tangent features \(\nabla_{\mathbf{\theta}_{0}}f_{\mathbf{\theta}_{0}}(\mathbf{x})\) are sufficient to fit \(\widehat{\mathbf{y}}_{i}\) with a small \(\widehat{\mathbf{\beta}}\), then \(s\approx 1\) and \(K(\mathbf{x},\mathbf{x}^{\prime})\approx K_{0}(\mathbf{x},\mathbf{x}^{\prime})\).2 It is only when the task is difficult and a larger \(\widehat{\mathbf{\beta}}\) is required that \(s^{2}-1\gg 0\) and we observe kernel alignment with the label kernel \(K_{\widehat{\mathbf{y}}}\). We illustrate this in an experiment with real neural networks on an MNIST regression task in Figure 1.
Footnote 2: While there is a \(\|\widehat{\mathbf{\beta}}\|_{2}^{-2}\) factor as well, this is always canceled by the implicit \(\|\widehat{\mathbf{\beta}}\|_{2}^{2}\) in \(K_{\widehat{\mathbf{y}}}\), such that the overall scale of \(K_{\widehat{\mathbf{y}}}\) is determined entirely by the factor \((s^{2}-1)\).
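The construction in Theorem 1 can be reproduced numerically on synthetic tangent features. A minimal numpy sketch with \(\omega(z)=(z-1)^{2}\), using a pseudoinverse for the minimum-norm interpolator and a grid search in place of the scalar argmin (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 20, 100                            # overparameterized: P > N
Phi = rng.normal(size=(N, P))             # rows: initial tangent features
r = 10.0 * rng.normal(size=N)             # residuals y_hat_i - f_theta0(x_i)

# minimum-norm interpolator over the fixed tangent features
beta = np.linalg.pinv(Phi) @ r            # = Phi.T @ inv(Phi @ Phi.T) @ r

# s = argmin_{z >= 1} omega(z) + ||beta||^2 / z^2 for omega(z) = (z - 1)^2
zs = np.linspace(1.0, 100.0, 100000)
s = zs[np.argmin((zs - 1.0) ** 2 + (beta @ beta) / zs ** 2)]

# adapted kernel on the training set: K = K0 + (s^2 - 1) K_yhat / ||beta||^2
K0 = Phi @ Phi.T
fhat = Phi @ beta                         # equals r, by interpolation
K = K0 + (s ** 2 - 1.0) / (beta @ beta) * np.outer(fhat, fhat)

# cosine similarity between the adapted kernel and the label kernel
Ky = np.outer(r, r)
print(round(s, 2), np.sum(K * Ky) / (np.linalg.norm(K) * np.linalg.norm(Ky)))
```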
### Neural network structure
We now wish to specialize to structures that arise in neural networks. The developments in this section will be built around parameterization of the network by matrices, but these results could be extended straightforwardly to higher order tensors (such as filter kernels in convolutional neural networks). Generically, the parameters of a neural network are \(\mathbf{\theta}=\operatorname{concat}(\operatorname{vec}(\mathbf{W}_{1}),\dots, \operatorname{vec}(\mathbf{W}_{L}))\), where \(\mathbf{W}_{\ell}\in\mathbb{R}^{P_{\ell}\times Q_{\ell}}\) are parameter matrices such that \(P=\sum_{\ell=1}^{L}P_{\ell}Q_{\ell}\). These matrices comprise operations in a directed acyclic graph along with other operations such as nonlinearities and batch or layer normalizations, which do not have parameters and thus do not contribute tangent features. We first state our neural network model, and then we explain the motivation of each component.
Figure 1: **A more difficult task yields higher label kernel alignment.** We perform regression using a multi-layer perceptron on 500 MNIST digits from classes 2 and 3. We construct target labels \(y_{i}\) as the best linear fit of binary \(\pm 1\) labels using random neural network features. Then we train two networks, one (**left**) trained to predict \(y_{i}\), and one (**right**) trained to predict \(\operatorname{sign}(y_{i})\). We present the adapted kernel and label kernel matrices for data points ordered according to \(y_{i}\) and report the cosine similarity of the adapted kernel and the label kernel. The harder task of regression with binarized labels has a higher label kernel alignment. Further details are given in Appendix B.1.
**Model A**.: _The neural network model has the following properties:_
1. _The parameter vector \(\mathbf{\theta}-\mathbf{\theta}_{0}\in\mathbb{R}^{P}\) consists of \(L\) matrices \(\mathbf{W}_{1}\in\mathbb{R}^{P_{1}\times Q_{1}},\ldots,\mathbf{W}_{L}\in\mathbb{R}^{P_{L}\times Q_{L}}\)._
2. _The feature transformation operator_ \(\mathbf{M}\in\mathbb{R}^{P\times P}\) _is parameterized by_ \(\mathbf{M}_{\ell}^{(1)}\in\mathbb{R}^{P_{\ell}\times P_{\ell}}\) _and_ \(\mathbf{M}_{\ell}^{(2)}\in\mathbb{R}^{Q_{\ell}\times Q_{\ell}}\) _for each_ \(\ell\in[L]\) _such that application of_ \(\mathbf{M}\) _to_ \(\mathbf{\theta}\) _results in the mapping_ \((\mathbf{W}_{\ell})_{\ell=1}^{L}\mapsto(\mathbf{M}_{\ell}^{(1)}\mathbf{W}_{ \ell}\mathbf{M}_{\ell}^{(2)})_{\ell=1}^{L}\)_._
3. _For strictly quasi-convex functions_ \(\omega_{\ell}^{(1)}\) _and_ \(\omega_{\ell}^{(2)}\) _for each_ \(\ell\in[L]\) _minimized at_ \(\omega_{\ell}^{(1)}(1)=0\) _and_ \(\omega_{\ell}^{(2)}(1)=0\)_, the regularizer is given by_ \(\Omega(\mathbf{M})=\sum_{\ell=1}^{L}\Omega_{\omega_{\ell}^{(1)}}(\mathbf{M}_{ \ell}^{(1)})+\Omega_{\omega_{\ell}^{(2)}}(\mathbf{M}_{\ell}^{(2)})\)_._
4. _The final matrix \(\mathbf{W}_{L}\) has \(Q_{L}=C\) and a fixed transformation of \(\mathbf{M}_{L}^{(2)}=\mathbf{I}_{C}\) (corresponding to \(\omega_{L}^{(2)}=\chi_{\{1\}}\)), and there is a mapping \(\mathbf{z}\colon\mathbb{R}^{D}\to\mathbb{R}^{P_{L}}\) such that \(\nabla_{\mathrm{vec}(\mathbf{W}_{L})}f_{\mathbf{\theta}_{0}}(\mathbf{x})=\mathbf{I }_{C}\otimes\mathbf{z}(\mathbf{x})^{\mathsf{T}}\)._
The first component is simply a reparameterization of \(\mathbf{W}_{\ell}\) as the difference from initialization, to simplify notation. The other components are motivated as follows.
**Feature transformations.** Consider a single weight matrix \(\mathbf{W}_{\ell}\) and the gradient of the \(k\)-th output \(f_{\mathbf{\theta}}^{(k)}(\mathbf{x})\). The matrix structure of \(\mathbf{W}_{\ell}\) limits the way in which the gradient tends to change. As such, rather than considering a \(P_{\ell}Q_{\ell}\times P_{\ell}Q_{\ell}\) matrix \(\mathbf{M}_{\ell}\) such that \(\overline{\nabla_{\mathrm{vec}(\mathbf{W}_{\ell})}f_{\mathbf{\theta}}}(\mathbf{x})=\nabla_{\mathrm{vec}(\mathbf{W}_{\ell})}f_{\mathbf{\theta}_{0}}(\mathbf{x})\mathbf{M}_{\ell}\), it is more natural to consider a Kronecker factorization \(\mathbf{M}_{\ell}=\mathbf{M}_{\ell}^{(2)\mathsf{T}}\otimes\mathbf{M}_{\ell}^{(1)}\) such that
\[\overline{\nabla_{\mathbf{W}_{\ell}}f_{\mathbf{\theta}}^{(k)}}(\mathbf{x})=\mathbf{ M}_{\ell}^{(2)}\nabla_{\mathbf{W}_{\ell}}f_{\mathbf{\theta}_{0}}^{(k)}(\mathbf{x}) \mathbf{M}_{\ell}^{(1)},\]
which corresponds to the mapping \(\mathbf{W}_{\ell}\mapsto\mathbf{M}_{\ell}^{(1)}\mathbf{W}_{\ell}\mathbf{M}_{ \ell}^{(2)}\).
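This factorization is an instance of the standard vectorization identity \(\mathrm{vec}(\mathbf{M}_{\ell}^{(1)}\mathbf{W}_{\ell}\mathbf{M}_{\ell}^{(2)})=(\mathbf{M}_{\ell}^{(2)\mathsf{T}}\otimes\mathbf{M}_{\ell}^{(1)})\mathrm{vec}(\mathbf{W}_{\ell})\), which a few lines of numpy confirm (the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
Pl, Ql = 6, 4
W = rng.normal(size=(Pl, Ql))
M1 = rng.normal(size=(Pl, Pl))            # left transformation M^(1)
M2 = rng.normal(size=(Ql, Ql))            # right transformation M^(2)

vec = lambda A: A.reshape(-1, order="F")  # column-stacking vectorization

lhs = vec(M1 @ W @ M2)
rhs = np.kron(M2.T, M1) @ vec(W)          # (M2^T kron M1) vec(W)
print(np.allclose(lhs, rhs))              # True
```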
**Independent optimization.** We assume that the feature transformations \(\mathbf{M}_{\ell}^{(1)}\) and \(\mathbf{M}_{\ell}^{(2)}\) are independently optimized from each other and from the feature transformations corresponding to other weights \(\ell^{\prime}\neq\ell\). The motivation for this assumption comes from the fact that at initialization in wide fully connected neural networks, the gradients at each layer are known to be uncorrelated [34]. Because of independence within each layer, we also need to define the **joint penalty** for \(v\geq 1\) as
\[(\omega_{1}\oplus\omega_{2})(v)\triangleq\inf_{1\leq z\leq v}\omega_{1}(z)+ \omega_{2}(\tfrac{v}{z}).\]
It is straightforward to verify that if \(\omega_{1}\) and \(\omega_{2}\) are strictly quasi-convex such that \(\omega_{1}(1)=\omega_{2}(1)=0\), then \(\omega_{1}\oplus\omega_{2}\) has the same properties; see Appendix A.1.
**Final weight matrix.** The final operation in most neural networks is linear matrix multiplication by the final weight matrix \(\mathbf{W}_{L}\in\mathbb{R}^{P_{L}\times C}\), such that if \(\mathbf{z}_{\mathbf{\theta}}(\mathbf{x})\in\mathbb{R}^{P_{L}}\) are the penultimate layer features, then the output is a vector \(f_{\mathbf{\theta}}(\mathbf{x})=\mathbf{W}_{L}^{\mathrm{T}}\mathbf{z}_{\mathbf{ \theta}}(\mathbf{x})\in\mathbb{R}^{C}\). Introducing \(\mathbf{M}_{L}\in\mathbb{R}^{P_{L}\times P_{L}}\) such that \(\mathbf{z}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{M}_{L}\mathbf{z}_{\mathbf{\theta}_{0}} (\mathbf{x})\), we have the Kronecker formulation
\[f_{\mathbf{\theta}}(\mathbf{x})=\underbrace{(\mathbf{I}_{C}\otimes(\mathbf{z}_{\bm {\theta}_{0}}(\mathbf{x})^{\mathsf{T}}\mathbf{M}_{L}))}_{\overline{\nabla_{ \mathrm{vec}(\mathbf{W}_{L})}f_{\mathbf{\theta}}}(\mathbf{x})}\mathrm{vec}(\mathbf{ W}_{L}).\]
With the above neural network model in hand, this brings us to our main result on the solution to the adaptive feature learning problem.
**Theorem 2**.: _There is a solution to eq. (1) under Model A such that for each \(\ell\in[L]\),_
\[\widehat{\mathbf{M}}_{\ell}^{(1)}=\mathbf{U}_{\ell}\mathbf{S}_{\ell}^{(1)} \mathbf{U}_{\ell}^{\mathrm{T}},\quad\widehat{\mathbf{W}}_{\ell}=\mathbf{U}_{ \ell}\mathbf{\Sigma}_{\ell}\mathbf{V}_{\ell}^{\mathrm{T}},\quad\widehat{\mathbf{M }}_{\ell}^{(2)}=\mathbf{V}_{\ell}\mathbf{S}_{\ell}^{(2)}\mathbf{V}_{\ell}^{ \mathrm{T}},\]
_where \(\mathbf{U}_{\ell}\in\mathbb{R}^{P_{\ell}\times P_{\ell}}\), \(\mathbf{V}_{\ell}\in\mathbb{R}^{Q_{\ell}\times Q_{\ell}}\) are orthogonal matrices and \(\mathbf{S}_{\ell}^{(1)}=\mathrm{diag}(\mathbf{s}_{\ell}^{(1)})\), \(\mathbf{\Sigma}_{\ell}=\mathrm{diag}_{P_{\ell}\times Q_{\ell}}(\mathbf{\sigma}_{\ell})\), \(\mathbf{S}_{\ell}^{(2)}=\mathrm{diag}(\mathbf{s}_{\ell}^{(2)})\) are given by minimizers_
\[[\mathbf{s}_{\ell}^{(1)}]_{j},[\mathbf{\sigma}_{\ell}]_{j},[\mathbf{s}_{\ell}^{(2) }]_{j}=\operatorname*{arg\,min}_{s_{1},s_{2}\geq 1,\,\sigma\geq 0}\omega_{\ell}^{(1)}(s_{1})+ \omega_{\ell}^{(2)}(s_{2})+\sigma^{2}\ \ \text{s.t.}\ \ s_{1}\sigma s_{2}=[\mathbf{d}_{\ell}]_{j}\ \text{for}\ j\leq\min\{P_{\ell},Q_{\ell}\}\]
_and \([\mathbf{s}_{\ell}^{(1)}]_{j}=1\), \([\mathbf{s}_{\ell}^{(2)}]_{j}=1\) for \(j>\min\{P_{\ell},Q_{\ell}\}\), such that \(\widehat{\mathbf{B}}_{\ell}=\mathbf{U}_{\ell}\mathrm{diag}_{P_{\ell}\times Q_{ \ell}}(\mathbf{d}_{\ell})\mathbf{V}_{\ell}^{\mathsf{T}}\) satisfy_
\[(\widehat{\mathbf{B}}_{\ell})_{\ell=1}^{L}\in\operatorname*{arg\,min}_{ \mathbf{B}_{\ell}\in\mathbb{R}^{P_{\ell}\times Q_{\ell}}}\sum_{\ell=1}^{L} \widetilde{\Omega}_{\omega_{\ell}^{(1)}\oplus_{\omega_{\ell}^{(2)}}}(\mathbf{ B}_{\ell})\ \text{ s.t. }\ \widehat{\mathbf{y}}_{i}-f_{\mathbf{\theta}_{0}}(\mathbf{x}_{i})=\sum_{\ell=1}^{L} \nabla_{\mathrm{vec}(\mathbf{W}_{\ell})}f_{\mathbf{\theta}_{0}}(\mathbf{x}_{i}) \mathrm{vec}(\mathbf{B}_{\ell})\ \forall\ i\in[N].\]
In other words, we have reduced the bilinearly constrained optimization over \(\mathbf{M}\) and \(\mathbf{\theta}\) to a linearly constrained optimization over \((\mathbf{B}_{\ell})_{\ell=1}^{L}\), just as we did in the unstructured case. However, this time, due to the matrix structure of \(\mathbf{W}_{\ell}\) and corresponding structure in \(\mathbf{M}_{\ell}^{(1)}\) and \(\mathbf{M}_{\ell}^{(2)}\), the new equivalent optimization is much richer than simple minimum norm regression using tangent features. To better understand the regularization in this new problem, we have the following result.
**Proposition 3**.: _Let \(\omega\) be a continuous strictly quasi-convex function minimized at \(\omega(1)=0\). Then \(v^{2}\mapsto\tilde{\omega}(v)\) is an increasing concave function, and \(\tilde{\omega}(v)=v^{2}+o(v^{2})\)._
That is, no matter the original penalties \(\omega_{\ell}^{(1)}\) and \(\omega_{\ell}^{(2)}\), the resulting \(\widetilde{\Omega}_{\omega_{\ell}^{(1)}\oplus\omega_{\ell}^{(2)}}\) will always be a spectral penalty with 1) sub-quadratic tail behavior, and 2) quadratic behavior for small values. We illustrate this for a few examples in Figure 2. For one example, when \(\omega(v)=(v-1)^{2}\), the effective penalty \(\tilde{\omega}(v)\) behaves like \(|v|\) for large \(v\), making \(\widetilde{\Omega}_{\omega}(\mathbf{B})\) like the nuclear norm for large singular values and like the Frobenius norm for small singular values. In general, \(\tilde{\omega}\) has slower tails than \(\omega\). This behavior is closely related to the characterization of the nuclear norm as the minimum sum of squared Frobenius norms of two factors [27], which coincides with the case \(\omega(v)=v^{2}\); however, since we constrain \(\mathbf{M}\) to be near \(\mathbf{I}_{P}\), we retain Frobenius norm behavior near 0.
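The per-singular-value problem in Theorem 2 can be solved numerically to make this transition concrete. A minimal grid-search sketch with the example choice \(\omega_{\ell}^{(1)}(z)=\omega_{\ell}^{(2)}(z)=(z-1)^{2}\) (the grid and function names are ours):

```python
import numpy as np

def rebalance(d, omega1, omega2, zmax=50.0, n=500):
    """min omega1(s1) + omega2(s2) + sigma**2 over s1, s2 >= 1, sigma >= 0
    subject to s1 * sigma * s2 = d (one singular value of B)."""
    s = np.linspace(1.0, zmax, n)
    S1, S2 = np.meshgrid(s, s, indexing="ij")
    sigma = d / (S1 * S2)                  # enforce the constraint exactly
    obj = omega1(S1) + omega2(S2) + sigma ** 2
    i, j = np.unravel_index(np.argmin(obj), obj.shape)
    return S1[i, j], sigma[i, j], S2[i, j]

w = lambda z: (z - 1.0) ** 2
for d in [0.1, 1.0, 10.0]:
    print(d, rebalance(d, w, w))
# small d: s1 = s2 = 1, all mass in sigma (Frobenius-like regime);
# large d: s1, s2 > 1 absorb most of d (approximately low rank regime)
```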
The result of this effective regularization is a model that is able to leverage structures through the group approximate low-rank penalty while also being robust to noise and model misspecification through the Frobenius norm penalty for small singular values. From this perspective, we can conceptually consider a decomposition of the matrices \(\mathbf{B}_{\ell}\approx\mathbf{B}_{\ell}^{\mathrm{LO}}+\mathbf{B}_{\ell}^{ \mathrm{NTK}}\), where \(\mathbf{B}_{\ell}^{\mathrm{LO}}\) are low rank and capture the structure learned from the data that is predictive of the training labels, while \(\mathbf{B}_{\ell}^{\mathrm{NTK}}\) form the component that fits the residual (after regression using \(\mathbf{B}_{\ell}^{\mathrm{LO}}\)) via standard NTK interpolation with Frobenius norm minimization. In this way, the adaptiveness of the neural network is able to get the benefits of both strategies.
We also see special structure in the final layer weights which are directly connected to the outputs. For example, for a single output (\(C=1\)) when \(\mathbf{B}_{L}\) is simply a vector \(\mathbf{\beta}_{L}\), the final layer contribution to the adapted kernel (decomposing as \(K(\mathbf{x},\mathbf{x}^{\prime})=\sum_{\ell=1}^{L}K_{\ell}(\mathbf{x}, \mathbf{x}^{\prime})\)) is similar to the structureless feature learning setting, but with only the final layer features:
\[K_{L}(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{z}(\mathbf{x})^{\mathsf{T}}( \widehat{\mathbf{M}}_{L}^{(1)})^{2}\mathbf{z}(\mathbf{x}^{\prime})=\mathbf{z} (\mathbf{x})^{\mathsf{T}}\mathbf{z}(\mathbf{x}^{\prime})+(s_{L}^{2}-1)\| \widehat{\mathbf{\beta}}_{L}\|_{2}^{-2}(\mathbf{z}(\mathbf{x})^{\mathsf{T}} \widehat{\mathbf{\beta}}_{L})(\mathbf{z}(\mathbf{x}^{\prime})^{\mathsf{T}} \widehat{\mathbf{\beta}}_{L}).\]
That is, the contribution of the final layer to the kernel is like the label kernel, but instead of the labels, it consists of outputs representable by the final layer tangent features at initialization \(\mathbf{z}(\mathbf{x})^{\mathsf{T}}\widehat{\mathbf{\beta}}_{L}\). We can see this effect in Figure 1, where even in the case where the neural network is trained on binarized labels (right), the kernel bears some resemblance to the label kernel of the unbinarized labels (left).
Any residual component of the target function that is not representable by the final layer tangent features \(\mathbf{z}(\mathbf{x})\) must be built with the remaining weights \((\widehat{\mathbf{B}}_{\ell})_{\ell=1}^{L-1}\). If this residual is small, then the regularization behaves like the Frobenius norm, and the contribution to the kernel is simply the NTK. It may be the case, however, that for some data points, the residual is quite large, in which case it will take a large value of \(\widehat{\mathbf{B}}_{\ell}\) to close the gap, which will result in a change to the kernel.
To understand this change in the kernel, consider the extremely simple case where \(\widehat{\mathbf{B}}_{\ell}\) has only a rank-one dimension in which it escapes the Frobenius norm penalty (call this \(\widehat{\mathbf{B}}_{\ell}^{\mathrm{LO}}\)). Then we observe the same form of the contribution as in the final layer case:
\[K_{\ell}(\mathbf{x},\mathbf{x}^{\prime})=\nabla_{\mathrm{vec}(\mathbf{W}_{\ell})}f_{\mathbf{\theta}_{0}}(\mathbf{x})\nabla_{\mathrm{vec}(\mathbf{W}_{\ell})}f_{\mathbf{\theta}_{0}}(\mathbf{x}^{\prime})^{\mathsf{T}}+(s_{\ell}^{2}-1)\|\widehat{\mathbf{B}}_{\ell}^{\mathrm{LO}}\|_{F}^{-2}\,\nabla_{\mathrm{vec}(\mathbf{W}_{\ell})}f_{\mathbf{\theta}_{0}}(\mathbf{x})\,\mathrm{vec}(\widehat{\mathbf{B}}_{\ell}^{\mathrm{LO}})\mathrm{vec}(\widehat{\mathbf{B}}_{\ell}^{\mathrm{LO}})^{\mathsf{T}}\,\nabla_{\mathrm{vec}(\mathbf{W}_{\ell})}f_{\mathbf{\theta}_{0}}(\mathbf{x}^{\prime})^{\mathsf{T}}.\]
That is, it also contributes something akin to the label kernel; however, the magnitude of this contribution increases with the "difficulty" of the task in terms of the magnitude of \(\|\widehat{\mathbf{B}}_{\ell}^{\text{LO}}\|_{F}\) and correspondingly \(s_{\ell}\), and can be much larger than the label kernel. However, in the high dimensional tangent feature space, combined with the low rank regularization, this residual kernel will typically only be aligned with the data points that have a large residual. We can see this in Figure 1 and even more clearly in Figure 3, where for a network trained on binarized targets \(\operatorname{sign}(y_{i})\), the kernel has large values in the middle (corresponding to \(y_{i}\approx 0\), which are the most difficult points, being the place where the residual \(\operatorname{sign}(y)-y\) has a large jump discontinuity).
For higher rank perturbations, the result can be viewed as a sum of such rank-one residual kernels; however, the meaning of these components and the relative magnitude of their weightings becomes more difficult to reason about. As the target function becomes more complex (such as under excessive label noise), the weights are likely to become high rank in order to capture this complexity.
## 4 Discussion
In this work, we have proposed a framework of adaptive feature learning using tangent features that sheds light on some phenomena observed in real neural networks. The most important question one should ask, however, is the degree to which real neural networks actually fit this model, and whether there still remains a gap in our understanding.
**Sample complexity.** In terms of practical performance, we have found that the adaptive feature learning model offers significant sample complexity advantages over fixed neural tangent kernel features. In Figure 4, we compare the two models and real neural networks on MNIST and CIFAR-10. For the same test error, the adaptive feature model requires an order of magnitude fewer samples. However, given enough samples, both the adaptive and fixed feature models appear to converge to the same error floor, while the real neural networks significantly outperform both models and appear to have a faster convergence rate in the case of CNNs on CIFAR-10. This suggests that there are fundamental limitations of the tangent kernel feature space that even adaptive feature learning cannot avoid, although adaptivity does close a non-negligible part of the gap in the low-sample regime. An interesting direction for future work is to analyze the convergence rate of adaptive feature learning under Model A and determine under what conditions there might be improved rates over fixed tangent features.
Figure 3: **Adapted kernel reveals difficult structure.** For the neural network from Figure 1 trained on binarized labels \(\operatorname{sign}(y_{i})\) (**left**), the target function (green, solid) is difficult while the function \(\mathbf{x}\mapsto y\) (black, dotted) is easily predicted using tangent features. The network must learn (**right**) to fit the residual (red, dashed), which results in the kernel (orange, \(\times\)) being highly influenced by difficult training points (near \(y=0\)).
#### Average features vs. final features.
In our analysis we considered the average features over a linear path in parameter space, but this is difficult to work with in practice, compared to, for example, the features at the end of training. Of course, we can equivalently write the average features as an average of transformations \(\mathbf{M}_{\boldsymbol{\theta}^{\prime}}\) satisfying \(\nabla_{\boldsymbol{\theta}^{\prime}}f_{\boldsymbol{\theta}^{\prime}}(\mathbf{x})=\nabla_{\boldsymbol{\theta}_{0}}f_{\boldsymbol{\theta}_{0}}(\mathbf{x})\mathbf{M}_{\boldsymbol{\theta}^{\prime}}\) along the path from \(\boldsymbol{\theta}_{0}\) to \(\boldsymbol{\theta}\):
\[\overline{\nabla f_{\boldsymbol{\theta}}}(\mathbf{x})=\nabla_{\boldsymbol{ \theta}_{0}}f_{\boldsymbol{\theta}_{0}}(\mathbf{x})\mathbf{M}=\nabla_{ \boldsymbol{\theta}_{0}}f_{\boldsymbol{\theta}_{0}}(\mathbf{x})\Big{(}\mathbf{ I}_{P}+\int_{0}^{1}(\mathbf{M}_{\boldsymbol{\theta}^{\prime}}-\mathbf{I}_{P}) \big{|}_{\boldsymbol{\theta}^{\prime}=(1-t)\boldsymbol{\theta}_{0}+t \boldsymbol{\theta}}dt\Big{)}.\]
In general, an integral of matrices, even if each is individually low rank, need not be a low rank matrix. However, the average \(\mathbf{M}\) is often a low-rank perturbation of \(\mathbf{I}_{P}\), which would be a remarkable coincidence unless all \(\mathbf{M}_{\boldsymbol{\theta}^{\prime}}\) along this path have the same principal subspace as \(\mathbf{M}\) but with different eigenvalues. We thus expect the average features and final features to be similar, but not necessarily the same (for example, due to the averaging effect, the final tangent features will generally be larger). This difference can be important in downstream analysis. For example, in transfer learning, we would let \(\boldsymbol{\theta}_{0}\) be the solution to a previous optimization problem, and the initial tangent features would be the final tangent features of that optimization, rather than the average tangent features. Thus another direction for further study is the extent to which average features and final features coincide. In our limited (unpublished) observations, the final feature kernel closely resembles a re-scaled version of the average feature kernel, which coincides with recent results for linear networks [19].
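The comparison is easy to carry out numerically. The following NumPy sketch (our toy example with a two-parameter model \(f(x;a,b)=a\,\operatorname{relu}(bx)\), purely illustrative) discretizes the path integral to form the average-feature kernel and compares it, up to a global rescaling, with the final-feature kernel.

```python
# Comparing the average tangent features along the linear path theta_0 -> theta
# with the tangent features at the final parameters (toy two-parameter model).
import numpy as np

rng = np.random.default_rng(0)

def grad_features(theta, X):
    """Gradient of f(x; a, b) = a * relu(b * x) with respect to (a, b)."""
    a, b = theta
    pre = b * X
    act = np.maximum(pre, 0.0)             # relu(b * x)
    mask = (pre > 0).astype(float)         # relu'(b * x)
    return np.stack([act, a * X * mask], axis=1)   # shape (n, 2)

X = rng.normal(size=32)
theta0 = np.array([0.5, 1.0])
theta = np.array([2.0, -1.5])              # stands in for the trained solution

# Average tangent features: discretize the path integral over t in [0, 1].
ts = np.linspace(0.0, 1.0, 201)
avg = np.mean([grad_features((1 - t) * theta0 + t * theta, X) for t in ts], axis=0)

K_avg = avg @ avg.T                        # average-feature kernel
Phi_final = grad_features(theta, X)
K_final = Phi_final @ Phi_final.T          # final-feature kernel

# Compare the two kernels up to a single global rescaling.
scale = np.sum(K_avg * K_final) / np.sum(K_avg * K_avg)
rel_err = np.linalg.norm(K_final - scale * K_avg) / np.linalg.norm(K_final)
print(f"best rescaling {scale:.3f}, relative error {rel_err:.3f}")
```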
The above limitations notwithstanding, we believe this framework still provides valuable insight into real phenomena in neural networks. We discuss two examples.
#### Benign overfitting.
Neural networks have shown remarkable resilience to noisy labels, sparking the recent theoretical research area of studying models that interpolate noisy labels yet still generalize well [3, 9, 13, 29]. Much research in this area has been concerned with finding fixed feature regimes in which this "benign overfitting" can occur in ridge(less) regression settings, but Donhauser et al. [32] have shown that when the ground truth function is sparsely represented by the features, remarkably, the optimal \(\ell_{p}\) penalty is not \(p\in\{1,2\}\), but rather in between, since some resemblance to sparsity-inducing \(p=1\) encourages learning structure, but something like \(p=2\) is necessary to absorb noise without harming prediction (for more discussion regarding the latter point, see [29]). Our results reflect these desired optimal properties precisely, since the effective penalties always have sub-quadratic tail behavior (promoting structure) and quadratic behavior near zero (absorbing noise). This raises another future research question, regarding the optimality of these effective penalties compared to \(\ell_{p},\,p\in(1,2)\) penalties.
Figure 4: **Adaptive feature learning improves low-sample performance.** We perform 10-class classification on MNIST (**left**) and CIFAR-10 (**right**) using three different models: a linear model using fixed tangent features (blue, solid), the adaptive feature optimization in eq. (1) under Model A (orange, dashed), and standard neural networks (green, dash–dot). The adaptive feature model achieves the same performance as the non-adaptive tangent feature model with an order of magnitude fewer samples. Further details are given in Appendix B.2.
#### Low rank optimization.

Our framework sheds interesting light on the success of low rank deviation optimizations in neural networks, which have been used to characterize task difficulty and compress networks [35] and recently to efficiently fine-tune pretrained large language models in the low rank adaptation (LoRA) method [36]. In LoRA, for example, each weight matrix is parameterized as \(\mathbf{W}_{\ell}=\mathbf{U}_{\ell}\mathbf{V}_{\ell}^{\mathsf{T}}\) for \(\mathbf{U}_{\ell}\in\mathbb{R}^{P_{\ell}\times R_{\ell}}\) and \(\mathbf{V}_{\ell}\in\mathbb{R}^{Q_{\ell}\times R_{\ell}}\), where \(R_{\ell}\ll P_{\ell},Q_{\ell}\). Due to the sub-quadratic spectral regularization of the adaptive feature optimization, if the target task is sufficiently related to the source task, such that the target function is well represented by the tangent features at \(\mathbf{\theta}_{0}\) of the pretrained model, then according to Theorem 2, the model is inclined to learn an approximately low-rank deviation from the initial parameters even if the \(\mathbf{W}_{\ell}\) were not explicitly constrained to be low rank. By constraining the weights to be low rank by design, LoRA can essentially recover the same solution, if not an even more aggressively structured one, at a fraction of the computational time and memory cost.
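As a concrete illustration (a minimal sketch of the parameterization, not the LoRA implementation; all sizes here are hypothetical), the deviation from frozen pretrained weights can be written as \(\mathbf{U}\mathbf{V}^{\mathsf{T}}\) with \(R\ll P,Q\), which caps its rank by design and shrinks the number of trainable parameters:

```python
# A sketch of the low-rank parameterization discussed above; all sizes are
# hypothetical. (In LoRA practice one factor is zero-initialized so that
# training starts exactly at the pretrained weights.)
import numpy as np

rng = np.random.default_rng(0)
P, Q, R = 512, 512, 8                  # R << P, Q

W0 = rng.normal(size=(P, Q))           # frozen pretrained weight matrix
U = rng.normal(size=(P, R)) * 0.01     # trainable factor
V = rng.normal(size=(Q, R)) * 0.01     # trainable factor

W = W0 + U @ V.T                       # deviation U @ V.T has rank at most R
print("rank of deviation:", np.linalg.matrix_rank(U @ V.T))
print("trainable parameters:", R * (P + Q), "vs full fine-tuning:", P * Q)
```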
## Acknowledgements
The experiments in this paper were written in PyTorch [37] and JAX [38] with code assistance from GitHub Copilot and ChatGPT. DL was supported by ARO grant 2003514594. SA and computing resources were supported by Richard G. Baraniuk via NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-1-2571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
|
2303.04613 | The Descriptive Complexity of Graph Neural Networks | We analyse the power of graph neural networks (GNNs) in terms of Boolean
circuit complexity and descriptive complexity.
We prove that the graph queries that can be computed by a polynomial-size
bounded-depth family of GNNs are exactly those definable in the guarded
fragment GFO+C of first-order logic with counting and with built-in relations.
This puts GNNs in the circuit complexity class (non-uniform) TC^0. Remarkably,
the GNN families may use arbitrary real weights and a wide class of activation
functions that includes the standard ReLU, logistic "sigmoid", and hyperbolic
tangent functions. If the GNNs are allowed to use random initialisation and
global readout (both standard features of GNNs widely used in practice), they
can compute exactly the same queries as bounded depth Boolean circuits with
threshold gates, that is, exactly the queries in TC^0.
Moreover, we show that queries computable by a single GNN with piecewise
linear activations and rational weights are definable in GFO+C without built-in
relations. Therefore, they are contained in uniform TC^0. | Martin Grohe | 2023-03-08T14:32:59Z | http://arxiv.org/abs/2303.04613v4 | # The Descriptive Complexity of Graph Neural Networks
###### Abstract
We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity.
We prove that the graph queries that can be computed by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment \(\mathsf{GFO}+\mathsf{C}\) of first-order logic with counting and with built-in relations. This puts GNNs in the circuit complexity class \(\mathsf{TC}^{0}\). Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic "sigmoid", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded depth Boolean circuits with threshold gates, that is, exactly the queries in \(\mathsf{TC}^{0}\).
Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in \(\mathsf{GFO}+\mathsf{C}\) without built-in relations. Therefore, they are contained in uniform \(\mathsf{TC}^{0}\).
## 1 Introduction
Graph neural networks (GNNs) [10, 28] are deep learning models for graph data that play a key role in machine learning on graphs (see, for example, [7]). A GNN describes a distributed algorithm carrying out local computations at the vertices of the input graph. At any time, each vertex has a "state", which is a vector of reals, and in each computation step it sends a message to all its neighbours. The messages are also vectors of reals, and they only depend on the current state of the sender and the receiver. Every
vertex aggregates the messages it receives and computes its new state depending on the old state and the aggregated messages. The message and state-update functions are computed by feedforward neural networks whose parameters are learned from data.
In this paper, we study the _expressiveness_ of GNNs: which functions on graphs or their vertices can be computed by GNNs? We provide answers in terms of Boolean circuits and logic, that is, computation models of classical (descriptive) complexity theory. An interesting and nontrivial aspect of this is that GNNs are "analogue" computation models operating on and with real numbers. The weights of neural networks may be arbitrary reals, and the activation functions may even be transcendental functions such as the logistic function \(x\mapsto\frac{1}{1+e^{-x}}\).
We always want functions on graphs to be _isomorphism invariant_, that is, isomorphic graphs are mapped to the same value. Similarly, we want functions on vertices to be _equivariant_, that is, if \(v\) is a vertex of a graph \(G\) and \(f\) is an isomorphism from \(G\) to a graph \(H\), then \(v\) and \(f(v)\) are mapped to the same value. Functions computed by GNNs are always invariant or equivariant, and so are functions defined in logic (a.k.a. _queries_).
In a machine learning context, it is usually assumed that the vertices of the input graph are equipped with additional features in the form of vectors over the reals; we speak of _graph signals_ in this paper. The function values are also vectors over the reals. Thus a function on the vertices of a graph is an equivariant transformation between graph signals. When comparing GNNs and logics or Boolean circuits, we focus on Boolean functions, where the input signal is Boolean, that is, it associates a \(\{0,1\}\)-vector with every vertex of the input graph, and the output is just a Boolean value \(0\) or \(1\). In the logical context, it is natural to view Boolean signals as vertex labels. Thus a Boolean signal in \(\{0,1\}^{k}\) is described as a sequence of \(k\) unary relations on the input graph. Then an invariant Boolean function becomes a _Boolean query_ on labelled graphs, and an equivariant Boolean function on the vertices becomes a _unary query_. To streamline the presentation, in this paper we focus on unary queries and equivariant functions on the vertices. All our results also have versions for Boolean queries and functions on graphs, but we only discuss these in occasional remarks. While we are mainly interested in queries (that is, Boolean functions), our results also have versions for functions with arbitrary real input and output signals. These are needed for the proofs anyway. But since the exact statements become unwieldy, we keep them out of the introduction. Before discussing further background, let us state our central result.
**Theorem 1.1**.: _Let \(\mathcal{G}\) be a unary query on labelled graphs. Then the following are equivalent._
1. \(\mathcal{G}\) _is computable by a polynomial-weight bounded-depth family of GNNs with rpl approximable activation functions._
2. \(\mathcal{G}\) _is definable in the guarded fragment_ \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}\) _of first-order logic with counting and with built-in relations._
The result requires more explanations. First of all, it is a _non-uniform_ result, speaking about computability by families of GNNs and a logic with built-in relation. A family
\(\mathcal{N}=(\mathfrak{N}^{(n)})_{n\in\mathbb{N}}\) of GNNs consists of GNNs \(\mathfrak{N}^{(n)}\) for input graphs of size \(n\). _Bounded depth_ refers to the number of message passing rounds, or layers, of the GNNs as well as the depth of the feed-forward neural networks they use for their message and state-update functions. We would like the GNN \(\mathfrak{N}^{(n)}\) to be of "size" polynomial in \(n\), but since we allow arbitrary real weights as parameters of the neural networks, it is not clear what this actually means. We define the _weight_ of a GNN to be the number of computation nodes of the underlying neural networks plus the absolute values of all weights. The class of _rpl (rational piecewise linear) approximable_ functions contains all functions that are commonly used as activation functions for neural networks, for example, the rectified linear unit, the logistic function, the hyperbolic tangent function, the scaled exponential linear unit (see Section 2.4 for background on neural networks and their activation functions).
On the logical side, _first-order logic with counting_ FO+C is the two-sorted extension of first-order logic over relational structures that has variables ranging over the non-negative integers, bounded arithmetic on the integer side, and counting terms that give the number of assignments satisfying a formula. In the _\(k\)-variable fragment_ FO\({}^{k}+\)C, only \(k\) variables ranging over the vertices of the input graphs are allowed (but arbitrarily many variables for the integer part). The _guarded fragment_ GFO+C is a fragment of FO\({}^{2}+\)C where quantification over vertices is restricted to the neighbours of the current vertex. _Built-in relations_ are commonly used in descriptive complexity to introduce non-uniformity to logics and compare them to non-uniform circuit complexity classes. Formally, they are just arbitrary relations on the non-negative integers that the logic can access, independently of the input structure.
It is well-known that over ordered input structures, FO+C with built-in relations captures the circuit complexity class (non-uniform) TC\({}^{0}\), consisting of Boolean functions (in our context: queries) that are computable by families of bounded-depth polynomial-size Boolean circuits with threshold gates. This implies that, as a corollary to Theorem 1.1, we get the following.
**Corollary 1.2**.: _Every unary query that is computable by a polynomial-weight bounded-depth family of GNNs with rpl approximable activation functions is in TC\({}^{0}\)._
The strength of GNNs can be increased by extending the input signals with a random component [2, 27]. In [2], it was even proved that such _GNNs with random initialisation_ can approximate all functions on graphs. The caveat of this result is that it is non-uniform and that input graphs of size \(n\) require GNNs of size exponential in \(n\) and depth linear in \(n\). We ask which queries can be computed by polynomial-weight, bounded-depth families of GNNs. Surprisingly, this gives us a converse of Corollary 1.2 and thus a characterisation of TC\({}^{0}\).
**Theorem 1.3**.: _Let \(\mathcal{G}\) be a unary query on labelled graphs. Then the following are equivalent._
1. \(\mathcal{G}\) _is computable by a polynomial-weight bounded-depth family of GNNs with random initialisation and with rpl approximable activation functions._
2. \(\mathcal{G}\) _is computable in_ \(\mathsf{TC}^{0}\)_._
We mention that, following [2], we allow GNNs with random initialisation to also use a feature known as _global readout_, which means that in each message-passing round of a GNN computation, the vertices not only receive messages from their neighbours, but the aggregated state of all vertices. There is also a version of Theorem 1.1 for GNNs with global readout.
### Related Work
A fundamental result on the expressiveness of GNNs [24, 32] states that two graphs are distinguishable by a GNN if and only if they are distinguishable by the 1-dimensional Weisfeiler-Leman (WL) algorithm, a simple combinatorial algorithm originally introduced as a graph isomorphism heuristic [23, 31]. This result has had considerable impact on the subsequent development of GNNs, because it provides a yardstick for the expressiveness of GNN extensions (see [25]). Its generalisation to higher-order GNNs and higher-dimensional WL algorithms [24] even gives a hierarchy of increasingly more expressive formalisms against which such extensions can be compared. However, these results relating GNNs and their extensions to the WL algorithm only consider a restricted form of expressiveness, the power to distinguish two graphs. Furthermore, the results are _non-uniform_, that is, the distinguishing GNNs depend on the input graphs or at least on their size, and the GNNs may be arbitrarily large and deep. Indeed, the GNNs from the construction in [32] may be exponentially large in the graphs they distinguish. Those of [24] are polynomial. Both have recently been improved by [1], mainly showing that the messages only need to contain logarithmically many bits.
We are not the first to study the logical expressiveness of GNNs (see [12] for a recent survey). It was proved in [3] that all unary queries definable in the guarded fragment \(\mathsf{GC}\) of the extension \(\mathsf{C}\) of first-order logic by counting quantifiers \(\exists^{\geq n}x\) ("there exist at least \(n\) vertices \(x\) satisfying some formula") are computable by a GNN. The logic \(\mathsf{GC}\) is weaker than our \(\mathsf{GFO}+\mathsf{C}\) in that it does not treat the numbers \(n\) in the quantifiers \(\exists^{\geq n}x\) as variables, but as fixed constants. What is interesting about this result, and what makes it incomparable to ours, is that it is a _uniform_ result: a query definable in \(\mathsf{GC}\) is computable by a single GNN across all graph sizes. There is a partial converse to this result, also from [3]: all unary queries that are definable in first-order logic and computable by a GNN are actually definable in \(\mathsf{GC}\). Note, however, that there are queries computable by GNNs that are not definable in first-order logic.
A different approach to capturing GNNs by logic has been taken in [9]. There, the authors introduce a new logic \(\mathsf{MPLang}\) that operates directly on the reals. The logic, also a guarded (or modal) logic, is simple and elegant and well-suited to translate GNN computations to logic. The converse translation is more problematic, though. But to be fair, it is also in our case, where it requires families of GNNs and hence non-uniformity. However, the purpose of the work in [9] is quite different from ours. It is our goal to describe GNN computations in terms of standard descriptive complexity and thus to be able to quantify the computational power of GNNs in the framework of classical
complexity. It is the goal of [9] to expand logical reasoning to real-number computations in a way that is well-suited to GNN computations. Of course, both are valid goals.
There is another line of work that is important for us. In the 1990s, researchers studied the expressiveness of feedforward neural networks (FNNs) and compared it to Boolean computation models such as Turing machines and circuits (for example, [18, 21, 22, 29]). Like GNNs, FNNs are analogue computation models operating on the reals, and this work is in the same spirit as ours. An FNN has fixed numbers \(p\) of inputs and \(q\) of outputs, and it thus computes a function from \(\mathbb{R}^{p}\) to \(\mathbb{R}^{q}\). Restricted to Boolean inputs, we can use FNNs with \(p\) inputs to decide subsets of \(\{0,1\}^{p}\), and we can use families of FNNs to decide languages. It was proved in [21] that a language is decidable by a family of bounded-depth polynomial-weight FNNs using piecewise-polynomial activation functions if and only if it is in \(\mathsf{TC}^{0}\). It may seem that our Corollary 1.2, at least for GNNs with piecewise-linear (or even piecewise-polynomial) activations, follows easily from this result. But this is not the case, because when processing graphs, the inputs to the FNNs computing the message and update functions of the GNN may become large through the aggregation ranging over all neighbours. Also, the arguments of [21] do not extend to rpl-approximable activation functions like the logistic function. There has been related work [18] that extends to a wider class of activation functions including the logistic function, using arguments based on o-minimality. But the results go in a different direction; they bound the VC dimension of FNN architectures and do not relate them to circuit complexity.
### Techniques
The first step in proving the difficult implication (1)\(\Rightarrow\)(2) of Theorem 1.1 is to prove a uniform result for a simpler class of GNNs; this may be of independent interest.
**Theorem 1.4**.: _Let \(\mathcal{G}\) be a unary query computable by a GNN with rational weights and piecewise linear activations. Then \(\mathcal{G}\) is definable in \(\mathsf{GFO}+\mathsf{C}\)._
Compare this with the result of [3]: every query definable in the (weaker) logic \(\mathsf{GC}\) is computable by a GNN and in fact a GNN with rational weights and piecewise linear activations. Thus we may write \(\mathsf{GC}\subseteq\mathsf{GNN}\subseteq\mathsf{GFO}+\mathsf{C}.\) It is not hard to show that both inclusions are strict.
To prove Theorem 1.4, we need to show that the rational arithmetic involved in GNN computations, including unbounded linear combinations, can be simulated, at least approximately, in the logic \(\mathsf{GFO}+\mathsf{C}\). Establishing this is a substantial part of this paper, and it may be of independent interest.
So how do we prove the forward implication of Theorem 1.1 from Theorem 1.4? It was our first idea to look at the results for FNNs. In principle, we could use the linear-programming arguments of [21]. This would probably work, but would be limited to piecewise linear or piecewise polynomial activations. We could then use o-minimality to extend our results to wider classes of activation functions. After all, o-minimality was also applied successfully in the somewhat related setting of constraint databases [19].
Again, this might work, but our analytical approach seems simpler and more straightforward. Essentially, we use the Lipschitz continuity of the functions computed by FNNs to show that we can approximate arbitrary GNNs with rpl-approximable activations by GNNs with rational weights and piecewise linear activations, and then we apply Theorem 1.4.
Let us close the introduction with a few remarks on Theorem 1.3. The reader may have noted that assertion (1) of the theorem involves randomness in the computation model, whereas (2) does not. To prove the implication (1)\(\Rightarrow\)(2) we use the well-known "Adleman Trick" that allows us to trade randomness for non-uniformity. To prove the converse implication, the main insight is that with high probability, random node initialisation gives us a linear order on the vertices of the input graph. Then we can use the known fact that FO+C with built-in relations captures \(\mathsf{TC}^{0}\) on ordered structures.
### Structure of this Paper
After collecting preliminaries from different areas in Section 2, in Section 3 we develop a machinery for carrying out the required rational arithmetic in first-order logic with counting and its guarded fragment. This is a significant part of this paper. We then introduce GNNs (Section 4) and prove the uniform Theorem 1.4 (Section 5). We prove the forward direction of Theorem 1.1 in Section 6 and the backward direction in Section 7. Finally, we prove Theorem 1.3 in Section 8.
## 2 Preliminaries
By \(\mathbb{Z},\mathbb{N},\mathbb{N}_{>0},\mathbb{Q},\mathbb{R}\) we denote the sets of integers, nonnegative integers, positive integers, rational numbers, and real numbers, respectively. Instead of arbitrary rationals, we will often work with _dyadic rationals_, that is, rationals whose denominator is a power of two. These are precisely the numbers that have a representation as finite precision binary floating point numbers. We denote the set of dyadic rationals by \(\mathbb{Z}\!\left[\frac{1}{2}\right]\).
We denote the binary representation of \(n\in\mathbb{N}\) by \(\operatorname{bin}(n)\). The _bitsize_ of \(n\) is the length of the binary representation, that is,

\[\operatorname{bsize}(n)\coloneqq|\operatorname{bin}(n)|=\begin{cases}1&\text{ if }n=0,\\ \lceil\log(n+1)\rceil&\text{ if }n>0,\end{cases}\]
where \(\log\) denotes the binary logarithm. We denote the \(i\)th bit of the binary representation of \(n\in\mathbb{N}\) by \(\operatorname{Bit}(i,n)\), where we count bits starting from \(0\) with the lowest significant bit. It will be convenient to let \(\operatorname{Bit}(i,n)\coloneqq 0\) for all \(i\geq\operatorname{bsize}(n)\). So
\[n=\sum_{i=0}^{\operatorname{bsize}(n)-1}\operatorname{Bit}(i,n)\cdot 2^{i}= \sum_{i\in\mathbb{N}}\operatorname{Bit}(i,n)\cdot 2^{i}.\]
The _bitsize_ of an integer \(n\in\mathbb{Z}\) is \(\operatorname{bsize}(n)\coloneqq 1+\operatorname{bsize}(|n|)\), and the _bitsize_ of a dyadic rational \(q=\frac{n}{2^{\ell}}\in\mathbb{Z}\!\left[\frac{1}{2}\right]\) in reduced form is \(\operatorname{bsize}(q)\coloneqq\operatorname{bsize}(n)+\ell+1\).
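These conventions are easy to check programmatically; the following short Python sketch (ours, not from the paper) mirrors the definitions of \(\operatorname{bsize}\) and \(\operatorname{Bit}\) for natural numbers:

```python
# Bit conventions for natural numbers, as defined above.
def bsize(n: int) -> int:
    return 1 if n == 0 else n.bit_length()   # equals ceil(log2(n + 1)) for n > 0

def bit(i: int, n: int) -> int:
    return (n >> i) & 1                      # 0 for all i >= bsize(n)

n = 22                                       # binary 10110
assert n == sum(bit(i, n) * 2**i for i in range(bsize(n)))
```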
We denote tuples (of numbers, variables, vertices, et cetera) using boldface letters. Usually, a \(k\)-tuple \(\mathbf{t}\) has entries \(t_{1},\ldots,t_{k}\). The empty tuple is denoted \(\varnothing\) (just like the empty set; this should never lead to any confusion), and for every set \(S\) we have \(S^{0}=\{\varnothing\}\). For tuples \(\mathbf{t}=(t_{1},\ldots,t_{k})\) and \(\mathbf{u}=(u_{1},\ldots,u_{\ell})\), we let \(\mathbf{tu}=(t_{1},\ldots,t_{k},u_{1},\ldots,u_{\ell})\). To improve readability, we often write \((\mathbf{t},\mathbf{u})\) instead of \(\mathbf{tu}\). This does not lead to any confusion, because we never consider nested tuples.
For a vector \(\mathbf{x}=(x_{1},\ldots,x_{k})\in\mathbb{R}^{k}\), the \(\ell_{1}\)_-norm_ (a.k.a. Manhattan norm) is \(\|\mathbf{x}\|_{1}\coloneqq\sum_{i=1}^{k}|x_{i}|\), the \(\ell_{2}\)_-norm_ (a.k.a. Euclidean norm) is \(\|\mathbf{x}\|_{2}\coloneqq\sqrt{\sum_{i=1}^{k}x_{i}^{2}}\), and the \(\ell_{\infty}\)_-norm_ (a.k.a. maximum norm) is \(\|\mathbf{x}\|_{\infty}\coloneqq\max_{i\in[k]}|x_{i}|\). As \(\frac{1}{k}\|\mathbf{x}\|_{1}\leq\|\mathbf{x}\|_{\infty}\leq\|\mathbf{x}\|_{2}\leq\|\mathbf{x}\|_{1}\), it does not make much of a difference which norm we use; most often, it will be convenient for us to use the \(\ell_{\infty}\)-norm.
### Functions and Approximations
A function \(f:\mathbb{R}^{p}\to\mathbb{R}^{q}\) is _Lipschitz continuous_ if there is some constant \(\lambda\), called a _Lipschitz constant_ for \(f\), such that for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{p}\) it holds that \(\|f(\mathbf{x})-f(\mathbf{y})\|_{\infty}\leq\lambda\,\|\mathbf{x}-\mathbf{y}\|_{\infty}\).
A function \(L:\mathbb{R}\to\mathbb{R}\) is _piecewise linear_ if there are \(n\in\mathbb{N}\), \(a_{0},\ldots,a_{n}\), \(b_{0},\ldots,b_{n}\), \(t_{1},\ldots,t_{n}\in\mathbb{R}\) such that \(t_{1}<t_{2}<\ldots<t_{n}\) and
\[L(x)=\begin{cases}a_{0}x+b_{0}&\text{if $x<t_{1}$,}\\ a_{i}x+b_{i}&\text{if $t_{i}\leq x<t_{i+1}$ for some $i<n$,}\\ a_{n}x+b_{n}&\text{if $x\geq t_{n}$}\end{cases}\]
if \(n\geq 1\), or \(L(x)=a_{0}x+b_{0}\) for all \(x\) if \(n=0\). Note that there is a unique _minimal representation_ of \(L\) with a minimal number \(n\) of pieces. We call \(t_{1},\ldots,t_{n}\) in the minimal representation of \(L\) the _thresholds_ of \(L\); these are precisely the points where \(L\) is non-linear. \(L\) is _rational_ if all its parameters \(a_{i},b_{i},t_{i}\) in the minimal representation are dyadic rationals.1 If \(L\) is rational, then its _bitsize_ \(\operatorname{bsize}(L)\) is the sum of the bitsizes of all the parameters \(a_{i},b_{i},t_{i}\) of the minimal representation. Observe that if \(L\) is continuous then it is Lipschitz continuous with Lipschitz constant \(\max_{0\leq i\leq n}|a_{i}|\).
Footnote 1: Throughout this paper we work with dyadic rationals. For this reason, we are a little sloppy in our terminology. For example, we call a function “rational piecewise linear” when the more precise term would be “dyadic-rational piecewise linear”.
**Example 2.1**.: The most important example of a rational piecewise linear function for us is the _rectified linear unit_\(\operatorname{relu}:\mathbb{R}\to\mathbb{R}\) defined by \(\operatorname{relu}(x)\coloneqq\max\{0,x\}\).
In fact, it is not hard to see that every piecewise linear function can be written as a linear combination of \(\operatorname{relu}\)-terms. For example, the _identity_ function \(\operatorname{id}(x)=x\) can be written as \(\operatorname{relu}(x)-\operatorname{relu}(-x)\), and the _linearised sigmoid_ function \(\operatorname{lsig}:\mathbb{R}\to\mathbb{R}\), defined by \(\operatorname{lsig}(x)=0\) if \(x<0\), \(\operatorname{lsig}(x)=x\) if \(0\leq x<1\), and \(\operatorname{lsig}(x)=1\) if \(x\geq 1\), can be written as \(\operatorname{relu}(x)-\operatorname{relu}(x-1)\).
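These relu decompositions can be verified numerically; a quick NumPy check (ours, not from the paper):

```python
# Checking id(x) = relu(x) - relu(-x) and lsig(x) = relu(x) - relu(x - 1).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(-3, 3, 601)
assert np.allclose(x, relu(x) - relu(-x))            # identity
lsig = np.clip(x, 0.0, 1.0)                          # pieces 0, x, 1
assert np.allclose(lsig, relu(x) - relu(x - 1.0))    # linearised sigmoid
```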
We need a notion of approximation between functions on the reals. Let \(f,g:\mathbb{R}\to\mathbb{R}\) and \(\varepsilon\in\mathbb{R}_{>0}\). Then \(g\) is an _\(\varepsilon\)-approximation_ of \(f\) if for all \(x\in\mathbb{R}\) it holds that
\[\big{|}f(x)-g(x)\big{|}\leq\varepsilon|f(x)|+\varepsilon.\]
Note that we allow for both an additive and a multiplicative approximation error. This notion is not symmetric, but if \(g\) \(\varepsilon\)-approximates \(f\) for some \(\varepsilon<1\), then \(f\) \(\frac{\varepsilon}{1-\varepsilon}\)-approximates \(g\).
We call a function \(f:\mathbb{R}\to\mathbb{R}\)_rpl-approximable_ if for every \(\varepsilon>0\) there is a continuous rational piecewise linear function \(L\) of bitsize polynomial in \(\varepsilon^{-1}\) that \(\varepsilon\)-approximates \(f\).
**Example 2.2**.: The _logistic function_ \(\operatorname{sig}(x)=\frac{1}{1+e^{-x}}\) and the _hyperbolic tangent_ \(\tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\) are rpl-approximable. Examples of unbounded rpl-approximable functions are the _soft plus function_ \(\ln(1+e^{x})\) and the _exponential linear units_ defined by \(\operatorname{elu}_{\alpha}(x)=x\) if \(x>0\) and \(\operatorname{elu}_{\alpha}(x)=\alpha(e^{x}-1)\) if \(x\leq 0\), where \(\alpha>0\) is a constant. We omit the straightforward proofs based on simple calculus.
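To illustrate the approximation notion (this construction is our own simple example, not the one implicit in the omitted proofs), one can check the condition \(|f(x)-g(x)|\leq\varepsilon|f(x)|+\varepsilon\) numerically for the logistic function against a piecewise linear interpolant with constant tails:

```python
# Numerically checking an eps-approximation of the logistic function by a
# continuous piecewise linear function (linear interpolation, clamped tails).
import numpy as np

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

eps = 0.05
knots = np.linspace(-8.0, 8.0, 65)           # thresholds of the piecewise linear L

def L(x):
    return np.interp(x, knots, sig(knots))   # constant outside [-8, 8]

x = np.linspace(-50.0, 50.0, 10001)
assert np.all(np.abs(sig(x) - L(x)) <= eps * np.abs(sig(x)) + eps)
```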
### Graphs and Signals
Graphs play two different roles in this paper: they are the basic data structures on which logics and graph neural networks operate, and they form the skeleton of Boolean circuits and neural networks. In the first role, which is the default, we assume graphs to be undirected; in the second role they are directed acyclic graphs (dags).
We always denote the vertex set of a graph or dag \(G\) by \(V(G)\) and the edge set by \(E(G)\). We denote edges by \(vw\) (without parentheses). We assume the vertex set of all graphs in this paper to be finite and nonempty. The _order_ of a graph \(G\) is \(|G|\coloneqq|V(G)|\), and the _bitsize_ \(\operatorname{bsize}(G)\) of \(G\) is the size of a representation of \(G\). (For simplicity, we can just take adjacency matrices; then \(\operatorname{bsize}(G)=|G|^{2}\).) The class of all (undirected) graphs is denoted by \(\mathscr{G}\).
For a vertex \(v\) in an (undirected) graph, we let \(N_{G}(v)\coloneqq\{w\in V(G)\mid vw\in E(G)\}\) be the _neighbourhood_ of \(v\) in \(G\), and we let \(N_{G}[v]\coloneqq\{v\}\cup N_{G}(v)\) be the _closed neighbourhood_. Furthermore, we let \(\deg_{G}(v)\coloneqq|N_{G}(v)|\) be the _degree_ of \(v\) in \(G\). For a vertex \(v\) in a directed graph \(G\), we let \(N_{G}^{+}(v)\coloneqq\{w\in V(G)\mid vw\in E(G)\}\) be the _out-neighbourhood_ of \(v\) and \(N_{G}^{-}(v)\coloneqq\{u\in V(G)\mid uv\in E(G)\}\) the _in-neighbourhood_, and we let \(\deg_{G}^{+}(v)\coloneqq|N_{G}^{+}(v)|\) and \(\deg_{G}^{-}(v)\coloneqq|N_{G}^{-}(v)|\) be the _out-degree_ and _in-degree_. In these and similar notations we omit the index \({}_{G}\) if the graph is clear from the context. We call nodes of in-degree \(0\)_sources_ and nodes of out-degree \(0\)_sinks_. The _depth_\(\operatorname{dp}_{G}(v)\) of node \(v\) in a dag \(G\) is the length of the longest path from a source to \(v\). The _depth_\(\operatorname{dp}(G)\) of a dag \(G\) is the maximum depth of a sink of \(G\).
When serving as data for graph neural networks, the vertices of graphs usually have real-valued features, which we call _graph signals_. An _\(\ell\)-dimensional signal_ on a graph \(G\) is a function \(\mathbf{x}:V(G)\to\mathbb{R}^{\ell}\). We denote the class of all \(\ell\)-dimensional signals on \(G\) by \(\mathscr{S}_{\ell}(G)\) and the class of all pairs \((G,\mathbf{x})\), where \(\mathbf{x}\) is an \(\ell\)-dimensional signal on \(G\), by \(\mathscr{GS}_{\ell}\). An \(\ell\)-dimensional signal is _Boolean_ if its range is contained in \(\{0,1\}^{\ell}\). By \(\mathscr{S}_{\ell}^{\mathrm{bool}}(G)\) and \(\mathscr{GS}_{\ell}^{\mathrm{bool}}\) we denote the restrictions of the two classes to Boolean signals.
Isomorphisms between pairs \((G,\mathbf{x})\in\mathscr{GS}_{\ell}\) are required to preserve the signals. We call a mapping \(f:\mathscr{GS}_{\ell}\to\mathscr{GS}_{m}\) a _signal transformation_ if for all \((G,\mathbf{x})\in\mathscr{GS}_{\ell}\) we have \(f(G,\mathbf{x})=(G,\mathbf{x}^{\prime})\) for some \(\mathbf{x}^{\prime}\in\mathscr{S}_{m}(G)\). Such a signal transformation \(f\) is _equivariant_ if for all isomorphic \((G,\mathbf{x}),(H,\mathbf{y})\in\mathscr{GS}_{\ell}\), every isomorphism \(h\) from \((G,\mathbf{x})\) to \((H,\mathbf{y})\) is also an isomorphism from \(f(G,\mathbf{x})\) to \(f(H,\mathbf{y})\).
We can view signals \(\mathbf{x}\in\mathscr{S}_{\ell}(G)\) as matrices in the space \(\mathbb{R}^{V(G)\times\ell}\). Flattening them to vectors of length \(|G|\ell\), we can apply the usual vector norms to graph signals. In particular, we have \(\left\|\mathbf{x}\right\|_{\infty}=\max\big{\{}\left\|\mathbf{x}(v)\right\|_{\infty}\big{|}\,v\in V(G)\big{\}}\). Sometimes, we need to restrict a signal to a subset \(W\subseteq V(G)\). We denote this restriction by \(\mathbf{x}|_{W}\); it is a matrix in \(\mathbb{R}^{W\times\ell}\).
### Boolean Circuits
A _Boolean circuit_\(\mathfrak{C}\) is a directed acyclic graph where all nodes except for the sources are labelled as _negation_, _disjunction_, or _conjunction_ nodes. Negation nodes must have in-degree \(1\). Sources (that is, nodes of in-degree \(0\)) are _input nodes_, and we always denote them by \(X_{1},\ldots,X_{p}\). Similarly, sinks (that is, nodes of out-degree \(0\)) are _output nodes_, and we denote them by \(Y_{1},\ldots,Y_{q}\). The number \(p\) of input nodes is the _input dimension_ of \(\mathfrak{C}\), and the number \(q\) of output nodes the _output dimension_. Most of the time, we consider circuits that also have _threshold nodes_ of arbitrary positive in-degree, where a \(\geq t\)-threshold node evaluates to \(1\) if at least \(t\) of its in-neighbours evaluate to \(1\). To distinguish them from the Boolean circuits over the standard basis we refer to such circuits as _threshold circuits_. The _depth_\(\operatorname{dp}(\mathfrak{C})\) of a circuit \(\mathfrak{C}\) is the maximum length of a path from an input node to an output node. The _order_\(|\mathfrak{C}|\) of \(\mathfrak{C}\) is the number of nodes, and the _size_ is the number of nodes plus the number of edges.
A circuit \(\mathfrak{C}\) of input dimension \(p\) and output dimension \(q\) computes a function \(f_{\mathfrak{C}}:\{0,1\}^{p}\to\{0,1\}^{q}\) defined in the natural way. To simplify the notation, we simply denote this function by \(\mathfrak{C}\), that is, we write \(\mathfrak{C}(\mathbf{x})\) instead of \(f_{\mathfrak{C}}(\mathbf{x})\) to denote the output of \(\mathfrak{C}\) on input \(\mathbf{x}\in\{0,1\}^{p}\).
In complexity theory, we mostly study which languages \(L\subseteq\{0,1\}^{*}\) or functions \(F:\{0,1\}^{*}\to\{0,1\}^{*}\) can be computed by families \(\mathcal{G}=(\mathfrak{C}_{n})_{n\in\mathbb{N}_{>0}}\) of circuits, where \(\mathfrak{C}_{n}\) is a circuit of input dimension \(n\). Such a family \(\mathcal{G}\)_computes_\(F\) if for all \(n\in\mathbb{N}_{>0}\), \(\mathfrak{C}_{n}\) computes the restriction \(F_{n}\) of \(F\) to \(\{0,1\}^{n}\). We say that \(\mathcal{G}\)_decides_\(L\) if it computes its characteristic function. _Non-uniform_\(\mathsf{TC}^{0}\) is the class of all languages that are decided by a family \(\mathcal{G}=(\mathfrak{C}_{n})_{n\in\mathbb{N}_{>0}}\) of threshold circuits of _bounded depth_ and _polynomial size_. There is also a class _(dlogtime) uniform_\(\mathsf{TC}^{0}\) where the family \(\mathcal{G}\) itself is required to be easily computable; we refer the reader to [4]. We will never work directly with uniform circuit families, but instead use a logical characterisation in terms of first-order logic with counting (Theorem 3.3).
An important fact that we shall use is that standard arithmetic functions on the bit representations of natural numbers can be computed by bounded-depth polynomial-size threshold circuits. We define \(\operatorname{ADD}_{2n}:\{0,1\}^{2n}\to\{0,1\}^{n+1}\) to be the bitwise addition of two \(n\)-bit numbers (whose result may be an \((n+1)\)-bit number). We let \(\operatorname{ADD}:\{0,1\}^{*}\to\{0,1\}^{*}\) be the function that coincides with \(\operatorname{ADD}_{2n}\) on inputs of even size and
maps all inputs of odd size to \(0\). Similarly, we define \(\mathrm{SUB}_{2n}:\{0,1\}^{2n}\to\{0,1\}^{n}\) and \(\mathrm{SUB}:\{0,1\}^{*}\to\{0,1\}^{*}\) for the truncated subtraction \(m\mathbin{\dot{-}}n\coloneqq\max\{0,m-n\}\), and \(\mathrm{MUL}_{2n}:\{0,1\}^{2n}\to\{0,1\}^{2n}\) and \(\mathrm{MUL}:\{0,1\}^{*}\to\{0,1\}^{*}\) for multiplication. We also introduce a binary integer division function \(\mathrm{DIV}_{2n}:\{0,1\}^{2n}\to\{0,1\}^{n}\) mapping \(n\)-bit numbers \(k,\ell\) to \(\lfloor k/\ell\rfloor\) (with some default value, say \(0\), if \(\ell=0\)). The _iterated addition_ function \(\mathrm{ITADD}_{n^{2}}:\{0,1\}^{n^{2}}\to\{0,1\}^{2n}\) and the derived \(\mathrm{ITADD}:\{0,1\}^{*}\to\{0,1\}^{*}\) add \(n\) numbers of \(n\) bits each. Finally, we need the less-than-or-equal-to predicate \(\mathrm{LEQ}_{2n}:\{0,1\}^{2n}\to\{0,1\}\) and \(\mathrm{LEQ}:\{0,1\}^{*}\to\{0,1\}\).
**Lemma 2.3** ([8], [14]).: _ADD, MUL, DIV, ITADD, and LEQ are computable by dlogtime uniform families of bounded-depth polynomial-size threshold circuits._
The fact that ADD, MUL, ITADD, and LEQ are computable by families of bounded-depth polynomial-size threshold circuits goes back to [8] (also see [30]). The arguments given there are non-uniform, but it is not hard to see that they can be "uniformised". The situation for DIV is more complicated. It was known since the mid 1980s that DIV is computable by a non-uniform (or polynomial-time uniform) family of bounded-depth polynomial-size threshold circuits, but the uniformity was only established 15 years later in [14, 15].
### Feedforward Neural Networks
It will be convenient for us to formalise feedforward neural networks in a similar way as Boolean circuits. A more standard presentation of multilayer perceptrons can easily be seen as a special case. A _feedforward neural network architecture_\(\mathfrak{A}\) is a triple \(\big{(}V,E,(\mathfrak{a}_{v})_{v\in V}\big{)}\), where \((V,E)\) is a directed acyclic graph that we call the _skeleton_ of \(\mathfrak{A}\) and for every vertex \(v\in V\), \(\mathfrak{a}_{v}:\mathbb{R}\to\mathbb{R}\) is a continuous function that we call the _activation function_ at \(v\). A _feedforward neural network (FNN)_ is a tuple \(\mathfrak{F}=(V,E,(\mathfrak{a}_{v})_{v\in V},\boldsymbol{w},\boldsymbol{b})\), where \(\big{(}V,E,(\mathfrak{a}_{v})_{v\in V}\big{)}\) is an FNN architecture, \(\boldsymbol{w}=(w_{e})_{e\in E}\in\mathbb{R}^{E}\) associates a _weight_\(w_{e}\) with every edge \(e\in E\), and \(\boldsymbol{b}=(b_{v})_{v\in V}\in\mathbb{R}^{V}\) associates a _bias_\(b_{v}\) with every node \(v\in V\). As for circuits, the sources of the dag are _input nodes_, and we denote them by \(X_{1},\ldots,X_{p}\). Sinks are _output nodes_, and we denote them by \(Y_{1},\ldots,Y_{q}\). We define the _order_\(|\mathfrak{F}|\), the _depth_\(\mathrm{dp}(\mathfrak{F})\), the _input dimension_, and the _output dimension_ of \(\mathfrak{F}\) in the same way as we did for circuits.
To define the semantics, let \(\mathfrak{A}=\big{(}V,E,(\mathfrak{a}_{v})_{v\in V}\big{)}\). For each node \(v\in V\), we define a function \(f_{\mathfrak{A},v}:\mathbb{R}^{p}\times\mathbb{R}^{E}\times\mathbb{R}^{V}\to \mathbb{R}\) inductively as follows. Let \(\boldsymbol{x}=(x_{1},\ldots,x_{p})\in\mathbb{R}^{p}\), \(\boldsymbol{w}=(w_{e})_{e\in E}\in\mathbb{R}^{E}\), and \(\boldsymbol{b}=(b_{v})_{v\in V}\in\mathbb{R}^{V}\). Then
\[f_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{w},\boldsymbol{b})\coloneqq \begin{cases}x_{i}&\text{if $v$ is the input node $X_{i}$},\\ \mathfrak{a}_{v}\big{(}b_{v}+\sum_{v^{\prime}\in N^{-}(v)}f_{\mathfrak{A},v^{ \prime}}(\boldsymbol{x},\boldsymbol{w},\boldsymbol{b})\cdot w_{v^{\prime}v} \big{)}&\text{if $v$ is not an input node}.\end{cases}\]
We define \(f_{\mathfrak{A}}:\mathbb{R}^{p}\times\mathbb{R}^{E}\times\mathbb{R}^{V}\to \mathbb{R}^{q}\) by
\[f_{\mathfrak{A}}(\boldsymbol{x},\boldsymbol{w},\boldsymbol{b})\coloneqq\big{(} f_{\mathfrak{A},Y_{1}}(\boldsymbol{x},\boldsymbol{w},\boldsymbol{b}),\ldots,f_{ \mathfrak{A},Y_{q}}(\boldsymbol{x},\boldsymbol{w},\boldsymbol{b})\big{)}.\]
For an FNN \(\mathfrak{F}=\big{(}V,E,(\mathfrak{a}_{v})_{v\in V},\mathbf{w},\mathbf{b}\big{)}\) with architecture \(\mathfrak{A}=\big{(}V,E,(\mathfrak{a}_{v})_{v\in V}\big{)}\) we define functions \(f_{\mathfrak{F},v}:\mathbb{R}^{p}\to\mathbb{R}\) for \(v\in V\) and \(f_{\mathfrak{F}}:\mathbb{R}^{p}\to\mathbb{R}^{q}\) by
\[f_{\mathfrak{F},v}(\mathbf{x}) =f_{\mathfrak{A},v}(\mathbf{x},\mathbf{w},\mathbf{b}),\] \[f_{\mathfrak{F}}(\mathbf{x}) =f_{\mathfrak{A}}(\mathbf{x},\mathbf{w},\mathbf{b}).\]
As for circuits, to simplify the notation we usually denote the functions \(f_{\mathfrak{A}}\) and \(f_{\mathfrak{F}}\) by \(\mathfrak{A}\) and \(\mathfrak{F}\), respectively.
_Remark 2.4_.: The reader may have noticed that we never use the activation function \(\mathfrak{a}_{v}\) or the bias \(b_{v}\) for input nodes \(v=X_{i}\). We only introduce them for notational convenience. _We may always assume that \(\mathfrak{a}_{v}\equiv 0\) and \(b_{v}=0\) for all input nodes \(v\)._
Typically, the weights \(w_{e}\) and biases \(b_{v}\) are learned from data. We are not concerned with the learning process here, but only with the functions computed by pre-trained models.
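To make the semantics concrete, here is a minimal Python sketch (our illustrative encoding; the node names, weights, and biases are hypothetical) that evaluates an FNN by the recursion for \(f_{\mathfrak{F},v}\), processing nodes in topological order:

```python
# Evaluating a tiny FNN following the recursive definition above.
def relu(x):
    return max(x, 0.0)

in_nbrs = {"X1": [], "X2": [], "h": ["X1", "X2"], "Y1": ["h"]}
weights = {("X1", "h"): 1.0, ("X2", "h"): -1.0, ("h", "Y1"): 2.0}
biases = {"h": 0.5, "Y1": 0.0}
act = {"h": relu, "Y1": relu}

def eval_fnn(x1, x2):
    val = {"X1": x1, "X2": x2}
    for v in ("h", "Y1"):                    # any topological order works
        s = biases[v] + sum(val[u] * weights[(u, v)] for u in in_nbrs[v])
        val[v] = act[v](s)
    return val["Y1"]

print(eval_fnn(1.0, 0.25))                   # relu(2 * relu(0.5 + 1.0 - 0.25)) = 2.5
```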
_Throughout this paper, we assume the activation function in neural networks to be Lipschitz continuous._ Our theorems can also be proved with weaker assumptions on the activation functions, but assuming Lipschitz continuity simplifies the proofs, and since all activation functions typically used in practice are Lipschitz continuous, there is no harm in making this assumption. Since linear functions are Lipschitz continuous and the concatenation of Lipschitz continuous functions is Lipschitz continuous as well, it follows that for all FNNs \(\mathfrak{F}\) the function \(f_{\mathfrak{F}}\) is Lipschitz continuous. A consequence of the Lipschitz continuity is that the output of an FNN can be linearly bounded in the input. For later reference, we state these facts as a lemma.
**Lemma 2.5**.: _Let \(\mathfrak{F}\) be an FNN of input dimension \(p\)._
1. _There is a Lipschitz constant_ \(\lambda=\lambda(\mathfrak{F})\in\mathbb{N}_{>0}\) _for_ \(\mathfrak{F}\) _such that for all_ \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{p}\)_,_ \[\big{\|}\mathfrak{F}(\mathbf{x})-\mathfrak{F}(\mathbf{x}^{\prime})\big{\|}_{\infty} \leq\lambda\,\big{\|}\mathbf{x}-\mathbf{x}^{\prime}\big{\|}_{\infty}\,.\]
2. _There is a_ \(\gamma=\gamma(\mathfrak{F})\in\mathbb{N}_{>0}\) _such that for all_ \(\mathbf{x}\in\mathbb{R}^{p}\)_,_ \[\|\mathfrak{F}(\mathbf{x})\|_{\infty}\leq\gamma\cdot\big{(}\,\big{\|}\mathbf{x}\big{\|} _{\infty}+1\big{)}.\]
Proof.: Assertion (1) is simply a consequence of the fact that concatenation of Lipschitz continuous functions is Lipschitz continuous. For (2), note that by (1) we have
\[\big{\|}\mathfrak{F}(\mathbf{x})\big{\|}_{\infty}\leq\lambda(\mathfrak{F})\,\big{\|} \mathbf{x}\big{\|}_{\infty}+\big{\|}\mathfrak{F}(\mathbf{0})\big{\|}_{\infty}\,.\]
We let \(\gamma\coloneqq\max\big{\{}\lambda(\mathfrak{F}),\big{\|}\mathfrak{F}(\mathbf{0}) \big{\|}_{\infty}\,\big{\}}\).
We often make further restrictions on the FNNs we consider. An FNN is _piecewise linear_ if all its activation functions are piecewise linear. An FNN is _rational piecewise linear_ if all weights and biases are dyadic rationals and all activation functions are rational piecewise linear. The relu function and the linearised sigmoid function (see
Example 2.1) are typical examples of rational piecewise linear activation functions. An FNN is _rpl approximable_ if all its activation functions are rpl approximable. The logistic function and the hyperbolic tangent function (see Example 2.2) are typical examples of rpl approximable activation functions.
It is a well known fact that FNNs can simulate threshold circuits.
**Lemma 2.6**.: _For every threshold circuit \(\mathfrak{C}\) of input dimension \(p\) there is an FNN \(\mathfrak{F}=(V,E,(\mathfrak{a}_{v})_{v\in V},(w_{e})_{e\in E},(b_{v})_{v\in V})\) of input dimension \(p\) such that \(|\mathfrak{F}|=O(|\mathfrak{C}|)\), \(\mathfrak{a}_{v}=\operatorname{relu}\) for all \(v\), \(w_{e}\in\{1,-1\}\) for all \(e\), \(b_{v}\in\mathbb{N}\) is bounded by the maximum threshold in \(\mathfrak{C}\) for all \(v\), and \(\mathfrak{C}(\boldsymbol{x})=\mathfrak{F}(\boldsymbol{x})\) for all \(\boldsymbol{x}\in\{0,1\}^{p}\)._
Proof.: We simulate \(\mathfrak{C}\) gatewise, noting that a Boolean negation \(\neg x\) can be expressed as \(\operatorname{relu}(1-x)\) and a threshold \(\sum x_{i}\geq t\) can be expressed as \(\operatorname{relu}(\sum x_{i}-t+1)-\operatorname{relu}(\sum x_{i}-t)\) for Boolean inputs \(x,x_{i}\).
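A quick numerical check (ours, not from the paper) of the two gadgets used in this proof, over Boolean inputs:

```python
# Negation and threshold gates via relu, as in the proof of Lemma 2.6.
from itertools import product

def relu(x):
    return max(x, 0)

for x in (0, 1):
    assert relu(1 - x) == 1 - x                          # Boolean negation

t = 2                                                    # a >= 2 threshold gate
for bits in product((0, 1), repeat=3):
    s = sum(bits)
    assert relu(s - t + 1) - relu(s - t) == (1 if s >= t else 0)
```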
### Relational Structures
A _vocabulary_ is a finite set \(\tau\) of relation symbols. Each relation symbol \(R\in\tau\) has an _arity_\(\operatorname{ar}(R)\in\mathbb{N}\). A _\(\tau\)-structure_\(A\) consists of a finite set \(V(A)\), the _universe_ or _vertex set_, and a relation \(R(A)\subseteq V(A)^{k}\) for every relation symbol \(R\in\tau\) of arity \(\operatorname{ar}(R)=k\). For a \(\tau\)-structure \(A\) and a subset \(\tau^{\prime}\subseteq\tau\), the _restriction of \(A\) to \(\tau^{\prime}\)_ is the \(\tau^{\prime}\)-structure \(A|_{\tau^{\prime}}\) with \(V(A|_{\tau^{\prime}})\coloneqq V(A)\) and \(R(A|_{\tau^{\prime}})\coloneqq R(A)\) for all \(R\in\tau^{\prime}\). The _order_ of a structure \(A\) is \(|A|\coloneqq|V(A)|\).
For example, a graph may be viewed as an \(\{E\}\)-structure \(G\), where \(E\) is a binary symbol, such that \(E(G)\) is symmetric and irreflexive. A pair \((G,\boldsymbol{\ell})\in\mathscr{GS}_{\ell}^{\mathrm{bool}}\), that is, a graph with a Boolean signal \(\boldsymbol{\ell}:V(G)\to\{0,1\}^{\ell}\), may be viewed as an \(\{E,P_{1},\ldots,P_{\ell}\}\)-structure \(G_{\boldsymbol{\ell}}\) with \(V(G_{\boldsymbol{\ell}})=V(G)\), \(E(G_{\boldsymbol{\ell}})=E(G)\), and \(P_{i}(G_{\boldsymbol{\ell}})=\{v\in V(G)\mid\boldsymbol{\ell}(v)_{i}=1\}\). We may think of the \(P_{i}\) as labels and hence refer to \(\{E,P_{1},\ldots,P_{\ell}\}\)-structures whose \(\{E\}\)-restriction is an undirected graph as _\(\ell\)-labelled graphs_. In the following, we do not distinguish between graphs with Boolean signals and the corresponding labelled graphs.
A _\(k\)-ary query_ on a class \(\mathscr{C}\) of structures is an equivariant mapping \(\mathscr{G}\) that associates with each structure \(A\in\mathscr{C}\) a mapping \(\mathscr{G}(A):V(A)^{k}\to\{0,1\}\). In this paper, we are mainly interested in \(0\)-ary (or _Boolean_) and unary queries on (labelled) graphs. We observe that a Boolean query on \(\ell\)-labelled graphs is an invariant mapping from \(\mathscr{GS}_{\ell}^{\mathrm{bool}}\) to \(\{0,1\}\) and a unary query is an equivariant signal transformation from \(\mathscr{GS}_{\ell}^{\mathrm{bool}}\) to \(\mathscr{GS}_{1}^{\mathrm{bool}}\).
## 3 First-Order Logic with Counting
Throughout this section, we fix a vocabulary \(\tau\). We introduce two types of variables, _vertex variables_ ranging over the vertex set of a structure, and _number variables_ ranging over \(\mathbb{N}\). We typically denote vertex variables by \(x\) and variants like \(x^{\prime},x_{1}\), number variables by \(y\) and variants, and we use \(z\) and variants to refer to either vertex or number variables.
We define the sets of \(\mathsf{FO}+\mathsf{C}\)_-formulas_ and \(\mathsf{FO}+\mathsf{C}\)_-terms_ of vocabulary \(\tau\) inductively as follows:
* All number variables and \(\mathsf{0},\mathsf{1}\) are \(\mathsf{FO}+\mathsf{C}\)-terms.
* For all \(\mathsf{FO}+\mathsf{C}\)-terms \(\theta,\theta^{\prime}\) the expressions \(\theta+\theta^{\prime}\) and \(\theta\cdot\theta^{\prime}\) are \(\mathsf{FO}+\mathsf{C}\)-terms.
* For all \(\mathsf{FO}+\mathsf{C}\)-terms \(\theta,\theta^{\prime}\) the expression \(\theta\leq\theta^{\prime}\) is an \(\mathsf{FO}+\mathsf{C}\)-formula.
* For all vertex variables \(x_{1},\ldots,x_{k}\) and all \(k\)-ary \(R\in\tau\) the expressions \(x_{1}=x_{2}\) and \(R(x_{1},\ldots,x_{k})\) are \(\mathsf{FO}+\mathsf{C}\)-formulas (_relational atoms_).
* For all \(\mathsf{FO}+\mathsf{C}\)-formulas \(\varphi,\psi\) the expressions \(\neg\varphi\) and \(\varphi\wedge\psi\) are \(\mathsf{FO}+\mathsf{C}\)-formulas.
* For all \(\mathsf{FO}+\mathsf{C}\)-formulas \(\varphi\), all \(k,\ell\in\mathbb{N}\) with \(k+\ell\geq 1\), all vertex variables \(x_{1},\ldots,x_{k}\), all number variables \(y_{1},\ldots,y_{\ell}\), and all \(\mathsf{FO}+\mathsf{C}\)-terms \(\theta_{1},\ldots,\theta_{\ell}\), \[\#(x_{1},\ldots,x_{k},y_{1}<\theta_{1},\ldots,y_{\ell}<\theta_{\ell}).\varphi\] is an \(\mathsf{FO}+\mathsf{C}\)-term (a _counting term_).
A _\(\tau\)-interpretation_ is a pair \((A,\alpha)\), where \(A\) is a \(\tau\)-structure and \(\alpha\) is an _assignment over \(A\)_, that is, a mapping from the set of all variables to \(V(A)\cup\mathbb{N}\) such that \(\alpha(x)\in V(A)\) for every vertex variable \(x\) and \(\alpha(y)\in\mathbb{N}\) for every number variable \(y\). For a tuple \(\boldsymbol{z}=(z_{1},\ldots,z_{k})\) of distinct variables, and a tuple \(\boldsymbol{c}=(c_{1},\ldots,c_{k})\in(V(A)\cup\mathbb{N})^{k}\) such that \(c_{i}\in V(A)\) if \(z_{i}\) is a vertex variable and \(c_{i}\in\mathbb{N}\) if \(z_{i}\) is a number variable, we let \(\alpha\frac{\boldsymbol{c}}{\boldsymbol{z}}\) be the assignment with \(\alpha\frac{\boldsymbol{c}}{\boldsymbol{z}}(z_{i})=c_{i}\) and \(\alpha\frac{\boldsymbol{c}}{\boldsymbol{z}}(z)=\alpha(z)\) for all \(z\not\in\{z_{1},\ldots,z_{k}\}\). We inductively define a value \([\![\theta]\!]^{(A,\alpha)}\in\mathbb{N}\) for each \(\mathsf{FO}+\mathsf{C}\)-term \(\theta\) and a Boolean value \([\![\varphi]\!]^{(A,\alpha)}\in\{0,1\}\) for each \(\mathsf{FO}+\mathsf{C}\)-formula \(\varphi\).

* We let \([\![y]\!]^{(A,\alpha)}\coloneqq\alpha(y)\) and \([\![0]\!]^{(A,\alpha)}\coloneqq 0\), \([\![1]\!]^{(A,\alpha)}\coloneqq 1\).

* We let \([\![\theta+\theta^{\prime}]\!]^{(A,\alpha)}\coloneqq[\![\theta]\!]^{(A,\alpha)}+[\![ \theta^{\prime}]\!]^{(A,\alpha)}\) and \([\![\theta\cdot\theta^{\prime}]\!]^{(A,\alpha)}\coloneqq[\![\theta]\!]^{(A,\alpha)}\cdot[ \![\theta^{\prime}]\!]^{(A,\alpha)}\).

* We let \([\![\theta\leq\theta^{\prime}]\!]^{(A,\alpha)}=1\) if and only if \([\![\theta]\!]^{(A,\alpha)}\leq[\![\theta^{\prime}]\!]^{(A,\alpha)}\).

* We let \([\![x_{1}=x_{2}]\!]^{(A,\alpha)}=1\) if and only if \(\alpha(x_{1})=\alpha(x_{2})\), and \([\![R(x_{1},\ldots,x_{k})]\!]^{(A,\alpha)}=1\) if and only if \(\big{(}\alpha(x_{1}),\ldots,\alpha(x_{k})\big{)}\in R(A)\).

* We let \([\![\neg\varphi]\!]^{(A,\alpha)}\coloneqq 1-[\![\varphi]\!]^{(A,\alpha)}\) and \([\![\varphi\wedge\psi]\!]^{(A,\alpha)}\coloneqq[\![\varphi]\!]^{(A,\alpha)}\cdot[\![\psi]\!]^{(A,\alpha)}\).

* We let \[[\![\#(x_{1},\ldots,x_{k},y_{1}<\theta_{1},\ldots,y_{\ell}<\theta_{\ell}). \varphi]\!]^{(A,\alpha)}\] be the number of tuples \((a_{1},\ldots,a_{k},b_{1},\ldots,b_{\ell})\in V(A)^{k}\times\mathbb{N}^{\ell}\) such that

* \(b_{i}<[\![\theta_{i}]\!]^{(A,\alpha\frac{(a_{1},\ldots,a_{k},b_{1},\ldots,b_{i-1})} {(x_{1},\ldots,x_{k},y_{1},\ldots,y_{i-1})})}\) for all \(i\in[\ell]\);

* \([\![\varphi]\!]^{(A,\alpha\frac{(a_{1},\ldots,a_{k},b_{1},\ldots,b_{\ell})}{(x_{1}, \ldots,x_{k},y_{1},\ldots,y_{\ell})})}=1\).
Note that we allow _adaptive bounds_: the bound on the variable \(y_{i}\) may depend on the values for all previous variables. While it can be shown that this does not increase the expressive power of the plain logic, the adaptive bounds do add power to an extension of the logic with function variables (see Section 3.4).
For \(\mathsf{FO}+\mathsf{C}\)-formulas \(\varphi\), instead of \([\![\varphi]\!]^{(A,\alpha)}=1\) we also write \((A,\alpha)\vDash\varphi\).
An \(\mathsf{FO}+\mathsf{C}\)_-expression_ is either an \(\mathsf{FO}+\mathsf{C}\)-term or an \(\mathsf{FO}+\mathsf{C}\)-formula. The set \(\operatorname{free}(\xi)\) of _free variables_ of an \(\mathsf{FO}+\mathsf{C}\)-expression \(\xi\) is defined inductively in the obvious way, where for a counting term we let
\[\operatorname{free}\big{(}\#(x_{1},\ldots,x_{k},y_{1}<\theta_{1},\ldots,y_{\ell}<\theta_{\ell}).\varphi\big{)}\coloneqq\\ \big{(}\operatorname{free}(\varphi)\setminus\{x_{1},\ldots,x_{k},y_{1},\ldots,y_{\ell}\}\big{)}\cup\bigcup_{i=1}^{\ell}\big{(}\operatorname{free}(\theta_{i})\setminus\{x_{1},\ldots,x_{k},y_{1},\ldots,y_{i-1}\}\big{)}.\]
A _closed term_ is a term without free variables, and a _sentence_ is a formula without free variables.
For an expression \(\xi\), the notation \(\xi(z_{1},\ldots,z_{k})\) stipulates that \(\operatorname{free}(\xi)\subseteq\{z_{1},\ldots,z_{k}\}\). It is easy to see that the value \([\![\xi]\!]^{(A,\alpha)}\) only depends on the interpretations \(c_{i}\coloneqq\alpha(z_{i})\) of the free variables. Thus we may avoid explicit reference to the assignment \(\alpha\) and write \([\![\xi]\!]^{A}\,(c_{1},\ldots,c_{k})\) instead of \([\![\xi]\!]^{(A,\alpha)}\). If \(\xi\) is a closed term or sentence, we just write \([\![\xi]\!]^{A}\). For formulas \(\varphi(z_{1},\ldots,z_{k})\), we also write \(A\vDash\varphi(c_{1},\ldots,c_{k})\) instead of \([\![\varphi]\!]^{A}\,(c_{1},\ldots,c_{k})=1\).
Observe that every \(\mathsf{FO}+\mathsf{C}\)-formula \(\varphi(x_{1},\ldots,x_{k})\) of vocabulary \(\tau\) defines a \(k\)-ary query on the class of \(\tau\)-structures, mapping a structure \(A\) to the set of all \((a_{1},\ldots,a_{k})\in A^{k}\) such that \(A\vDash\varphi(a_{1},\ldots,a_{k})\).
We defined the logic \(\mathsf{FO}+\mathsf{C}\) with a minimal syntax, avoiding unnecessary operators. However, we can use other standard arithmetical and logical operators as abbreviations:
* For \(n\geq 2\), we can use \(n\) as an abbreviation for the corresponding sum of \(1\)s.
* We use \(\mathsf{ord}\) as an abbreviation for the term \(\#x.x=x\). Then \(\llbracket\mathsf{ord}\rrbracket^{A}=|A|\) for all structures \(A\), that is, \(\mathsf{ord}\) defines the order of a structure.
* We can express the relations \(=,\geq,<,>\) on \(\mathbb{N}\) using Boolean combinations and \(\leq\).
* We can express Boolean connectives like \(\vee\) or \(\to\) using \(\neg\) and \(\wedge\).
* For vertex variables \(x\), we can express existential quantification \(\exists x.\varphi\) as \(1\leq\#x.\varphi\). Then we can express universal quantification \(\forall x.\varphi\) using \(\exists x\) and \(\neg\) in the usual way. In particular, this means that we can view first-order logic \(\mathsf{FO}\) as a fragment of \(\mathsf{FO}+\mathsf{C}\).
* For number variables \(y\), we can similarly express bounded quantification \(\exists y<\theta.\varphi\) and \(\forall y<\theta.\varphi\).
* In counting terms, we do not have to use strict inequalities to bound number variables. For example, we write \(\#(x,y\leq\theta).\varphi\) to abbreviate \(\#(x,y<\theta+1).\varphi\).
* We can express truncated subtraction \(\mathbin{\dot{-}}\), minimum and maximum of two numbers, and integer division: \[y\mathbin{\dot{-}}y^{\prime} \text{ abbreviates }\#(y^{\prime\prime}<y).(y^{\prime}\leq y^{\prime\prime}),\] \[\mathsf{min}(y,y^{\prime}) \text{ abbreviates }\#(y^{\prime\prime}<y).y^{\prime\prime}<y^{\prime},\] \[\mathsf{max}(y,y^{\prime}) \text{ abbreviates }\#(y^{\prime\prime}<y+y^{\prime}).(y^{\prime\prime}<y\lor y^{\prime\prime}<y^{\prime}),\] \[\mathsf{div}(y,y^{\prime}) \text{ abbreviates }\#(y^{\prime\prime}\leq y).y^{\prime}\cdot y^{\prime\prime}<y.\] Note that for all structures \(A\) and \(b,b^{\prime}\in\mathbb{N}\) we have \[\llbracket\mathsf{div}\rrbracket^{A}(b,b^{\prime})=\begin{cases}\left\lceil\frac{b}{b^{\prime}}\right\rceil&\text{if }b^{\prime}\neq 0,\\ b+1&\text{if }b^{\prime}=0\text{ and }b\neq 0,\\ 0&\text{if }b^{\prime}=b=0.\end{cases}\]
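To see these abbreviations at work, the following Python sketch (purely illustrative; the function names are ours, and plain Python loops stand in for bounded counting) evaluates each abbreviation literally as a count:

```python
def trunc_sub(y, yp):   # y - y' (truncated)  =  #(y'' < y).(y' <= y'')
    return sum(1 for ypp in range(y) if yp <= ypp)

def fo_min(y, yp):      # min(y, y')  =  #(y'' < y).(y'' < y')
    return sum(1 for ypp in range(y) if ypp < yp)

def fo_max(y, yp):      # max(y, y')  =  #(y'' < y + y').(y'' < y or y'' < y')
    return sum(1 for ypp in range(y + yp) if ypp < y or ypp < yp)

def fo_div(y, yp):      # div(y, y')  =  #(y'' <= y).(y' * y'' < y)
    return sum(1 for ypp in range(y + 1) if yp * ypp < y)

assert trunc_sub(7, 3) == 4 and trunc_sub(3, 7) == 0
assert fo_min(3, 5) == 3 and fo_max(3, 5) == 5
assert fo_div(5, 2) == 3               # = ceil(5/2), as in the case analysis
assert fo_div(5, 0) == 6 and fo_div(0, 0) == 0
```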
_Remark 3.1_.: There are quite a few different versions of first-order logic with counting in the literature. The logics that only involve counting quantifiers \(\exists^{\geq n}\) for constant \(n\) are strictly weaker than our \(\mathsf{FO+C}\), and so are logics with modular counting quantifiers.
Counting logics with quantification over numbers were first suggested, quite informally, by Immerman [16]. The 2-sorted framework was later formalised by Grädel and Otto [11]. Essentially, our \(\mathsf{FO+C}\) corresponds to Kuske and Schweikardt's [20] \(\mathsf{FOCN}(\{\mathbb{P}_{\leq}\})\), with one important difference: we allow counting terms also over number variables. This makes no difference over ordered structures, but it makes the logic stronger over unordered structures (at least we conjecture that it does; with current techniques this can probably only be proved modulo complexity-theoretic assumptions). Other differences, such as that [20] use the integers for the numerical part, whereas we use the non-negative integers, are inessential. Importantly, the two logics and other first-order logics with counting, such as first-order logic with a majority quantifier, are equivalent over ordered arithmetic structures and thus all capture the complexity class uniform \(\mathsf{TC}^{0}\), as will be discussed in the next section.
A simple lemma that we will frequently use states that all \(\mathsf{FO+C}\)-terms are polynomially bounded. At this point, the reader may safely ignore the reference to function variables in the assertion of the lemma; we will only introduce them in Section 3.3. We just mention them here to avoid confusion in later applications of the lemma.
**Lemma 3.2**.: _For every \(\mathsf{FO+C}\)-term \(\theta(x_{1},\ldots,x_{k},y_{1},\ldots,y_{\ell})\) without function variables there is a polynomial \(\pi(X,Y)\) such that for all structures \(A\), all \(a_{1},\ldots,a_{k}\in V(A)\), and all \(b_{1},\ldots,b_{\ell}\in\mathbb{N}\) it holds that_

\[\llbracket\theta\rrbracket^{A}(a_{1},\ldots,a_{k},b_{1},\ldots,b_{\ell})\leq\pi\Big{(}|A|,\max\big{\{}b_{i}\bigm{|}i\in[\ell]\big{\}}\Big{)}.\]
Proof.: A straightforward induction on \(\theta\).
### Descriptive Complexity
We review some results relating the logic \(\mathsf{FO}+\mathsf{C}\) to the complexity class \(\mathsf{TC}^{0}\). In the descriptive complexity theory of "small" complexity classes (say, within \(\mathsf{PTIME}\)), we need to expand structures by a linear order of the vertex set (and possibly additional arithmetical relations). We introduce a distinguished binary relation symbol \(\leqslant\), which we assume to be not contained in the usual vocabularies \(\tau\). Note that \(\leqslant\) is distinct from \(\leq\), which we use for the standard linear order on \(\mathbb{N}\). We denote the interpretation of \(\leqslant\) in a structure \(A\) by \(\leqslant^{A}\) instead of \(\leqslant(A)\), and we use the symbol in infix notation.
An _ordered \(\tau\)-structure_ is a \(\tau\cup\{\leqslant\}\)-structure \(A\) where \(\leqslant^{A}\) is a linear order of the vertex set \(V(A)\). It will be convenient to have the following notation for ordered structures \(A\). For \(0\leq i<n\coloneqq|A|\), we let \(\langle i\rangle_{A}\) be the \((i+1)\)st element of the linear order \(\leqslant^{A}\), that is, we have \(V(A)=\{\langle i\rangle_{A}\mid 0\leq i<n\}\) with \(\langle 0\rangle_{A}\leqslant^{A}\langle 1\rangle_{A}\leqslant^{A}\cdots\leqslant^{A}\langle n-1\rangle_{A}\). We omit the subscript \({}_{A}\) if \(A\) is clear from the context. The reason that ordered structures are important in descriptive complexity is that they have simple canonical representations as bitstrings. Let \(s(A)\in\{0,1\}^{*}\) denote the string representing an ordered structure \(A\). Then for every class \(\mathscr{C}\) of ordered \(\tau\)-structures, we let \(L(\mathscr{C})=\{s(A)\mid A\in\mathscr{C}\}\).
**Theorem 3.3** (Barrington, Immerman, and Straubing [4]).: _Let \(\mathscr{C}\) be a class of ordered \(\tau\)-structures. Then \(L(\mathscr{C})\) is in uniform \(\mathsf{TC}^{0}\) if and only if there is an \(\mathsf{FO}+\mathsf{C}\)-sentence \(\psi\) of vocabulary \(\tau\cup\{\leqslant\}\) such that for all ordered \(\tau\)-structures \(A\) it holds that \(A\in\mathscr{C}\iff A\models\psi\)._
We need to rephrase this theorem for queries. While the theorem is really about Boolean queries, we state a version for unary queries. For a class \(\mathscr{C}\) of \(\tau\)-structures, we let \(\mathscr{C}_{\leqslant}\) be the class of all ordered \(\tau\)-structures \(A\) with \(A|_{\tau}\in\mathscr{C}\), and we let \(L_{\leqslant}(\mathscr{C})\coloneqq L(\mathscr{C}_{\leqslant})\). Extending this to unary queries, for a query \(\mathscr{Q}\) on a class \(\mathscr{C}\) of \(\tau\)-structures, we let
\[L_{\leqslant}(\mathscr{Q})\coloneqq\left\{s(A)\#\operatorname{bin}(i)\bigm{|}A\in\mathscr{C}_{\leqslant},\,\mathscr{Q}(A|_{\tau})(\langle i\rangle_{A})=1\right\}.\]
We say that a formula \(\varphi(x)\) of vocabulary \(\tau\cup\{\leqslant\}\) is _order-invariant_ if for all ordered \(\tau\)-structures \(A,A^{\prime}\) with \(A|_{\tau}=A^{\prime}|_{\tau}\) and all \(a\in V(A)\) it holds that \(A\vDash\varphi(a)\iff A^{\prime}\vDash\varphi(a)\). We say that a unary query \(\mathscr{Q}\) on a class of \(\tau\)-structures is definable in _order-invariant_ \(\mathsf{FO}+\mathsf{C}\) if there is an order-invariant \(\mathsf{FO}+\mathsf{C}\)-formula \(\varphi(x)\) of vocabulary \(\tau\cup\{\leqslant\}\) such that for all \(A\in\mathscr{C}_{\leqslant}\) and all \(a\in V(A)\) it holds that \(A\vDash\varphi(a)\iff\mathscr{Q}(A|_{\tau})(a)=1\).
**Corollary 3.4**.: _Let \(\mathscr{Q}\) be a unary query. Then \(L_{\leqslant}(\mathscr{Q})\) is in uniform \(\mathsf{TC}^{0}\) if and only if \(\mathscr{Q}\) is definable in order-invariant \(\mathsf{FO}+\mathsf{C}\)._
### Non-Uniformity and Built-in Relations
The usual way to capture non-uniformity in descriptive complexity is by adding _built-in relations_. The classical way is to only consider structures with universe \(\{0,\ldots,n-1\}\), for some \(n\in\mathbb{N}\), and then add relation symbols \(S\) to the language that have a fixed interpretation \(S^{(n)}\subseteq\{0,\ldots,n-1\}^{k}\) in all structures with universe \(\{0,\ldots,n-1\}\). Slightly more abstractly, we can consider ordered structures and transfer the definition of \(S^{(n)}\) to all linearly ordered structures of order \(n\) via the natural mapping \(i\mapsto\langle i\rangle\).
We take a slightly different approach to built-in relations here, which allows us to also use them over structures that are not necessarily ordered. A _built-in numerical relation_ is simply a relation over \(\mathbb{N}\), that is, a subset \(N\subseteq\mathbb{N}^{k}\) for some \(k\geq 0\), the _arity_ of \(N\). We use the same letter \(N\) to denote both the relation \(N\subseteq\mathbb{N}^{k}\) and a \(k\)-ary relation symbol representing it in the logic. In other words, the relation symbol \(N\) will be interpreted by the same relation \(N\subseteq\mathbb{N}^{k}\) in all structures. We extend the logic \(\mathsf{FO}\!+\!\mathsf{C}\) by new atomic formulas \(N(y_{1},\ldots,y_{k})\) for all \(k\)-ary numerical relations \(N\) and number variables \(y_{1},\ldots,y_{k}\), with the obvious semantics. By \(\mathsf{FO}\!+\!\mathsf{C}_{\mathrm{nu}}\) we denote the extension of \(\mathsf{FO}\!+\!\mathsf{C}\) to formulas using arbitrary built-in numerical relations.2
Footnote 2: Think of the index ‘nu’ as an abbreviation of either ‘numerical’ or ‘non-uniform’.
Then it follows easily from Theorem 3.3 that \(\mathsf{FO}\!+\!\mathsf{C}_{\mathrm{nu}}\) captures (non-uniform) \(\mathsf{TC}^{0}\). (It can also be proved directly; in fact, it is much easier to prove than Theorem 3.3.)
**Corollary 3.5**.: _Let \(\mathscr{C}\) be a class of ordered \(\tau\)-structures. Then \(L(\mathscr{C})\) is in \(\mathsf{TC}^{0}\) if and only if there is an \(\mathsf{FO}\!+\!\mathsf{C}_{\mathrm{nu}}\)-sentence \(\psi\) of vocabulary \(\tau\cup\{\leqslant\}\) such that for all ordered \(\tau\)-structures \(A\) it holds that \(A\in\mathscr{C}\iff A\vDash\psi\)._
We also state the version of this result for unary queries.
**Corollary 3.6**.: _Let \(\mathscr{Q}\) be a unary query. Then \(L_{\leqslant}(\mathscr{Q})\) is in \(\mathsf{TC}^{0}\) if and only if \(\mathscr{Q}\) is definable in order-invariant \(\mathsf{FO}\!+\!\mathsf{C}_{\mathrm{nu}}\)._
### Types and Second-Order Variables
The counting extension of first-order logic is 2-sorted, and it adheres to a strict type discipline. For the extension we are going to introduce next, we need to make this formal. We assign a _type_ to each variable: a vertex variable has type \(\mathtt{v}\), and a number variable has type \(\mathtt{n}\). A \(k\)-tuple \((z_{1},\ldots,z_{k})\) of variables has a type \((t_{1},\ldots,t_{k})\in\{\mathtt{v},\mathtt{n}\}^{k}\), where \(t_{i}\) is the type of \(z_{i}\). We denote the type of a tuple \(\boldsymbol{z}\) by \(\mathrm{tp}(\boldsymbol{z})\). For a structure \(A\) and a type \(\boldsymbol{t}=(t_{1},\ldots,t_{k})\in\{\mathtt{v},\mathtt{n}\}^{k}\) we let \(A^{\boldsymbol{t}}\) be the set of all \((c_{1},\ldots,c_{k})\in(V(A)\cup\mathbb{N})^{k}\) such that \(c_{i}\in V(A)\) if \(t_{i}=\mathtt{v}\) and \(c_{i}\in\mathbb{N}\) if \(t_{i}=\mathtt{n}\).
Now we extend our logic by relation variables (denoted by uppercase letters \(X,Y\)) and function variables (denoted by \(U,V\)). Each relation variable \(X\) has a type \(\mathrm{tp}(X)\) of the form \(\{\boldsymbol{t}\}\), and each function variable \(U\) has a type \(\mathrm{tp}(U)\) of the form \(\boldsymbol{t}\to\mathtt{n}\), for some \(\boldsymbol{t}\in\{\mathtt{v},\mathtt{n}\}^{k}\). We extend the logic \(\mathsf{FO}\!+\!\mathsf{C}\) by allowing additional atomic formulas \(X(\xi_{1},\ldots,\xi_{k})\) and terms \(U(\xi_{1},\ldots,\xi_{k})\), where \(X\) is a relation variable of type \(\{(t_{1},\ldots,t_{k})\}\) for some tuple \((t_{1},\ldots,t_{k})\in\{\mathtt{v},\mathtt{n}\}^{k}\), \(U\) a function variable of type \((t_{1},\ldots,t_{k})\to\mathtt{n}\), and for all \(i\in[k]\), if \(t_{i}=\mathtt{v}\) then \(\xi_{i}\) is a vertex variable and if \(t_{i}=\mathtt{n}\) then \(\xi_{i}\) is a term.
To define the semantics, let \(A\) be a structure and \(\omega\) an assignment over \(A\). Then \(\omega\) maps each relation variable \(X\) of type \(\{\boldsymbol{t}\}\) to a subset \(\omega(X)\subseteq A^{\boldsymbol{t}}\) and each function variable \(U\) of type \(\boldsymbol{t}\to\mathtt{n}\) to a function \(\omega(U)\colon A^{\boldsymbol{t}}\to\mathbb{N}\). Moreover, for a tuple \(\boldsymbol{\xi}=(\xi_{1},\ldots,\xi_{k})\) of vertex variables and terms we let \(\llbracket\boldsymbol{\xi}\rrbracket^{(A,\omega)}=(c_{1},\ldots,c_{k})\), where \(c_{i}=\omega(\xi_{i})\) if \(\xi_{i}\) is a vertex variable and \(c_{i}=\llbracket\xi_{i}\rrbracket^{(A,\omega)}\) if \(\xi_{i}\) is a term. We let

\[\llbracket X(\boldsymbol{\xi})\rrbracket^{(A,\omega)}\coloneqq\begin{cases}1&\text{if }\llbracket\boldsymbol{\xi}\rrbracket^{(A,\omega)}\in\omega(X),\\ 0&\text{otherwise},\end{cases}\]

\[\llbracket U(\boldsymbol{\xi})\rrbracket^{(A,\omega)}\coloneqq\omega(U)\big{(}\llbracket\boldsymbol{\xi}\rrbracket^{(A,\omega)}\big{)}.\]
Observe that a function variable \(U\) of type \(\varnothing\to\texttt{n}\) is essentially just a number variable, if we identify a \(0\)-ary function with the value it takes on the empty tuple \(\varnothing\). It is still useful sometimes to use \(0\)-ary function variables. We usually write \(\omega(U)\) instead of \(\omega(U)(\varnothing)\) to denote their value. We call a relation variable _purely numerical_ if it is of type \(\{\texttt{n}^{k}\}\), for some \(k\geq 0\). Similarly, we call a function variable _purely numerical_ if it is of type \(\texttt{n}^{k}\to\texttt{n}\). To distinguish them from the "second-order" relation and function variables, we refer to our original "first-order" vertex variables and number variables as _individual variables_.
When we list variables of an expression in parentheses, as in \(\xi(\mathbf{z})\), we only list the free individual variables, but not the free relation or function variables. Thus \(\xi(\mathbf{z})\) stipulates that all free individual variables of \(\xi\) occur in \(\mathbf{z}\). However, \(\xi\) may have free relation variables and free function variables that are not listed in \(\mathbf{z}\). For a structure \(A\), an assignment \(\omega\), and a tuple \(\mathbf{c}\in A^{\operatorname{tp}(\mathbf{z})}\), we write \(\llbracket\xi\rrbracket^{(A,\omega)}\,(\mathbf{c})\) instead of \(\llbracket\xi\rrbracket^{(A,\omega\frac{\mathbf{c}}{\mathbf{z}})}\). If \(\xi\) is a formula, we may also write \((A,\omega)\vDash\xi(\mathbf{c})\).
The role of relation variables and function variables is twofold. First, we will use them to specify "inputs" for formulas, in particular formulas defining numerical functions. (In the next sections we will see how to use relation and function variables to specify natural and rational numbers.) And second, we may just use relation and function variables as placeholders for formulas and terms that we may later substitute for them.
Note that Lemma 3.2 no longer holds if the term \(\theta\) contains function variables, because these variables may be interpreted by functions of super-polynomial growth. Nevertheless, even in the presence of relation and function variables, our logic remains "first-order", because we do not allow quantification over these variables.
### Arithmetic in FO+C
In this section, we will show that arithmetic on bitwise representations of integers is expressible in \(\mathsf{FO+C}\). Almost none of the formulas we shall define make any reference to a structure \(A\); they receive their input in the form of purely numerical relation variables and function variables and only refer to the numerical part, which is the same for all structures. We call an \(\mathsf{FO+C}\)-expression \(\xi\) _arithmetical_ if it contains no vertex variables. Note that if \(\xi\) is an arithmetical expression, then for all structures \(A,A^{\prime}\) and all assignments \(\omega,\omega^{\prime}\) over \(A,A^{\prime}\), respectively, such that \(\omega(y)=\omega^{\prime}(y)\) for all number variables \(y\) and \(\omega(Z)=\omega^{\prime}(Z)\) for all purely numerical relation or function variables \(Z\), we have \(\llbracket\xi\rrbracket^{(A,\omega)}=\llbracket\xi\rrbracket^{(A^{\prime},\omega^{\prime})}\). Thus there is no need to mention \(A\) at all; we may write \(\llbracket\xi\rrbracket^{\omega}\) instead of \(\llbracket\xi\rrbracket^{(A,\omega)}\). In fact, we can even use the notation \(\llbracket\xi\rrbracket^{\omega}\) if \(\omega\) is only a partial assignment that assigns values only to number variables and purely numerical relation and function variables. We call such a partial assignment a _numerical_ assignment. As usual, if \(\xi=\xi(y_{1},\ldots,y_{k})\) has all free individual variables among \(y_{1},\ldots,y_{k}\), for \(b_{1},\ldots,b_{k}\in\mathbb{N}\) we may write \(\llbracket\xi\rrbracket^{\omega}\,(b_{1},\ldots,b_{k})\) instead of \(\llbracket\xi\rrbracket^{\omega\frac{b_{1},\ldots,b_{k}}{y_{1},\ldots,y_{k}}}\). If, in addition, \(\xi\) has no free relation or function variables, we may just write \(\llbracket\xi\rrbracket\,(b_{1},\ldots,b_{k})\).
It is worth mentioning that arithmetical \(\mathsf{FO}+\mathsf{C}\)-formulas are formulas of _bounded arithmetic_ (see, for example, [13]) augmented by bounded counting terms.
The following well-known lemma is the foundation for expressing bitwise arithmetic in \(\mathsf{FO}+\mathsf{C}\). The proof for the logic with counting is easy. There is also a (significantly deeper) version of the lemma for first-order logic without counting, which goes back to Bennett [5] (see [17, Section 1.2.1] for a proof).
**Lemma 3.7**.: _There is an arithmetical \(\mathsf{FO}+\mathsf{C}\)-formula \(\mathsf{bit}(y,y^{\prime})\) such that for all \(i,n\in\mathbb{N}\),_
\[\llbracket\mathsf{bit}\rrbracket\,(i,n)=\mathrm{Bit}(i,n).\]
Proof.: Clearly, there is a formula \(\mathsf{pow2}(y)\) expressing that \(y\) is a power of \(2\); it simply states that every divisor of \(y\) greater than \(1\) is divisible by \(2\). Then the formula
\[\mathsf{exp2}(y,y^{\prime})\coloneqq\mathsf{pow2}(y^{\prime})\wedge y=\#y^{ \prime\prime}<y^{\prime}.\mathsf{pow2}(y^{\prime\prime})\]
expresses that \(y^{\prime}=2^{y}\).
To express the bit predicate, we define the auxiliary formula
\[\mathsf{pow2bit}(y,y^{\prime})\coloneqq\mathsf{pow2}(y)\wedge\exists y_{1}< y^{\prime}.\exists y_{2}<y.y^{\prime}=2y_{1}y+y+y_{2},\]
expressing that \(y\) is \(2^{i}\) for some \(i\) and the \(i\)th bit of \(y^{\prime}\) is \(1\). We let
\[\mathsf{bit}(y,y^{\prime})\coloneqq\exists y^{\prime\prime}\leq y^{\prime}.\big{(}\mathsf{exp2}(y,y^{\prime\prime})\wedge\mathsf{pow2bit}(y^{\prime\prime},y^{\prime})\big{)}.\qed\]
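The defining formulas translate directly into Python if bounded quantifiers are replaced by bounded loops. The following sketch is only an illustration (the bit-trick in `pow2` is a shortcut for the divisor-based definition above); it checks the constructed predicate against the standard bit predicate on small inputs:

```python
def pow2(y):
    # semantic shortcut for pow2(y); the formula instead says that
    # every divisor of y greater than 1 is divisible by 2
    return y > 0 and y & (y - 1) == 0

def pow2bit(y, yp):
    # y = 2^i for some i, and the i-th bit of y' is 1:
    # exists y1 < y'. exists y2 < y. y' = 2*y1*y + y + y2
    return pow2(y) and any(yp == 2 * y1 * y + y + y2
                           for y1 in range(yp) for y2 in range(y))

def bit(i, n):
    # exists y'' <= y'. (y'' = 2^i and pow2bit(y'', y'))
    return any(ypp == 2 ** i and pow2bit(ypp, n) for ypp in range(n + 1))

assert all(bit(i, n) == bool((n >> i) & 1)
           for n in range(32) for i in range(6))
```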
**Corollary 3.8**.: _There is an arithmetical \(\mathsf{FO}+\mathsf{C}\)-term \(\mathsf{len}(y)\) such that for all \(n\in\mathbb{N}\),_
\[\llbracket\mathsf{len}\rrbracket\,(n)=|\,\mathrm{bin}(n)|.\]
Proof.: Observe that for \(n\geq 1\),
\[|\,\mathrm{bin}(n)|=1+\max\big{\{}i\,\big{|}\,\mathrm{Bit}(i,n)=1\big{\}}.\]
Noting that \(|\,\mathrm{bin}(n)|\leq n\) for all \(n\geq 1\), the following term defines the length for all \(n\geq 1\):
\[1+\#z<y.\exists y^{\prime}\leq y.\Big{(}z<y^{\prime}\wedge\mathsf{bit}(y^{ \prime},y)\wedge\forall y^{\prime\prime}\leq y\big{(}y^{\prime}<y^{\prime \prime}\to\neg\mathsf{bit}(y^{\prime\prime},y)\big{)}\Big{)}.\]
Note that this term also gives us the correct result for \(n=0\), simply because the formula \(\exists y^{\prime}\leq y.\big{(}z<y^{\prime}\wedge\ldots\big{)}\) is false for all \(z\) if \(y=0\).
In the proof of the previous lemma we used a trick that is worthwhile to be made explicit. Suppose we have a formula \(\varphi(y,\boldsymbol{z})\) that defines a function \(\boldsymbol{z}\mapsto y\), that is, for all structures \(A\) and \(\boldsymbol{c}\in A^{\mathrm{tp}(\boldsymbol{z})}\) there is a unique \(b=f_{A}(\boldsymbol{c})\in\mathbb{N}\) such that \(A\vDash\varphi(b,\boldsymbol{c})\). Often, we then want a term expressing the same function. In general, there is no such term, because the function may grow too fast. (Recall that all terms are polynomially bounded by Lemma 3.2, but the function \(f_{A}\) may grow exponentially fast.) Suppose, however, that we have a term \(\theta(\boldsymbol{z})\) that yields an upper bound for this function, that
is, \(f_{A}(\mathbf{c})<\llbracket\theta\rrbracket^{A}(\mathbf{c})\) for all \(A\) and \(\mathbf{c}\in A^{\operatorname{tp}(\mathbf{z})}\). Then we obtain a term \(\eta(\mathbf{z})\) such that \(\llbracket\eta\rrbracket^{A}(\mathbf{c})=f_{A}(\mathbf{c})\) as follows: we let
\[\eta(\mathbf{z})\coloneqq\#y^{\prime}<\theta(\mathbf{z}).\exists y<\theta(\mathbf{z}). \big{(}y^{\prime}<y\wedge\varphi(y,\mathbf{z})\big{)}.\]
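This counting trick is easy to replay in Python (illustrative only; `phi` stands for an arbitrary predicate with a unique witness below the bound): the witness is recovered by counting the values lying below it.

```python
def term_from_formula(phi, bound):
    # eta = #y' < bound . exists y < bound . (y' < y and phi(y)):
    # counting the y' below the unique witness yields the witness itself
    return sum(1 for yp in range(bound)
               if any(yp < y and phi(y) for y in range(bound)))

# the unique y < 100 with y*y == 49 is 7
assert term_from_formula(lambda y: y * y == 49, 100) == 7
```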
It is our goal for the rest of this section to express bitwise arithmetic in \(\mathsf{FO}+\mathsf{C}\). We will use relation variables to encode binary representations of natural numbers. Let \(Y\) be a relation variable of type \(\{\mathtt{n}\}\), and let \(\omega\) be a numerical assignment. We think of \(Y\) as representing the number whose \(i\)th bit is \(1\) if and only if \(i\in\omega(Y)\). But as \(\omega(Y)\) may be infinite, this representation is not yet well defined. We also need to specify a bound on the number of bits we consider, which we can do by a \(0\)-ary function variable \(U\). Then the pair \((Y,U)\) represents the number
\[\left\langle\!\left\langle Y,U\right\rangle\!\right\rangle^{\omega}\coloneqq \sum_{i\in\omega(Y),i<\omega(U)}2^{i} \tag{3.A}\]
We can also specify numbers by formulas and terms. We let \(\widehat{y}\) be a distinguished number variable (that we fix for the rest of this paper). Let \(\chi\) be a formula and \(\theta\) a term. We usually assume that \(\widehat{y}\) occurs freely in \(\chi\) and does not occur in \(\theta\), but neither is necessary. Let \(A\) be a structure and \(\omega\) an assignment over \(A\). Recall that \(\omega\frac{i}{\widehat{y}}\) denotes the assignment that maps \(\widehat{y}\) to \(i\) and coincides with \(\omega\) on all other variables. We let
\[\langle\!\langle\chi,\theta\rangle\!\rangle^{(A,\omega)}\coloneqq\sum_{\begin{subarray}{c}i\in\mathbb{N}\text{ with }(A,\omega\frac{i}{\widehat{y}})\vDash\chi,\\ i<\llbracket\theta\rrbracket^{(A,\omega)}\end{subarray}}2^{i}. \tag{3.B}\]
If \(\chi\) and \(\theta\) are arithmetical, we may write \(\langle\!\langle\chi,\theta\rangle\!\rangle^{\omega}\) instead of \(\langle\!\langle\chi,\theta\rangle\!\rangle^{(A,\omega)}\).
The following Lemmas 3.9, 3.10, and 3.14 follow easily from the facts that the arithmetic operations are in uniform \(\mathsf{TC}^{0}\) (Lemma 2.3) and \(\mathsf{FO}+\mathsf{C}\) captures uniform \(\mathsf{TC}^{0}\) (Theorem 3.3). However, we find it helpful to sketch at least some of the proofs, in particular the proof of Lemma 3.10 for iterated addition. While researchers in the circuit-complexity community seem to be well-aware of the fact that iterated addition is in _uniform_\(\mathsf{TC}^{0}\), all references we know only show that it is in non-uniform \(\mathsf{TC}^{0}\), so we think it is worthwhile to give a proof of this lemma. Our proof is purely logical, circumventing circuit complexity altogether, so it may be of independent interest.
**Lemma 3.9**.: _Let \(Y_{1},Y_{2}\) be relation variables of type \(\{\mathsf{n}\}\), and let \(U_{1},U_{2}\) be function variables of type \(\varnothing\to\mathsf{n}\)._
1. _There are arithmetical_ \(\mathsf{FO}+\mathsf{C}\)_-formulas_ \(\mathsf{add}\)_,_ \(\mathsf{sub}\) _and arithmetical_ \(\mathsf{FO}+\mathsf{C}\)_-terms_ \(\mathsf{bd}\)_-_\(\mathsf{add}\)_,_ \(\mathsf{bd}\)_-_\(\mathsf{sub}\) _such that for all structures_ \(A\) _and assignments_ \(\omega\) _over_ \(A\)_,_ \[\langle\!\langle\mathsf{add},\mathsf{bd}\text{-}\mathsf{add}\rangle\!\rangle^{\omega}=\langle\!\langle Y_{1},U_{1}\rangle\!\rangle^{\omega}+\langle\!\langle Y_{2},U_{2}\rangle\!\rangle^{\omega},\] \[\langle\!\langle\mathsf{sub},\mathsf{bd}\text{-}\mathsf{sub}\rangle\!\rangle^{\omega}=\langle\!\langle Y_{1},U_{1}\rangle\!\rangle^{\omega}\mathbin{\dot{-}}\langle\!\langle Y_{2},U_{2}\rangle\!\rangle^{\omega}.\]
2. _There is an arithmetical_ \(\mathsf{FO}+\mathsf{C}\)_-formula_ \(\mathsf{leq}\) _such that for all structures_ \(A\) _and assignments_ \(\omega\) _over_ \(A\)_,_ \[\llbracket\mathsf{leq}\rrbracket^{\omega}=1\iff\langle\!\langle Y_{1},U_{1}\rangle\!\rangle^{\omega}\leq\langle\!\langle Y_{2},U_{2}\rangle\!\rangle^{\omega}.\]
Proof.: The key observation is that we can easily define the carry bits. Suppose that we want to add numbers \(m,n\). Then for \(i\geq 0\), the \(i\)th carry is \(1\) if and only if there is a \(j\leq i\) such that \(\operatorname{Bit}(j,m)=\operatorname{Bit}(j,n)=1\), and for all \(k\) with \(j<k\leq i\), either \(\operatorname{Bit}(k,m)=1\) or \(\operatorname{Bit}(k,n)=1\).
We can use a similar observation for subtraction.
Less-than-or-equal-to can easily be expressed directly.
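To make the proof idea concrete, here is a small Python sketch (ours, not the formulas themselves) that evaluates the representation (3.A) and computes the bits of a sum via the carry observation:

```python
def value(Y, U):
    # <<Y, U>> = sum of 2^i over i in Y with i < U
    return sum(1 << i for i in Y if i < U)

def carry(i, Y1, Y2):
    # carry into position i: some position j < i generates a carry
    # and every position strictly between j and i propagates it
    return any(j in Y1 and j in Y2 and
               all(k in Y1 or k in Y2 for k in range(j + 1, i))
               for j in range(i))

def sum_bit(i, Y1, Y2):
    # i-th bit of <<Y1, U>> + <<Y2, U>>
    return ((i in Y1) + (i in Y2) + carry(i, Y1, Y2)) % 2 == 1

Y1, Y2, U = {0, 2, 3}, {1, 3}, 4          # 13 + 10 = 23
assert value(Y1, U) + value(Y2, U) == 23
assert all(sum_bit(i, Y1, Y2) == bool((23 >> i) & 1) for i in range(6))
```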
To define families of numbers, we can use relation and function variables of higher arity, treating the additional entries as parameters. For a type \(\boldsymbol{t}\in\{\mathtt{v},\mathtt{n}\}^{k}\), let \(Y\) be a relation variable of type \(\{\mathtt{n}\boldsymbol{t}\}\), and let \(U\) be a function variable of type \(\boldsymbol{t}\to\mathtt{n}\). Then for every structure \(A\), assignment \(\alpha\), and tuple \(\boldsymbol{c}\in A^{\boldsymbol{t}}\) we let
\[\left\langle\!\left\langle Y,U\right\rangle\!\right\rangle^{(A,\alpha)}( \boldsymbol{c})=\sum_{\begin{subarray}{c}(j,\boldsymbol{c})\in\alpha(Y),\\ j<\alpha(U)(\boldsymbol{c})\end{subarray}}2^{j}. \tag{3.C}\]
We can slightly extend this definition to a setting where \(U\) is a function variable of type \(\boldsymbol{t}^{\prime}\to\mathtt{n}\) for some subtuple \(\boldsymbol{t}^{\prime}\) of \(\boldsymbol{t}\). For example, in the following lemma we have \(\boldsymbol{t}=\mathtt{n}\) and \(\boldsymbol{t}^{\prime}=\varnothing\).
**Lemma 3.10**.: _Let \(Y\) be a relation variable of type \(\{(\mathtt{n},\mathtt{n})\}\), and let \(U\) be a function variable of type \(\varnothing\to\mathtt{n}\). Then there is an arithmetical \(\mathsf{FO}+\mathsf{C}\)-formula \(\mathsf{s}\)-\(\mathsf{itadd}\) and an arithmetical \(\mathsf{FO}+\mathsf{C}\)-term \(\mathsf{bd}\)-\(\mathsf{s}\)-\(\mathsf{itadd}\)3 such that for all numerical assignments \(\alpha\) we have_
Footnote 3: The ’s’ in \(\mathsf{s}\)-\(\mathsf{itadd}\) indicates that this is a simple version of iterated addition.
\[\langle\!\langle\mathsf{s}\text{-}\mathsf{itadd},\mathsf{bd}\text{-}\mathsf{s}\text{-}\mathsf{itadd}\rangle\!\rangle^{\alpha}=\sum_{i<\alpha(U)}\langle\!\langle Y,U\rangle\!\rangle^{\alpha}(i).\]
The proof of this lemma requires some preparation. Our first step will be to fix an encoding of sequences by natural numbers. We first encode a sequence \(\boldsymbol{i}=(i_{0},\ldots,i_{k-1})\in\mathbb{N}^{*}\) by the string
\[s_{1}(\boldsymbol{i})\coloneqq\#\operatorname{bin}(i_{0})\#\operatorname{bin}(i_{1})\#\ldots\#\operatorname{bin}(i_{k-1})\]
over the alphabet \(\{0,1,\#\}\). Then we replace every \(0\) in \(s_{1}(\boldsymbol{i})\) by \(01\), every \(1\) by \(10\), and every \(\#\) by \(11\) to obtain a string \(s_{2}(\boldsymbol{i})=s_{\ell-1}\ldots s_{0}\) over the alphabet \(\{0,1\}\). We read \(s_{2}(\boldsymbol{i})\) as a binary number and let
\[\ulcorner\boldsymbol{i}\urcorner\coloneqq\begin{cases}\operatorname{bin}^{-1}\big{(}s_{2}(\boldsymbol{i})\big{)}=\sum_{i=0}^{\ell-1}s_{i}2^{i}&\text{if }\boldsymbol{i}\neq\varnothing,\\ 0&\text{if }\boldsymbol{i}=\varnothing.\end{cases}\]
**Example 3.11**.: Consider the sequence \(\boldsymbol{i}=(2,5,0)\). We have \(s_{1}(\boldsymbol{i})=\#10\#101\#0\) and \(s_{2}(\boldsymbol{i})=111001111001101101\). This yields
\[\ulcorner\boldsymbol{i}\urcorner=\operatorname{bin}^{-1}(111001111001101101)=237165.\]
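The encoding is easily replayed in Python; the following fragment (illustrative only) recomputes the example:

```python
def encode(seq):
    # the map i -> |i|: form #bin(i_0)#bin(i_1)...#bin(i_{k-1}),
    # recode 0 -> 01, 1 -> 10, # -> 11, and read the result in binary
    if not seq:
        return 0
    s1 = "".join("#" + format(n, "b") for n in seq)
    s2 = "".join({"0": "01", "1": "10", "#": "11"}[c] for c in s1)
    return int(s2, 2)

assert encode([2, 5, 0]) == 0b111001111001101101 == 237165
```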
It is easy to see that the mapping \(\ulcorner\cdot\urcorner\colon\mathbb{N}^{*}\to\mathbb{N}\) is injective, but not bijective. Observe that for \(\boldsymbol{i}=(i_{0},\ldots,i_{k-1})\in\mathbb{N}^{*}\) we have
\[\operatorname{bsize}\big{(}\ulcorner\boldsymbol{i}\urcorner\big{)}=\sum_{j=0}^{k-1}2\big{(}\operatorname{bsize}(i_{j})+1\big{)}\leq 4\sum_{j=0}^{k-1}\operatorname{bsize}(i_{j}) \tag{3.D}\]
and thus
\[\ulcorner\boldsymbol{i}\urcorner<2^{4\sum_{j=0}^{k-1}\operatorname{bsize}(i_{j})}. \tag{3.E}\]
**Lemma 3.12**.:
1. _There is an arithmetical_ \(\mathsf{FO}+\mathsf{C}\)_-formula_ \(\mathsf{seq}(y)\) _such that for all_ \(n\in\mathbb{N}\) _we have_ \[\llbracket\mathsf{seq}\rrbracket\,(n)=1\iff n=\ulcorner\boldsymbol{i}\urcorner\] _for some_ \(\boldsymbol{i}\in\mathbb{N}^{*}\)_._
2. _There is an arithmetical_ \(\mathsf{FO}+\mathsf{C}\)_-term_ \(\mathsf{seqlen}(y)\) _such that for all_ \(n\in\mathbb{N}\) _we have_ \[\llbracket\mathsf{seqlen}\rrbracket\,(n)=\begin{cases}k&\text{if }n=\ulcorner(i_{0},\ldots,i_{k-1})\urcorner\text{ for some }k,i_{0},\ldots,i_{k-1}\in\mathbb{N},\\ 0&\text{otherwise}.\end{cases}\]
3. _There is an arithmetical_ \(\mathsf{FO}+\mathsf{C}\)_-term_ \(\mathsf{entry}(y,y^{\prime})\) _such that for all_ \(j,n\in\mathbb{N}\) _we have_ \[\llbracket\mathsf{entry}\rrbracket\,(j,n)=\begin{cases}i&\text{if }n=\ulcorner(i_{0},\ldots,i_{k-1})\urcorner,\;j<k,\;i=i_{j}\text{ for some }k,i_{0},\ldots,i_{k-1}\in\mathbb{N},\\ n&\text{otherwise}.\end{cases}\]
Proof.: Let \(S=\{s_{2}(\boldsymbol{i})\mid\boldsymbol{i}\in\mathbb{N}^{*}\}\). Let \(\operatorname{bin}(n)=:\boldsymbol{s}=s_{\ell-1}\ldots s_{0}\). We want to detect whether \(\boldsymbol{s}\in S\). As a special case, we note that if \(\ell=0\) and thus \(\boldsymbol{s}=\varnothing\), we have \(\boldsymbol{s}=s_{2}(\varnothing)\). In the following, we assume that \(\ell>0\). Then if \(\ell\) is odd, we have \(\boldsymbol{s}\not\in S\). Furthermore, if there is a \(p<\frac{\ell}{2}\) such that \(s_{2p}=0\) and \(s_{2p+1}=0\), again we have \(\boldsymbol{s}\not\in S\), because \(\boldsymbol{s}\) is not obtained from a string \(\boldsymbol{s}^{\prime}\in\{0,1,\#\}^{*}\) by replacing \(0\)s by \(01\), \(1\)s by \(10\), and \(\#\)s by \(11\). Otherwise, we let \(\boldsymbol{s}^{\prime}=s_{\frac{\ell}{2}-1}^{\prime}\ldots s_{0}^{\prime}\in\{0,1,\#\}^{*}\) be the corresponding string. Then \(\boldsymbol{s}^{\prime}=s_{1}(\boldsymbol{i})\) for some \(\boldsymbol{i}\in\mathbb{N}^{*}\) if and only if \(\boldsymbol{s}^{\prime}\) satisfies the following conditions:
* \(s_{0}^{\prime}\neq\#\) and \(s_{\ell/2-1}^{\prime}=\#\);
* for all \(p<\frac{\ell}{2}\), if \(s_{p}^{\prime}=\#\) and \(s_{p-1}^{\prime}=0\), then either \(p=1\) or \(s_{p-2}^{\prime}=\#\).
We can easily translate the conditions to conditions on the string \(\boldsymbol{s}\) and, using the bit predicate, to conditions on \(n\). As the bit predicate is definable in \(\mathsf{FO}+\mathsf{C}\) (by Lemma 3.7), we can express these conditions by an arithmetical \(\mathsf{FO}+\mathsf{C}\)-formula \(\mathsf{seq}(y)\).
To prove (2), we observe that if \(n=\ulcorner\boldsymbol{i}\urcorner\) for some \(\boldsymbol{i}\in\mathbb{N}^{*}\), then the length of the sequence \(\boldsymbol{i}\) is the number of \(\#\)s in the string \(s_{1}(\boldsymbol{i})\), or equivalently, the number of \(p<\frac{\ell}{2}\) such that \(\operatorname{Bit}(2p,n)=1\) and \(\operatorname{Bit}(2p+1,n)=1\). Using the formula \(\mathsf{seq}(y)\) and the bit predicate, we can easily express this by a term \(\mathsf{seqlen}(y)\).
To prove (3), we first write an arithmetical formula \(\mathsf{isEntry}(y,y^{\prime},y^{\prime\prime})\) such that
\[\llbracket\mathsf{isEntry}\rrbracket\,(j,n,i)=1\iff n=\ulcorner(i_{0},\ldots,i_{k-1})\urcorner,\;j<k,\;i=i_{j}\text{ for some }k,i_{0},\ldots,i_{k-1}\in\mathbb{N}.\]
Once we have this formula, we let
\[\mathsf{entry}(y,y^{\prime})=\mathsf{min}\Big{(}y^{\prime},\#z<y^{\prime}.\forall y^{\prime\prime}<y^{\prime}.\big{(}\mathsf{isEntry}(y,y^{\prime},y^{\prime\prime})\to z<y^{\prime\prime}\big{)}\Big{)}.\]
To define \(\mathsf{isEntry}(y,y^{\prime},y^{\prime\prime})\), observe that for \(\boldsymbol{i}=(i_{0},\ldots,i_{k-1})\in\mathbb{N}^{*}\), the \(j\)th entry \(i_{j}\) is located between the \(j\)th and \((j+1)\)st '\(\#\)' in the string \(s_{1}(\boldsymbol{i})\) and thus between the \(j\)th and \((j+1)\)st occurrence of '\(11\)' at positions \(2p,2p+1\) in the string \(s_{2}(\boldsymbol{i})=\operatorname{bin}\big{(}\ulcorner\boldsymbol{i}\urcorner\big{)}\). These positions are definable from \(j\) and \(n=\ulcorner\boldsymbol{i}\urcorner\) using the bit predicate, and the bits of \(i_{j}\) can be read off from the bits of \(n\) strictly between them, since each symbol of \(s_{1}(\boldsymbol{i})\) corresponds to a pair of bits of \(s_{2}(\boldsymbol{i})\), with \(10\) standing for \(1\) and \(01\) standing for \(0\). This yields the formula \(\mathsf{isEntry}(y,y^{\prime},y^{\prime\prime})\) and completes the proof.

Proof of Lemma 3.10.: Let \(\alpha\) be a numerical assignment. We let \(m^{(0)}\coloneqq m\coloneqq\alpha(U)\) and, for \(i<m^{(0)}\), \(n^{(0)}_{i}\coloneqq n_{i}\coloneqq\langle\!\langle Y,U\rangle\!\rangle^{\alpha}(i)\). Furthermore, we let \(\ell^{(0)}\coloneqq\alpha(U)=m\); then \(n^{(0)}_{i}<2^{\ell^{(0)}}\) for all \(i\).
For \(j<\ell^{(0)}\), let \(n^{(0)}_{i,j}\coloneqq\operatorname{Bit}(j,n^{(0)}_{i})\) and
\[s^{(0)}_{j}\coloneqq\sum_{i=0}^{m^{(0)}-1}n^{(0)}_{i,j}=\big{|}\big{\{}i\; \big{|}\;n^{(0)}_{i,j}=1\big{\}}\big{|}. \tag{3.G}\]
Then
\[\sum_{i=0}^{m^{(0)}-1}n^{(0)}_{i}=\sum_{i=0}^{m^{(0)}-1}\sum_{j=0}^{\ell^{(0)}- 1}2^{j}n^{(0)}_{i,j}=\sum_{j=0}^{\ell^{(0)}-1}2^{j}\sum_{i=0}^{m^{(0)}-1}n^{(0) }_{i,j}=\sum_{j=0}^{\ell^{(0)}-1}2^{j}s^{(0)}_{j}.\]
Let \(m^{(1)}\coloneqq\ell^{(0)}\) and \(n^{(1)}_{i}=2^{i}s^{(0)}_{i}\) for \(i<\ell^{(0)}\). Then
\[\sum_{i=0}^{m-1}n_{i}=\sum_{i=0}^{m^{(0)}-1}n^{(0)}_{i}=\sum_{i=0}^{m^{(1)}-1} n^{(1)}_{i}.\]
Moreover,
\[n^{(1)}_{i}\leq 2^{\ell^{(0)}-1}s^{(0)}_{i}=2^{\ell^{(0)}-1+\log s^{(0)}_{i}}<2^{\ell^{(0)}+\lfloor\log s^{(0)}_{i}\rfloor}.\]

Let \(p^{(1)}\coloneqq\lfloor\log m^{(0)}\rfloor\) and \(\ell^{(1)}\coloneqq\ell^{(0)}+p^{(1)}\). Noting that \(s^{(0)}_{i}\leq m^{(0)}\) for all \(i\), we thus have
\[n^{(1)}_{i}<2^{\ell^{(1)}}\]
for all \(i\). Finally, note that
\[\operatorname{Bit}(j,n^{(1)}_{i})\neq 0\implies i\leq j\leq i+p^{(1)}\]
This completes the base step of the construction. For the inductive step, suppose that we have defined \(m^{(k)},\ell^{(k)},p^{(k)}\in\mathbb{N}_{>0}\) and \(n^{(k)}_{i}\in\mathbb{N}\) for \(i\in\{0,\dots,m^{(k)}-1\}\) such that
\[\sum_{i=0}^{m-1}n_{i}=\sum_{i=0}^{m^{(k)}-1}n^{(k)}_{i}\]
and for all \(i\):
* \(n^{(k)}_{i}<2^{\ell^{(k)}}\);
* \(\operatorname{Bit}(j,n^{(k)}_{i})\neq 0\implies i\leq j\leq i+p^{(k)}\).
For \(j<\ell^{(k)}\), let \(n^{(k)}_{i,j}\coloneqq\operatorname{Bit}(j,n^{(k)}_{i})\) and
\[s^{(k)}_{j}\coloneqq\sum_{i=0}^{m^{(k)}-1}n^{(k)}_{i,j}=\sum_{i=\max\{0,j-p^{ (k)}\}}^{j}n^{(k)}_{i,j}.\]
Then
\[\sum_{i=0}^{m^{(k)}-1}n^{(k)}_{i}=\sum_{j=0}^{\ell^{(k)}-1}2^{j}s^{(k)}_{j}.\]
Let \(m^{(k+1)}\coloneqq\ell^{(k)}\) and \(n_{i}^{(k+1)}\coloneqq 2^{i}s_{i}^{(k)}\) for \(i<\ell^{(k)}\). Then
\[\sum_{i=0}^{m-1}n_{i}=\sum_{i=0}^{m^{(k)}-1}n_{i}^{(k)}=\sum_{i=0}^{m^{(k+1)}-1}n _{i}^{(k+1)}.\]
Moreover,
\[n_{i}^{(k+1)}\leq 2^{\ell^{(k)}-1}s_{i}^{(k)}=2^{\ell^{(k)}-1+\log s_{i}^{(k)}}< 2^{\ell^{(k)}+\left\lfloor\log s_{i}^{(k)}\right\rfloor}.\]
Note that \(s_{i}^{(k)}\leq p^{(k)}+1\). Let \(p^{(k+1)}\coloneqq\left\lfloor\log(p^{(k)}+1)\right\rfloor\) and \(\ell^{(k+1)}\coloneqq\ell^{(k)}+p^{(k+1)}\). Then
\[n_{i}^{(k+1)}<2^{\ell^{(k+1)}}\]
and for all \(j\),
\[\operatorname{Bit}(j,n_{i}^{(k+1)})\neq 0\implies i\leq j\leq i+p^{(k+1)}.\]
Observe that if \(p^{(k)}=1\) then \(p^{(k^{\prime})}=1\) for all \(k^{\prime}>k\). Let \(k^{*}\) be the least \(k\) such that \(p^{(k)}=1\). It is easy to see that
\[\begin{array}{rccc}&p^{(k^{*}-1)}&=&2,\\ 3&\leq&p^{(k^{*}-2)}&\leq&6,\\ 7&\leq&p^{(k^{*}-3)}&\leq&126,\\ 127&\leq&p^{(k)}&&\text{for $k<k^{*}-3$}.\end{array} \tag{3.H}\]
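It may help to see how fast the \(p^{(k)}\) decay; the following Python sketch (illustrative only, assuming \(m\geq 2\) so that the recursion terminates) computes the sequence \(p^{(1)},p^{(2)},\ldots\) for a given \(m\):

```python
import math

def p_sequence(m):
    # p^(1) = floor(log m); p^(k+1) = floor(log(p^(k) + 1)); stop at 1
    ps = [math.floor(math.log2(m))]
    while ps[-1] != 1:
        ps.append(math.floor(math.log2(ps[-1] + 1)))
    return ps

assert p_sequence(10**9) == [29, 4, 2, 1]          # here k* = 4
assert p_sequence(2**1000) == [1000, 9, 3, 2, 1]   # here k* = 5
```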
_Claim 1._
1. \(\sum_{i=2}^{k^{*}}ip^{(i)}\leq 5\log\log m\);
2. \(\sum_{i=1}^{k^{*}}\operatorname{bsize}(p^{(i)})\leq 5\log\log m\).
Proof.: By induction on \(k=k^{*},k^{*}-1,\ldots,2\) we prove
\[\sum_{i=k}^{k^{*}}i\cdot p^{(i)}\leq 2kp^{(k)} \tag{3.I}\]
As base case, we need to check (3.I) for \(k\in\{k^{*},k^{*}-1,k^{*}-2,k^{*}-3\}\). Using (3.H), this is straightforward. For example, if \(k=k^{*}-2\) and \(p^{(k)}=3\) we have
\[\sum_{i=k}^{k^{*}}ip^{(i)}=3(k^{*}-2)+2(k^{*}-1)+k^{*}=6k^{*}-8\leq 6k^{*}-4=2kp ^{(k)}.\]
Or if \(k=k^{*}-3\) and \(p^{(k)}=15\) we have \(p^{(k+1)}=\left\lfloor\log(p^{(k)}+1)\right\rfloor=4\) and thus
\[\sum_{i=k}^{k^{*}}ip^{(i)}=15(k^{*}-3)+4(k^{*}-2)+2(k^{*}-1)+k^{*}=22k^{*}-55 \leq 30k^{*}-45=2kp^{(k)}.\]
The inequality \(22k^{*}-55\leq 30k^{*}-45\) holds because if \(2\leq k\leq k^{*}-3\) then \(k^{*}\geq 5\), which implies \(8k^{*}+10\geq 0\).
For the inductive step \(k+1\mapsto k\), where \(2\leq k<k^{*}-3\), we argue as follows:
\[\begin{split}\sum_{i=k}^{k^{*}}i\cdot p^{(i)}&=k\cdot p^{(k)}+\sum_{i=k+1}^{k^{*}}i\cdot p^{(i)}\\ &\leq k\cdot p^{(k)}+2(k+1)p^{(k+1)}\qquad\text{by the induction hypothesis}\\ &\leq k\cdot p^{(k)}+2(k+1)\log(p^{(k)}+1)\\ &\leq k\cdot p^{(k)}+3k\log(p^{(k)}+1)\qquad\text{because }k\geq 2.\end{split}\]
Since by (3.H) we have \(p^{(k)}\geq 127\) for \(k<k^{*}-3\), we have \(p^{(k)}\geq 3\log(p^{(k)}+1)\). Inequality (3.I) follows.
For \(k=2\), this yields
\[\sum_{i=2}^{k^{*}}i\cdot p^{(i)}\leq 4p^{(2)}\leq 4\log(p^{(1)}+1)\leq 4\log( \log m+1)\leq 5\log\log m.\]
To prove (2), note that \(\text{bsize}(p^{(k^{*})})=\text{bsize}(1)=1\) and \(\text{bsize}(p^{(k)})=\left\lceil\log(p^{(k)}+1)\right\rceil\leq p^{(k+1)}+1\) for \(k<k^{*}\). Thus (2) holds if \(k^{*}=1\). If \(k^{*}\geq 2\) we have
\[\sum_{i=1}^{k^{*}}\text{bsize}(p^{(i)})=1+\sum_{i=2}^{k^{*}}(p^{(i)}+1)=k^{*}+ \sum_{i=2}^{k^{*}}p^{(i)}\leq k^{*}+1+\frac{1}{2}\sum_{i=2}^{k^{*}-1}ip^{(i)} \leq\sum_{i=2}^{k^{*}}ip^{(i)},\]
and (2) follows from (1).
_Claim 2._ There is an arithmetical \(\mathsf{FO}+\mathsf{C}\)-term \(\mathsf{pseq}(y)\) such that
\[\llbracket\mathsf{pseq}\rrbracket^{\alpha}(m)=\ulcorner(p^{(1)},\ldots,p^{(k^{*})})\urcorner.\]

Proof.: By (3.E) and Claim 1(2), we have \(\ulcorner(p^{(1)},\ldots,p^{(k^{*})})\urcorner<2^{20\log\log m}\leq m\) for sufficiently large \(m\). Moreover, the sequence \((p^{(1)},\ldots,p^{(k^{*})})\) is determined by \(m\) via \(p^{(1)}=\lfloor\log m^{(0)}\rfloor\) and \(p^{(i+1)}=\lfloor\log(p^{(i)}+1)\rfloor\). Hence, using the bit predicate and Lemma 3.12, we can write an arithmetical formula that defines \(\ulcorner(p^{(1)},\ldots,p^{(k^{*})})\urcorner\), and the counting trick described after Corollary 3.8 turns it into the desired term \(\mathsf{pseq}(y)\).
To compute \(s_{j}^{(k^{*})}\), we need to know \(s_{j}^{(k^{*}-1)},\ldots,s_{j-p^{(k^{*})}}^{(k^{*}-1)}\). To compute these numbers, we need to know \(s_{j}^{(k^{*}-2)},\ldots,s_{j-p^{(k^{*})}-p^{(k^{*}-1)}}^{(k^{*}-2)}\), et cetera. Thus if we want to compute \(s_{j}^{(k^{*})}\) starting from values \(s_{j^{\prime}}^{(k)}\), we need to know \(s_{j^{\prime}}^{(k)}\) for
\[j-\sum_{i=k+1}^{k^{*}}p^{(i)}\leq j^{\prime}\leq j.\]
For \(2\leq k<k^{*}\), we let
\[\boldsymbol{s}_{jk}\coloneqq(s_{j-\sum_{i=k+1}^{k^{*}}p^{(i)}}^{(k)},\ldots,s _{j}^{(k)}),\]
and we let \(\boldsymbol{s}_{jk^{*}}\coloneqq(s_{j}^{(k^{*})})\). Concatenating these sequences, we let
\[\boldsymbol{s}_{j}\coloneqq\boldsymbol{s}_{j2}\boldsymbol{s}_{j3}\ldots \boldsymbol{s}_{jk^{*}}.\]
_Claim 5._ There is an arithmetical \(\mathsf{FO}+\mathsf{C}\)-formula \(\mathsf{all}\mathsf{-s}(y,z)\) such that for all \(j\in\{0,\ldots,\ell^{(k^{*})}-1\}\) and \(t\in\mathbb{N}\) we have
\[\llbracket\mathsf{all\text{-}s}\rrbracket^{\alpha}\,(j,t)=1\iff t=\ulcorner\boldsymbol{s}_{j}\urcorner,\]
again assuming that \(m\) is sufficiently large. This proves (3.K).
We let
\[\mathsf{skstar}(y,y^{\prime})\coloneqq\exists z<U.\Big{(}\mathsf{all\text{-}s}(y,z)\wedge y^{\prime}=\mathsf{entry}\big{(}\mathsf{seqlen}(z)-1,z\big{)}\Big{)},\]
where the formula \(\mathsf{all\text{-}s}(y,z)\) is from Claim 5 and the terms \(\mathsf{entry}\) and \(\mathsf{seqlen}\) are from Lemma 3.12. Thus \(\mathsf{skstar}(y,y^{\prime})\) expresses that \(y^{\prime}=s^{(k^{*})}_{y}\), since \(s^{(k^{*})}_{y}\) is the last entry of the sequence \(\boldsymbol{s}_{y}\).
It remains to compute \(\sum_{i=0}^{m^{(k^{*})}-1}n_{i}^{(k^{*}+1)}\), where \(n_{i}^{(k^{*}+1)}=2^{i}s_{i}^{(k^{*})}\). Since \(s_{i}^{(k^{*})}\leq 2\), we have \(\operatorname{Bit}(j,n_{i}^{(k^{*}+1)})=0\) unless \(j\in\{i,i+1\}\). We split the sum into the even and the odd entries:
\[\sum_{i=0}^{m^{(k^{*})}-1}n_{i}^{(k^{*}+1)}=\underbrace{\sum_{i=0}^{\lceil m^{(k^{*})}/2\rceil-1}n_{2i}^{(k^{*}+1)}}_{=:n_{1}^{*}}+\underbrace{\sum_{i=0}^{\lfloor m^{(k^{*})}/2\rfloor-1}n_{2i+1}^{(k^{*}+1)}}_{=:n_{2}^{*}}.\]
Since the entries in the two partial sums have no non-zero bits in common, it is easy to define the two partial sums (of course, using Claim 6). Then we can apply Lemma 3.9 to define \(n_{1}^{*}+n_{2}^{*}\).
**Lemma 3.14**.: _Let \(Y_{1},Y_{2}\) be relation variables of type \(\{\mathsf{n}\}\), and let \(U_{1},U_{2}\) be function variables of type \(\varnothing\to\mathsf{n}\). Then there are arithmetical \(\mathsf{FO+C}\)-formulas \(\mathsf{mul}\), \(\mathsf{div}\), and \(\mathsf{FO+C}\)-terms \(\mathsf{bd\text{-}mul}\), \(\mathsf{bd\text{-}div}\) such that for all numerical assignments \(\omega\),_
\[\langle\!\langle\mathsf{mul},\mathsf{bd}\text{-}\mathsf{mul}\rangle\!\rangle^{\omega}=\langle\!\langle Y_{1},U_{1}\rangle\!\rangle^{\omega}\cdot\langle\!\langle Y_{2},U_{2}\rangle\!\rangle^{\omega},\] \[\langle\!\langle\mathsf{div},\mathsf{bd}\text{-}\mathsf{div}\rangle\!\rangle^{\omega}=\left\lfloor\frac{\langle\!\langle Y_{1},U_{1}\rangle\!\rangle^{\omega}}{\langle\!\langle Y_{2},U_{2}\rangle\!\rangle^{\omega}}\right\rfloor\quad\text{if }\langle\!\langle Y_{2},U_{2}\rangle\!\rangle^{\omega}\neq 0.\]
Proof.: The expressibility of multiplication follows easily from the expressibility of iterated addition (Lemma 3.10). Division is significantly more difficult; we refer the reader to [15].
We need an extension of Lemma 3.10 in which the family of numbers is no longer indexed by numbers, but by arbitrary tuples of vertices and numbers, and in which the bounds on the bitsize of the numbers in the family are not uniform (in Lemma 3.10 we used the \(0\)-ary function variable \(U\) to provide a bound on the bitsize of all numbers in our family). Before presenting this extension, we consider a version of iterated addition, as well as of taking the maximum and minimum of a family of numbers, where the numbers are given directly as values of a function instead of in the binary representation that we considered in the previous lemmas.
**Lemma 3.15**.: _Let \(X\) be a relation variable of type \(\{\mathsf{v}^{k}\mathsf{n}^{\ell}\}\), and let \(U,V\) be function variables of types \(\mathsf{v}^{k}\mathsf{n}^{\ell}\to\mathsf{n}\), \(\mathsf{v}^{k}\to\mathsf{n}\), respectively._
1. _There is an_ \(\mathsf{FO+C}\)_-term_ \(\mathsf{u}\text{-}\mathsf{itadd}\) _such that for all structures_ \(A\) _and assignments_ \(\omega\) _over_ \(A\)_,_ \[\llbracket\mathsf{u}\text{-}\mathsf{itadd}\rrbracket^{(A,\omega)}=\sum_{(\mathbf{a},\mathbf{b})}\omega(U)(\mathbf{a},\mathbf{b}),\]
_where the sum ranges over all \((\mathbf{a},\mathbf{b})\in\omega(X)\) such that \(\mathbf{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) with \(b_{i}<\omega(V)(\mathbf{a})\) for all \(i\in[\ell]\)._
2. _There are_ \(\mathsf{FO}+\mathsf{C}\)_-terms_ \(\mathsf{u}\text{-}\mathsf{max}\) _and_ \(\mathsf{u}\text{-}\mathsf{min}\) _such that for all structures_ \(A\) _and assignments_ \(\omega\) _over_ \(A\)_,_ \[\llbracket\mathsf{u}\text{-}\mathsf{max}\rrbracket^{(A,\omega)}=\max_{(\mathbf{a},\mathbf{b})}\omega(U)(\mathbf{a},\mathbf{b}),\] \[\llbracket\mathsf{u}\text{-}\mathsf{min}\rrbracket^{(A,\omega)}=\min_{(\mathbf{a},\mathbf{b})}\omega(U)(\mathbf{a},\mathbf{b}),\] _where_ \(\max\) _and_ \(\min\) _range over all_ \((\mathbf{a},\mathbf{b})\in\omega(X)\) _such that_ \(\mathbf{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) _with_ \(b_{i}<\omega(V)(\mathbf{a})\) _for all_ \(i\in[\ell]\)_._
_Furthermore, if \(k=0\) then the terms \(\mathsf{u}\text{-}\mathsf{itadd}\), \(\mathsf{u}\text{-}\mathsf{max}\), and \(\mathsf{u}\text{-}\mathsf{min}\) are arithmetical._
Proof.: To simplify the notation, in the proof we assume \(\ell=1\). The generalisation to arbitrary \(\ell\) is straightforward.
The trick is to use adaptive bounds in our counting terms. Let \(A\) be a structure and \(\omega\) an assignment over \(A\). Let \(\mathscr{I}\subseteq V(A)^{k}\times\mathbb{N}\) be the set of all \((\mathbf{a},b)\in\omega(X)\) with \(b<\omega(V)(\mathbf{a})\). We exploit that for all \((\mathbf{a},b)\in\mathscr{I}\), \(\omega(U)(\mathbf{a},b)\) is the number of \(c\in\mathbb{N}\) such that \(c<\omega(U)(\mathbf{a},b)\). This implies
\[\sum_{(\mathbf{a},b)}\omega(U)(\mathbf{a},b) =\Big{|}\Big{\{}(\mathbf{a},b,c)\Bigm{|}(\mathbf{a},b)\in\mathscr{I} \text{ and }c<\omega(U)(\mathbf{a},b)\Big{\}}\Big{|}\] \[=\Big{|}\Big{\{}(\mathbf{a},b,c)\Bigm{|}(\mathbf{a},b)\in\omega(X)\text{ with }b<\omega(V)(\mathbf{a})\text{ and }c<\omega(U)(\mathbf{a},b)\Big{\}}\Big{|}.\]
Thus
\[\mathsf{u}\text{-}\mathsf{itadd}\coloneqq\#\big{(}\mathbf{x},y<V(\mathbf{x}),z<U(\mathbf{x},y)\big{)}.X(\mathbf{x},y)\]
satisfies assertion (1).
Once we have this, assertion (2) is easy because maximum and minimum are bounded from above by the sum. For example, we let
\[\mathsf{u}\text{-}\mathsf{max}\coloneqq\#z<\mathsf{u}\text{-}\mathsf{itadd}.\exists\mathbf{x}.\exists y<V(\mathbf{x}).\big{(}X(\mathbf{x},y)\wedge z<U(\mathbf{x},y)\big{)}.\qed\]
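The adaptive-bound counting term can be replayed directly in Python (illustrative only; `X` is a set of pairs, and `U`, `V` are dictionaries standing in for the function variables):

```python
def u_itadd(X, U, V):
    # #(x, y < V(x), z < U(x, y)) . X(x, y): each admissible pair (x, y)
    # contributes one triple per z < U[(x, y)], so the count is the sum
    return len([(x, y, z)
                for (x, y) in X
                if y < V[x]
                for z in range(U[(x, y)])])

X = {("a", 0), ("a", 1), ("b", 0)}
V = {"a": 2, "b": 1}
U = {("a", 0): 3, ("a", 1): 5, ("b", 0): 7}
assert u_itadd(X, U, V) == 3 + 5 + 7
```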
The following lemma is the desired generalisation of Lemma 3.10.
**Lemma 3.16**.: _Let \(X,Y\) be relation variables of type \(\{\mathtt{v}^{k}\mathtt{n}^{\ell}\}\) and \(\{\mathtt{n}\mathtt{v}^{k}\mathtt{n}^{\ell}\}\), respectively, and let \(U,V\) be function variables of type \(\mathtt{v}^{k}\mathtt{n}^{\ell}\to\mathtt{n}\) and \(\mathtt{v}^{k}\to\mathtt{n}\), respectively. Then there is an \(\mathsf{FO}+\mathsf{C}\)-formula \(\mathsf{itadd}\) and an \(\mathsf{FO}+\mathsf{C}\)-term \(\mathsf{bd}\text{-}\mathsf{itadd}\) such that for all structures \(A\) and assignments \(\omega\) over \(A\),_

\[\langle\!\langle\mathsf{itadd},\mathsf{bd}\text{-}\mathsf{itadd}\rangle\!\rangle^{(A,\omega)}=\sum_{(\mathbf{a},\mathbf{b})}\langle\!\langle Y,U\rangle\!\rangle^{(A,\omega)}\,(\mathbf{a},\mathbf{b}),\]
_where the sum ranges over all \((\mathbf{a},\mathbf{b})\in\omega(X)\) such that \(\mathbf{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) with \(b_{i}<\omega(V)(\mathbf{a})\) for all \(i\in[\ell]\)._
_If \(k=0\), then the formula \(\mathsf{itadd}\) and the term \(\mathsf{bd}\text{-}\mathsf{itadd}\) are arithmetical._
Proof.: Let \(A\) be a structure and \(\omega\) an assignment over \(A\). Let \(\mathscr{I}\subseteq V(A)^{k}\times\mathbb{N}^{\ell}\) be the set of all \((\mathbf{a},\mathbf{b})\in\omega(X)\) such that \(\mathbf{b}=(b_{1},\ldots,b_{\ell})\) with \(b_{i}<\omega(V)(\mathbf{a})\). Let \(m\coloneqq|\mathscr{I}|\) and \(p\coloneqq\max\big{\{}\omega(U)(\mathbf{a},\mathbf{b})\bigm{|}(\mathbf{a},\mathbf{b})\in\mathscr{I}\big{\}}\). Note that \(p=\llbracket\mathsf{u}\text{-}\mathsf{max}\rrbracket^{(A,\omega)}\) for the term \(\mathsf{u}\text{-}\mathsf{max}\) of Lemma 3.15. We have to add the family of \(m\) numbers \(n_{\mathbf{c}}\coloneqq\langle\!\langle Y,U\rangle\!\rangle^{(A,\omega)}\,(\mathbf{c})\) for \(\mathbf{c}\in\mathscr{I}\). We think of these numbers as \(p\)-bit numbers, padding them with zeroes if necessary. Note that we cannot directly apply Lemma 3.10 to add these numbers, because the family, being indexed by vertices, is not ordered, and Lemma 3.10 only applies to ordered families indexed by numbers. But there is a simple trick to circumvent this difficulty. (We applied the same trick in the proof of Lemma 3.10.)
For all \(i<p\), we let \(n_{\mathbf{c},i}\coloneqq\operatorname{Bit}(i,n_{\mathbf{c}})\) be the \(i\)th bit of \(n_{\mathbf{c}}\), and we let
\[s_{i}\coloneqq\sum_{\mathbf{c}\in\mathscr{I}}n_{\mathbf{c},i}.\]
We have
\[\sum_{\mathbf{c}\in\mathscr{I}}n_{\mathbf{c}}=\sum_{\mathbf{c}\in\mathscr{I}}\sum_{i=0}^{ p-1}n_{\mathbf{c},i}\cdot 2^{i}=\sum_{i=0}^{p-1}s_{i}\cdot 2^{i}.\]
This reduces the problem of adding the _unordered_ family of \(m\) \(p\)-bit numbers \(n_{\mathbf{c}}\) to adding the _ordered_ family of \(p\) numbers \(n^{\prime}_{i}\coloneqq s_{i}\cdot 2^{i}\), each of at most \(p+\log m\) bits.
The partial sums \(s_{i}\) are definable by a counting term in \(\mathsf{FO}+\mathsf{C}\), because \(s_{i}\) is the number of \(\mathbf{c}\in\mathscr{I}\) such that \(n_{\mathbf{c},i}=1\). As the bit predicate is definable in \(\mathsf{FO}+\mathsf{C}\), we can obtain the bit-representation of these numbers and then shift \(s_{i}\) by \(i\) to obtain the bit representation of \(n^{\prime}_{i}=s_{i}\cdot 2^{i}\). Then we can apply Lemma 3.10 to compute the sum.
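The reduction is the same column-sum trick as in the proof of Lemma 3.10; in Python (illustrative only, with `nums` an unordered collection of numbers below \(2^{p}\)):

```python
def itadd_unordered(nums, p):
    # s[i] counts the members whose i-th bit is 1 -- a count that is
    # directly expressible by a counting term in FO+C
    s = [sum((n >> i) & 1 for n in nums) for i in range(p)]
    # the ordered family n'_i = s_i * 2^i has the same total
    return sum(s_i << i for i, s_i in enumerate(s))

nums = {13, 10, 7, 1}
assert itadd_unordered(nums, 4) == sum(nums) == 31
```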
**Lemma 3.17**.: _Let \(X,Y\) be relation variables of type \(\{\mathsf{v}^{k}\mathsf{n}^{\ell}\}\) and \(\{\mathsf{nv}^{k}\mathsf{n}^{\ell}\}\), respectively, and let \(U,V\) be function variables of type \(\mathsf{v}^{k}\mathsf{n}^{\ell}\to\mathtt{n}\) and \(\mathsf{v}^{k}\to\mathtt{n}\), respectively. Then there are \(\mathsf{FO}+\mathsf{C}\)-formulas \(\mathsf{itmax}\), \(\mathsf{itmin}\) and \(\mathsf{FO}+\mathsf{C}\)-terms \(\mathsf{bd}\text{-}\mathsf{itmax}\), \(\mathsf{bd}\text{-}\mathsf{itmin}\) such that for all structures \(A\) and assignments \(\omega\) over \(A\),_

\[\langle\!\langle\mathsf{itmax},\mathsf{bd}\text{-}\mathsf{itmax}\rangle\!\rangle^{(A,\omega)} =\max_{(\mathbf{a},\mathbf{b})}\langle\!\langle Y,U\rangle\!\rangle^{(A,\omega)}\,(\mathbf{a},\mathbf{b}),\] \[\langle\!\langle\mathsf{itmin},\mathsf{bd}\text{-}\mathsf{itmin}\rangle\!\rangle^{(A,\omega)} =\min_{(\mathbf{a},\mathbf{b})}\langle\!\langle Y,U\rangle\!\rangle^{(A,\omega)}\,(\mathbf{a},\mathbf{b}),\]
_where max and min range over all \((\mathbf{a},\mathbf{b})\in\omega(X)\) such that \(\mathbf{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) with \(b_{i}<\omega(V)\mathbf{(a)}\) for all \(i\in[\ell]\)._
_If \(k=0\), then the formulas \(\mathsf{itmax}\), \(\mathsf{itmin}\) and the terms \(\mathsf{bd}\text{-}\mathsf{itmax}\), \(\mathsf{bd}\text{-}\mathsf{itmin}\) are arithmetical._
Proof.: Again, to reduce the notational overhead we assume \(\ell=1\). We only give the proof for the maximum; the proof for the minimum is completely analogous.
The following formula says that \((\mathbf{x},y)\) is the index of the maximum number in the family:
\[\mathsf{maxind}(\mathbf{x},y)\coloneqq X(\mathbf{x},y)\wedge y<V(\mathbf{x})\wedge\forall\mathbf{x}^{\prime}\forall y^{\prime}<V(\mathbf{x}^{\prime}).\big{(}X(\mathbf{x}^{\prime},y^{\prime})\to\mathsf{leq}^{\prime}(\mathbf{x}^{\prime},y^{\prime},\mathbf{x},y)\big{)},\]

where \(\mathsf{leq}^{\prime}(\mathbf{x}^{\prime},y^{\prime},\mathbf{x},y)\) is the formula obtained from the formula \(\mathsf{leq}\) of Lemma 3.9(2) by substituting \(Y_{1}(z)\) with \(Y(z,\mathbf{x}^{\prime},y^{\prime})\), \(U_{1}()\) with \(U(\mathbf{x}^{\prime},y^{\prime})\), \(Y_{2}(z)\) with \(Y(z,\mathbf{x},y)\), and \(U_{2}()\) with \(U(\mathbf{x},y)\).
Then we let
\[\mathsf{itmax}(\widehat{y}) \coloneqq\exists\mathbf{x}\exists y<V(\mathbf{x}).\big{(}\mathsf{maxind}( \mathbf{x},y)\wedge Y(\widehat{y},\mathbf{x},y)\big{)},\] \[\mathsf{bd}\mbox{-}\mathsf{itmax} \coloneqq\#z<\mathsf{u}\mbox{-}\mathsf{max}.\forall\mathbf{x}\forall y <V(\mathbf{x})\big{(}\mathsf{maxind}(\mathbf{x},y)\to z<U(\mathbf{x},y)\big{)},\]
where \(\mathsf{u}\text{-}\mathsf{max}\) is the term of Lemma 3.15.
### Rational Arithmetic
We need to lift the results of the previous section to arithmetic on rational numbers. However, we will run into a problem with iterated addition, because the denominator of the sum can get too large. To avoid this problem, we will work with arithmetic on dyadic rationals. Then we have a problem with division, because the dyadic rationals are not closed under division, but division is not as important for us as iterated addition.
Our representation system for dyadic rationals by relation and function variables, or by formulas and terms, is based on a representations of dyadic rationals by tuples \((r,I,s,t)\in\{0,1\}\times 2^{\mathbb{N}}\times\mathbb{N}\times\mathbb{N}\): such a tuple represents the number
\[\langle\!\langle r,I,s,t\rangle\!\rangle\coloneqq(-1)^{r}\cdot 2^{-s}\cdot \sum_{i\in I,i<t}2^{i}.\]
This representation is not unique: there are distinct \((r,I,s,t)\) and \((r^{\prime},I^{\prime},s^{\prime},t^{\prime})\) such that \(\langle\!\langle r,I,s,t\rangle\!\rangle=\langle\!\langle r^{\prime},I^{ \prime},s^{\prime},t^{\prime}\rangle\!\rangle\). For example, \(\langle\!\langle r,I,s,t\rangle\!\rangle=\langle\!\langle r,I^{\prime},s+1,t+1\rangle\!\rangle\), where \(I^{\prime}=\{i+1\mid i\in I,i<t\}\). However, each dyadic rational \(q\) has a unique representation \(\operatorname{crep}(q)=(r,I,s,t)\) satisfying the following conditions:
1. \(i<t\) for all \(i\in I\);
2. \(s=0\) or \(0\in I\) (that is, the fraction \(\frac{\sum_{i\in I,i<t}2^{i}}{2^{s}}\) is reduced);
3. if \(I=\varnothing\) (and hence \(\langle\!\langle r,I,s,t\rangle\!\rangle=0\)), then \(r=s=t=0\).
We call \(\operatorname{crep}(q)\) the _canonical representation_ of \(q\).
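For illustration, the following Python sketch (informal; `fractions.Fraction` stands in for exact dyadic arithmetic, the input to `crep` is assumed to be dyadic, and the helper names are ours) computes the value of a representation \((r,I,s,t)\) and the canonical representation \(\operatorname{crep}(q)\):

```python
from fractions import Fraction

def value(r, I, s, t):
    """The dyadic rational represented by (r, I, s, t)."""
    return Fraction((-1) ** r * sum(1 << i for i in I if i < t), 1 << s)

def crep(q):
    """Canonical representation of a dyadic rational q (assumed dyadic)."""
    q = Fraction(q)
    n, d = abs(q.numerator), q.denominator     # d is a power of 2 if q is dyadic
    if n == 0:
        return (0, frozenset(), 0, 0)          # condition 3
    s = d.bit_length() - 1                     # condition 2: the fraction is reduced
    I = frozenset(i for i in range(n.bit_length()) if (n >> i) & 1)
    return (0 if q >= 0 else 1, I, s, n.bit_length())   # condition 1: i < t

q = Fraction(-13, 8)
assert value(*crep(q)) == q
```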
To represent a dyadic rational in our logical framework, we thus need four variables. As this tends to get a bit unwieldy, we introduce some shortcuts. An _r-schema_ of type \(\mathbf{t}\to\mathbf{r}\) for some \(\mathbf{t}\in\{\mathsf{v},\mathsf{n}\}^{k}\) is a tuple \(\mathbf{Z}=(Z_{\mathrm{sg}},Z_{\mathrm{Ind}},Z_{\mathrm{dn}},Z_{\mathrm{bd}})\), where \(Z_{\mathrm{sg}}\) is a relation variable of type \(\{\mathbf{t}\}\), \(Z_{\mathrm{Ind}}\) is a relation variable of type \(\{\mathbf{n}\mathbf{t}\}\), and \(Z_{\mathrm{dn}},Z_{\mathrm{bd}}\) are function variables of type \(\mathbf{t}\to\mathsf{n}\). For a structure \(A\), an interpretation \(\omega\) over \(A\), and a tuple \(\mathbf{c}\in A^{\mathbf{t}}\) we let
\[\langle\!\langle\mathbf{Z}\rangle\!\rangle^{(A,\omega)}\,(\mathbf{c})\coloneqq(-1)^{r}\cdot 2^{-\omega(Z_{\mathrm{dn}})(\mathbf{c})}\cdot\sum_{\begin{subarray}{c}(i,\mathbf{c})\in\omega(Z_{\mathrm{Ind}}),\\ i<\omega(Z_{\mathrm{bd}})(\mathbf{c})\end{subarray}}2^{i}, \tag{3.L}\]

where \(r=1\) if \(\mathbf{c}\in\omega(Z_{\mathrm{sg}})\) and \(r=0\) otherwise. Note that with \(I=\{i\in\mathbb{N}\mid(i,\mathbf{c})\in\omega(Z_{\mathrm{Ind}})\}\), \(s=\omega(Z_{\mathrm{dn}})(\mathbf{c})\), and \(t=\omega(Z_{\mathrm{bd}})(\mathbf{c})\) we have \(\langle\!\langle\mathbf{Z}\rangle\!\rangle^{(A,\omega)}\,(\mathbf{c})=\langle\!\langle r,I,s,t\rangle\!\rangle\).
An _r-expression_ is a tuple \(\mathbf{\rho}(\mathbf{z})=\left(\mathbf{\rho}_{\mathrm{sg}}(\mathbf{z}),\mathbf{\rho}_{\mathrm{Ind}}( \widehat{y},\mathbf{z}),\mathbf{\rho}_{\mathrm{dn}}(\mathbf{z}),\mathbf{\rho}_{\mathrm{bd}}(\mathbf{ z})\right)\) where \(\mathbf{z}\) is a tuple of individual variables, \(\mathbf{\rho}_{\mathrm{sg}}(\mathbf{z}),\mathbf{\rho}_{\mathrm{Ind}}(\widehat{y},\mathbf{z})\) are \(\mathsf{FO}+\mathsf{C}\)-formulas, and \(\mathbf{\rho}_{\mathrm{dn}}(\mathbf{z}),\mathbf{\rho}_{\mathrm{bd}}(\mathbf{z})\) are \(\mathsf{FO}+\mathsf{C}\)-terms. For a structure \(A\), an interpretation \(\mathpzc{\omega}\) over \(A\), and a tuple \(\mathbf{c}\in A^{\mathrm{tp}(\mathbf{z})}\) we let
\[\left\langle\!\left\langle\mathbf{\rho}\right\rangle\!\right\rangle^{(A,\mathpzc {\omega})}(\mathbf{c})\coloneqq\left\langle\!\left\langle r,I,s,t\right\rangle\! \right\rangle, \tag{3.M}\]
where \(r=1\) if \((A,\omega)\vDash\boldsymbol{\rho}_{\mathrm{sg}}(\mathbf{c})\) and \(r=0\) otherwise, \(I\) is the set of all \(i\in\mathbb{N}\) such that \((A,\omega)\vDash\boldsymbol{\rho}_{\mathrm{Ind}}(i,\mathbf{c})\), \(s=\big{[}\boldsymbol{\rho}_{\mathrm{dn}}\big{]}^{(A,\omega)}(\mathbf{c})\), and \(t=\big{[}\boldsymbol{\rho}_{\mathrm{bd}}\big{]}^{(A,\omega)}(\mathbf{c})\). We sometimes say that \(\boldsymbol{\rho}\) _defines_ the representation \((r,I,s,t)\) of the dyadic rational \(\langle\!\langle r,I,s,t\rangle\!\rangle\) _in_ \((A,\omega)\).
For a fragment \(\mathsf{L}\) of \(\mathsf{FO}+\mathsf{C}\), such as those introduced in Section 3.7, an _r-expression in \(\mathsf{L}\)_ is an r-expression consisting of formulas and terms from \(\mathsf{L}\). An _arithmetical r-expression_ is an r-expression consisting of arithmetical formulas and terms.
For an r-schema \(\mathbf{Z}\) of type \(\mathsf{n}^{k}\to\mathtt{r}\), for some \(k\geq 0\), and a numerical assignment \(\mathpzc{\omega}\) we may just write \(\left\langle\!\left\langle\mathbf{Z}\right\rangle\!\right\rangle^{\mathpzc{\omega}} (\mathbf{c})\) without referring to a structure. Similarly, for an arithmetical r-expression \(\mathbf{\rho}(\mathbf{z})\) we may write \(\left\langle\!\left\langle\mathbf{\rho}\right\rangle\!\right\rangle^{\mathpzc{ \omega}}(\mathbf{c})\). We use a similar notation for other objects, in particular the L,F-schemas and L,F-expressions that will be introduced in Section 3.6.
**Lemma 3.18**.: _Let \(\mathbf{Z}\) be an r-schema of type \(\varnothing\to\mathtt{r}\). Then there is an arithmetical r-expression \(\mathsf{crep}\) such that for all structures \(A\) and assignments \(\mathpzc{\omega}\) over \(A\), \(\mathsf{crep}\) defines the canonical representation of \(\left\langle\!\left\langle\mathbf{Z}\right\rangle\!\right\rangle^{(A,\mathpzc{ \omega})}\) in \((A,\mathpzc{\omega})\)._
Proof.: Straightforward.
Using this lemma, in the following we can always assume that the formulas and terms defining arithmetical operations, as, for example, in Lemmas 3.19 and 3.20, return their results in canonical representation.
**Lemma 3.19**.: _Let \(\mathbf{Z}_{1},\mathbf{Z}_{2}\) be r-schemas of type \(\varnothing\to\mathtt{r}\)._
1. _There are arithmetical r-expressions_ add_,_ sub_, and_ mul _such that for all structures_ \(A\) _and assignments_ \(\mathpzc{\omega}\) _over_ \(A\)_,_ \[\left\langle\!\left\langle\mathsf{add}\right\rangle\!\right\rangle^{(A, \mathpzc{\omega})} =\left\langle\!\left\langle\mathbf{Z}_{1}\right\rangle\!\right\rangle^{(A, \mathpzc{\omega})}+\left\langle\!\left\langle\mathbf{Z}_{2}\right\rangle\!\right\rangle ^{(A,\mathpzc{\omega})},\] \[\left\langle\!\left\langle\mathsf{sub}\right\rangle\!\right\rangle^{(A, \mathpzc{\omega})} =\left\langle\!\left\langle\mathbf{Z}_{1}\right\rangle\!\right\rangle^{(A, \mathpzc{\omega})}-\left\langle\!\left\langle\mathbf{Z}_{2}\right\rangle\!\right\rangle ^{(A,\mathpzc{\omega})},\] \[\left\langle\!\left\langle\mathsf{mul}\right\rangle\!\right\rangle^{(A, \mathpzc{\omega})} =\left\langle\!\left\langle\mathbf{Z}_{1}\right\rangle\!\right\rangle^{(A, \mathpzc{\omega})}\cdot\left\langle\!\left\langle\mathbf{Z}_{2}\right\rangle\! \right\rangle^{(A,\mathpzc{\omega})}.\]
2. _There is an arithmetical_ \(\mathsf{FO}+\mathsf{C}\)_-sentence_ \(\mathsf{leq}\) _such that for all structures_ \(A\) _and assignments_ \(\omega\) _over_ \(A\)_,_ \[(A,\omega)\vDash\mathsf{leq}\,\Longleftrightarrow\,\left\langle\!\left\langle\mathbf{Z}_{1}\right\rangle\!\right\rangle^{(A,\omega)}\leq\left\langle\!\left\langle\mathbf{Z}_{2}\right\rangle\!\right\rangle^{(A,\omega)}.\]
Proof.: These are straightforward consequences of Lemmas 3.9 and 3.14.
**Lemma 3.20**.: _Let \(\mathbf{Z}\) be an r-schema of type \(\mathsf{v}^{k}\mathsf{n}^{\ell}\to\mathtt{r}\). Furthermore, let \(X\) be a relation variable of type \(\{\mathsf{v}^{k}\mathsf{n}^{\ell}\}\), and let \(V\) be a function variable of type \(\mathsf{v}^{k}\to\mathtt{n}\)._
1. _There is an r-expression_ \(\mathsf{itadd}\) _such that for all structures_ \(A\) _and assignments_ \(\boldsymbol{a}\) _over_ \(A\)_,_ \[\left\langle\!\left\langle\mathsf{itadd}\right\rangle\!\right\rangle^{(A,\,\boldsymbol{a})}=\sum_{(\boldsymbol{a},\boldsymbol{b})}\left\langle\!\left\langle\boldsymbol{Z}\right\rangle\!\right\rangle^{(A,\,\boldsymbol{a})}(\boldsymbol{a},\boldsymbol{b}),\] _where the sum ranges over all_ \((\boldsymbol{a},\boldsymbol{b})\in\boldsymbol{a}(X)\) _such that_ \(\boldsymbol{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) _with_ \(b_{i}<\boldsymbol{a}(V)(\boldsymbol{a})\) _for all_ \(i\in[\ell]\)_._
2. _There are r-expressions_ \(\mathsf{max}\) _and_ \(\mathsf{min}\) _such that for all structures_ \(A\) _and assignments_ \(\boldsymbol{a}\) _over_ \(A\)_,_ \[\left\langle\!\left\langle\mathsf{max}\right\rangle\!\right\rangle^{(A,\,\boldsymbol{a})} =\max_{(\boldsymbol{a},\boldsymbol{b})}\left\langle\!\left\langle\boldsymbol{Z}\right\rangle\!\right\rangle^{(A,\,\boldsymbol{a})}(\boldsymbol{a},\boldsymbol{b}),\qquad\left\langle\!\left\langle\mathsf{min}\right\rangle\!\right\rangle^{(A,\,\boldsymbol{a})}=\min_{(\boldsymbol{a},\boldsymbol{b})}\left\langle\!\left\langle\boldsymbol{Z}\right\rangle\!\right\rangle^{(A,\,\boldsymbol{a})}(\boldsymbol{a},\boldsymbol{b}),\] _where_ \(\max\) _and_ \(\min\) _range over all_ \((\boldsymbol{a},\boldsymbol{b})\in\boldsymbol{a}(X)\) _such that_ \(\boldsymbol{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) _with_ \(b_{i}<\boldsymbol{a}(V)(\boldsymbol{a})\) _for all_ \(i\in[\ell]\)_._
_Furthermore, if \(k=0\), then the r-expressions_ itadd_, \(\mathsf{max}\), and \(\mathsf{min}\) are arithmetical._
Proof.: To express iterated addition, we first split the family of numbers into the positive and the negative numbers. We take the sums over these two subfamilies separately and then combine the results using Lemma 3.19. To take the sum over a family of nonnegative dyadic rationals, we apply Lemma 3.16 for the numerator and Lemma 3.15 for the denominator.
To express maximum and minimum, it clearly suffices to express the maximum and minimum of a family of nonnegative dyadic rationals \((p_{i}\cdot 2^{-s_{i}})_{i\in\mathcal{I}}\) for some definable finite index set \(\mathcal{I}\). Using Lemma 3.15, we can determine \(s=\max_{i\in\mathcal{I}}s_{i}\). Then we need to determine the maximum and minimum of the natural numbers \(q_{i}\coloneqq p_{i}2^{s-s_{i}}\), which we can do by applying Lemma 3.17.
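As an informal illustration of this reduction, the following Python sketch computes the maximum of a family of nonnegative dyadic rationals, encoded as pairs \((p_{i},s_{i})\) standing for \(p_{i}\cdot 2^{-s_{i}}\) (the encoding and the function name are ours, purely for illustration):

```python
def dyadic_max(family):
    """Maximum of nonnegative dyadic rationals given as pairs (p_i, s_i),
    i.e. the numbers p_i * 2**(-s_i), via a common denominator exponent."""
    family = list(family)
    s = max(si for _, si in family)            # s = max_i s_i   (Lemma 3.15)
    # compare the natural numbers q_i = p_i * 2**(s - s_i)      (Lemma 3.17)
    return max(family, key=lambda ps: ps[0] << (s - ps[1]))

assert dyadic_max([(3, 1), (13, 3), (1, 0)]) == (13, 3)   # max(3/2, 13/8, 1) = 13/8
```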
For division, the situation is slightly more complicated, because the dyadic rationals are not closed under division. We only get an approximation. We use a \(0\)-ary function variable to control the additive approximation error.
**Lemma 3.21**.: _Let \(\boldsymbol{Z}_{1},\boldsymbol{Z}_{2}\) be r-schemas of type \(\varnothing\to\mathtt{r}\), and let \(W\) be a function variable of type \(\varnothing\to\mathtt{n}\). Then there is an arithmetical r-expression \(\mathsf{div}\) such that for all structures \(A\) and assignments \(\omega\) over \(A\), if \(\langle\!\langle\boldsymbol{Z}_{2}\rangle\!\rangle^{(A,\omega)}\neq 0\) then_

\[\left|\frac{\langle\!\langle\boldsymbol{Z}_{1}\rangle\!\rangle^{(A,\omega)}}{\langle\!\langle\boldsymbol{Z}_{2}\rangle\!\rangle^{(A,\omega)}}-\langle\!\langle\mathsf{div}\rangle\!\rangle^{(A,\omega)}\right|\leq 2^{-\omega(W)}.\]
Proof.: This follows easily from Lemma 3.9.
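Informally, the approximate division behaves like the following Python sketch, where the parameter `w` plays the role of \(\omega(W)\) (an illustration of the error guarantee only, not of the actual construction via Lemma 3.9):

```python
from fractions import Fraction
from math import floor

def dyadic_div(z1, z2, w):
    """Approximate z1 / z2 (z2 != 0) by a dyadic rational with additive error
    at most 2**-w; w plays the role of the 0-ary function variable W."""
    q = Fraction(z1) / Fraction(z2)
    return Fraction(floor(q * (1 << w)), 1 << w)

assert abs(Fraction(1, 3) - dyadic_div(1, 3, 10)) <= Fraction(1, 1 << 10)
```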
### Evaluating Feedforward Neural Networks
The most important consequence the results of the previous section have for us is that we can simulate rational piecewise-linear FNNs.
Let us first see how we deal with the activation functions. To represent a rational piecewise linear function, we need an integer \(k\) as well as three families of dyadic rationals: the thresholds \((t_{i})_{1\leq i\leq k}\), the slopes \((a_{i})_{0\leq i\leq k}\), and the constant terms \((b_{i})_{0\leq i\leq k}\). An _L-schema_ of type \(\mathbf{t}\to\mathsf{L}\), for some \(\mathbf{t}\in\{\mathsf{v},\mathsf{n}\}^{k}\), is a tuple \(\mathbf{Z}=(Z_{\mathrm{len}},\mathbf{Z}_{\mathrm{th}},\mathbf{Z}_{\mathrm{sl}},\mathbf{Z}_{\mathrm{co}})\), where \(Z_{\mathrm{len}}\) is a function variable of type \(\mathbf{t}\to\mathsf{n}\) and \(\mathbf{Z}_{\mathrm{th}},\mathbf{Z}_{\mathrm{sl}},\mathbf{Z}_{\mathrm{co}}\) are r-schemas of type \(\mathsf{n}\mathbf{t}\to\mathtt{r}\). Let \(A\) be a structure, \(\omega\) an assignment over \(A\), and \(\mathbf{c}\in A^{\mathbf{t}}\). Let \(k\coloneqq\omega(Z_{\mathrm{len}})(\mathbf{c})\). For \(1\leq i\leq k\), let \(t_{i}\coloneqq\langle\!\langle\mathbf{Z}_{\mathrm{th}}\rangle\!\rangle^{(A,\omega)}(i,\mathbf{c})\). For \(0\leq i\leq k\), let \(a_{i}\coloneqq\langle\!\langle\mathbf{Z}_{\mathrm{sl}}\rangle\!\rangle^{(A,\omega)}(i,\mathbf{c})\) and \(b_{i}\coloneqq\langle\!\langle\mathbf{Z}_{\mathrm{co}}\rangle\!\rangle^{(A,\omega)}(i,\mathbf{c})\). Then if \(t_{1}<\ldots<t_{k}\) and for all \(i\in[k]\) we have \(a_{i-1}t_{i}+b_{i-1}=a_{i}t_{i}+b_{i}\), we define \(\langle\!\langle\mathbf{Z}\rangle\!\rangle^{(A,\omega)}:\mathbb{R}\to\mathbb{R}\) to be the rational piecewise linear function with thresholds \(t_{i}\), slopes \(a_{i}\), and constants \(b_{i}\). The condition \(a_{i-1}t_{i}+b_{i-1}=a_{i}t_{i}+b_{i}\) guarantees that this function is continuous. Otherwise, we define \(\langle\!\langle\mathbf{Z}\rangle\!\rangle^{(A,\omega)}:\mathbb{R}\to\mathbb{R}\) to be identically \(0\). We can also define _L-expressions_ consisting of formulas and terms of the appropriate types.
**Lemma 3.22**.: _Let \(\mathbf{Y}\) be an \(L\)-schema of type \(\varnothing\to\mathsf{L}\), and let \(\mathbf{Z}\) be an r-schema of type \(\varnothing\to\mathsf{r}\). Then there is an arithmetical r-expression \(\mathsf{apply}\) such that for all numerical assignments \(\omega\),_

\[\langle\!\langle\mathsf{apply}\rangle\!\rangle^{\omega}=\langle\!\langle\mathbf{Y}\rangle\!\rangle^{\omega}\Big{(}\langle\!\langle\mathbf{Z}\rangle\!\rangle^{\omega}\Big{)}.\]
Proof.: This follows easily from Lemma 3.19.
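For intuition, evaluating a rational piecewise linear function from its thresholds, slopes, and constants can be sketched in Python as follows (informal; `Fraction` stands in for exact dyadic arithmetic, and the function names are ours):

```python
from fractions import Fraction

def apply_pl(thresholds, slopes, consts, x):
    """Evaluate the rational piecewise linear function with thresholds
    t_1 < ... < t_k, slopes a_0, ..., a_k and constants b_0, ..., b_k at x."""
    i = sum(1 for t in thresholds if t <= x)   # index of the piece containing x
    return slopes[i] * x + consts[i]

# relu as a rational piecewise linear function: k = 1, t_1 = 0
relu = lambda x: apply_pl([Fraction(0)], [Fraction(0), Fraction(1)],
                          [Fraction(0), Fraction(0)], x)
assert relu(Fraction(-3, 2)) == 0 and relu(Fraction(3, 2)) == Fraction(3, 2)
```

By the continuity condition above, taking the right-hand piece at a threshold (via `t <= x`) agrees with the left-hand piece, so the choice is immaterial.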
To represent an FNN we need to represent the skeleton as well as all activation functions and parameters. An _F-schema_ of type \(\mathbf{t}\to\mathsf{F}\) for some \(\mathbf{t}\in\{\mathsf{v},\mathsf{n}\}^{k}\) is a tuple \(\mathbf{Z}=(Z_{\mathrm{V}},Z_{\mathrm{E}},\mathbf{Z}_{\mathrm{ac}},\mathbf{Z}_{\mathrm{wt}},\mathbf{Z}_{\mathrm{bi}})\) where \(Z_{\mathrm{V}}\) is a function variable \(\mathbf{t}\to\mathsf{n}\), \(Z_{\mathrm{E}}\) is a relation variable of type \(\{\mathsf{n}^{2}\mathbf{t}\}\), \(\mathbf{Z}_{\mathrm{ac}}\) is an L-schema of type \(\mathsf{n}\mathbf{t}\to\mathsf{L}\), \(\mathbf{Z}_{\mathrm{wt}}\) is an r-schema of type \(\mathsf{n}^{2}\mathbf{t}\to\mathsf{r}\), and \(\mathbf{Z}_{\mathrm{bi}}\) is an r-schema of type \(\mathsf{n}\mathbf{t}\to\mathsf{r}\). Then for every structure \(A\), every assignment \(\omega\) over \(A\), and every tuple \(\mathbf{c}\in A^{\mathbf{t}}\), we define \((V,E,(\mathfrak{a}_{v})_{v\in V},(w_{e})_{e\in E},(b_{v})_{v\in V})\) as follows:
* \(V\coloneqq\{0,\ldots,\omega(Z_{\mathrm{V}})(\mathbf{c})\}\);
* \(E\coloneqq\{ij\in V^{2}\mid ij\mathbf{c}\in\omega(Z_{\mathrm{E}})\}\);
* \(\mathfrak{a}_{i}=\langle\!\langle\mathbf{Z}_{\mathrm{ac}}\rangle\!\rangle^{(A,\omega)}(i,\mathbf{c})\) for \(i\in V\);
* \(w_{ij}=\langle\!\langle\mathbf{Z}_{\mathrm{wt}}\rangle\!\rangle^{(A,\omega)}(i,j,\mathbf{c})\) for \(ij\in E\);
* \(b_{i}=\langle\!\langle\mathbf{Z}_{\mathrm{bi}}\rangle\!\rangle^{(A,\omega)}(i,\mathbf{c})\) for \(i\in V\).

Then if \((V,E)\) is a dag, \(\langle\!\langle\mathbf{Z}\rangle\!\rangle^{(A,\omega)}\coloneqq(V,E,(\mathfrak{a}_{v})_{v\in V},(w_{e})_{e\in E},(b_{v})_{v\in V})\) is an FNN. The input nodes \(X_{1},\ldots,X_{p}\) of this FNN are the sources of the dag \((V,E)\) in their natural order (as natural numbers). Similarly, the output nodes \(Y_{1},\ldots,Y_{q}\) of the FNN are the sinks of the dag \((V,E)\) in their natural order. If \((V,E)\) is not a dag, we simply define \(\langle\!\langle\mathbf{Z}\rangle\!\rangle^{(A,\omega)}\) to be the trivial FNN with a single node, which computes the identity function. We can also define _F-expressions_ consisting of formulas and terms of the appropriate types.
**Lemma 3.23**.: _Let \(\mathbf{Z}\) be an F-schema of type \(\varnothing\to\mathsf{F}\), and let \(\mathbf{X}\) be an r-schema of type \(\mathtt{n}\to\mathtt{r}\). Then for every \(t\geq 0\) there is an arithmetical r-expression \(\mathsf{eval}_{t}(y)\) such that the following holds. Let \(\omega\) be a numerical assignment and \(\mathfrak{F}\coloneqq\langle\!\langle\mathbf{Z}\rangle\!\rangle^{\omega}\). Suppose that the input dimension of \(\mathfrak{F}\) is \(p\), and let_

\[\mathbf{x}\coloneqq\big{(}\langle\!\langle\mathbf{X}\rangle\!\rangle^{\omega}\,(0),\ldots,\langle\!\langle\mathbf{X}\rangle\!\rangle^{\omega}\,(p-1)\big{)}.\]
_Then for every node \(v\) of \(\mathfrak{F}\) of depth \(t\) it holds that_
\[f_{\mathfrak{F},v}(\mathbf{x})=\langle\!\langle\mathsf{eval}_{t}\rangle\!\rangle^{\omega}\,(v).\]
Proof.: Using the formulas for multiplication and iterated addition, it is easy to construct \(\mathsf{eval}_{t}\) by induction on \(t\).
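The inductive evaluation can be mirrored by a short Python sketch (an informal illustration, with all names ours; `V` is assumed to be listed in topological order, matching the convention that the inputs are the sources and the outputs the sinks in their natural order):

```python
def eval_fnn(V, E, act, weight, bias, x):
    """Evaluate an FNN given by a dag (V, E): V listed in topological order,
    act[v] the activation of node v, weight[(u, v)] edge weights, bias[v] biases.
    The inputs x are fed to the sources in their natural order."""
    preds = {v: [u for (u, w) in E if w == v] for v in V}
    f, inputs = {}, iter(x)
    for v in V:
        if not preds[v]:                       # source node: next input value
            f[v] = next(inputs)
        else:                                  # f_v = act_v(b_v + sum_u w_uv * f_u)
            f[v] = act[v](bias[v] + sum(weight[(u, v)] * f[u] for u in preds[v]))
    sinks = [v for v in V if not any(u == v for (u, w) in E)]
    return [f[v] for v in sinks]

# a single relu node computing max(0, x_0 - x_1), followed by an identity output node
relu = lambda t: max(t, 0)
out = eval_fnn([0, 1, 2, 3], {(0, 2), (1, 2), (2, 3)},
               {2: relu, 3: lambda t: t},
               {(0, 2): 1, (1, 2): -1, (2, 3): 1}, {2: 0, 3: 0}, [5, 3])
assert out == [2]
```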
**Corollary 3.24**.: _Let \(\mathbf{Z}\) be an F-schema of type \(\varnothing\to\mathsf{F}\), and let \(\mathbf{X}\) be an r-schema of type \(\mathtt{n}\to\mathtt{r}\). Then for every \(d>0\) there is an arithmetical r-expression \(\mathsf{eval}_{d}(y)\) such that the following holds. Let \(A\) be a structure, \(\omega\) an assignment, and \(\mathfrak{F}\coloneqq\langle\!\langle\mathbf{Z}\rangle\!\rangle^{(A,\omega)}\). Suppose that the depth of \((V,E)\) is at most \(d\), and let \(p\) be the input dimension and \(q\) the output dimension. Let_

\[\mathbf{x}=\big{(}\langle\!\langle\mathbf{X}\rangle\!\rangle^{(A,\omega)}\,(0),\ldots,\langle\!\langle\mathbf{X}\rangle\!\rangle^{(A,\omega)}\,(p-1)\big{)}.\]
_Then_
\[\mathfrak{F}(\mathbf{x})=\big{(}\langle\!\langle\mathsf{eval}_{d}\rangle\!\rangle^{(A,\omega)}\,(0),\ldots,\langle\!\langle\mathsf{eval}_{d}\rangle\!\rangle^{(A,\omega)}\,(q-1)\big{)}.\]
**Corollary 3.25**.: _Let \(\mathfrak{F}\) be a rational piecewise linear FNN of input dimension \(p\) and output dimension \(q\), and let \(\mathbf{X}_{1},\ldots,\mathbf{X}_{p}\) be r-schemas of type \(\varnothing\to\mathtt{r}\). Then for all \(i\in[q]\) there is an arithmetical r-expression \(\mathsf{eval}_{\mathfrak{F},i}\) such that for all structures \(A\) and assignments \(\omega\) over \(A\),_

\[\mathfrak{F}\Big{(}\langle\!\langle\mathbf{X}_{1}\rangle\!\rangle^{(A,\omega)},\ldots,\langle\!\langle\mathbf{X}_{p}\rangle\!\rangle^{(A,\omega)}\Big{)}=\Big{(}\langle\!\langle\mathsf{eval}_{\mathfrak{F},1}\rangle\!\rangle^{(A,\omega)},\ldots,\langle\!\langle\mathsf{eval}_{\mathfrak{F},q}\rangle\!\rangle^{(A,\omega)}\Big{)}.\]
### Fragments of FO+C
To describe the expressiveness of graph neural networks, we need to consider various fragments of FO+C. For \(k\geq 1\), the _\(k\)-variable fragment_\(\mathsf{FO}^{k}+\mathsf{C}\) of FO+C consists of all formulas with at most \(k\) vertex variables. Importantly, the number of number variables is unrestricted. We call an \(\mathsf{FO}^{k}+\mathsf{C}\)-formula _decomposable_ if it contains no relation variables or function variables and every subformula with exactly \(k\) free vertex variables is a Boolean combination of relational atoms and formulas with at most \(k-1\) free vertex variables. Equivalently, an \(\mathsf{FO}^{k}+\mathsf{C}\)-formula is decomposable if it contains no relation variables or function variables and every subformula of the form \(\theta\leq\theta^{\prime}\), for terms \(\theta,\theta^{\prime}\), has at most \(k-1\) free vertex variables. Note that this implies that a decomposable \(\mathsf{FO}^{k}+\mathsf{C}\)-formula contains no terms with \(k\) free vertex variables.
**Example 3.26**.: The \(\mathsf{FO}^{2}+\mathsf{C}\)-formula
\[\varphi(z)\coloneqq\exists x_{1}.\exists x_{2}.\big{(}E(x_{1},x_{2})\wedge z= \#y_{1}<\mathsf{ord}.E(x_{1},y_{1})\wedge z=\#y_{2}<\mathsf{ord}.E(x_{2},y_{2 })\big{)}\]
is decomposable, whereas the \(\mathsf{FO}^{2}+\mathsf{C}\)-formula
\[\psi(z)\coloneqq\exists x_{1}.\exists x_{2}.\Big{(}E(x_{1},x_{2})\wedge z=\big{(} \#y_{1}<\mathsf{ord}.E(x_{1},y_{1})\big{)}\cdot\big{(}\#y_{2}<\mathsf{ord}.E(x_{ 2},y_{2})\big{)}\Big{)}\]
is not. However, \(\psi(z)\) is decomposable if viewed as an \(\mathsf{FO}^{3}+\mathsf{C}\)-formula.
**Lemma 3.27**.: _Let \(\varphi\) be an \(\mathsf{FO}+\mathsf{C}\)-formula of vocabulary \(\tau\cup\{\leqslant\}\) with at most one free vertex variable and no relation or function variables. Then there is a decomposable \(\mathsf{FO}^{2}+\mathsf{C}\)-formula \(\varphi^{\prime}\) such that for all ordered \(\tau\)-structures \(A\) and all assignments \(\omega\) over \(A\) it holds that \((A,\omega)\vDash\varphi\iff(A,\omega)\vDash\varphi^{\prime}\)._
Proof.: We first define a bijection between the vertices of the structure \(A\) and an initial segment of \(\mathbb{N}\). We simply let \(\mathsf{bij}(x,y)\coloneqq\#x^{\prime}.x^{\prime}\leqslant x=\#(y^{\prime}<\mathsf{ord}).y^{\prime}\leq y\). We introduce a distinguished number variable \(y_{x}\) for every vertex variable \(x\).
To obtain \(\varphi^{\prime}\) from \(\varphi\), we first replace quantification over \(x\) in counting terms by quantification over \(y_{x}\), that is, we replace \(\#(x,\ldots)\) by \(\#(y_{x}<\mathsf{ord},\ldots)\). Furthermore, we replace atomic formulas \(x=x^{\prime}\) by \(y_{x}=y_{x^{\prime}}\) (or, more formally, \(y_{x}\leq y_{x^{\prime}}\wedge y_{x^{\prime}}\leq y_{x}\)) and atomic \(R(x,x^{\prime})\) by \(\exists x_{1}.\exists x_{2}.\big{(}\mathsf{bij}(x_{1},y_{x})\wedge\mathsf{bij} (x_{2},y_{x^{\prime}})\wedge R(x_{1},x_{2})\big{)}\). Let \(\psi\) be the resulting formula. If \(\varphi\) has no free variables, we let \(\varphi^{\prime}\coloneqq\psi\). If \(\varphi\) has one free variable \(x\), we let
\[\varphi^{\prime}=\exists y_{x}<\mathsf{ord}.\big{(}\mathsf{bij}(x,y_{x})\wedge \psi\big{)}.\]
Then \(\varphi^{\prime}\) is equivalent to \(\varphi\), and it only contains the vertex variables \(x_{1},x_{2}\) and hence is in \(\mathsf{FO}^{2}+\mathsf{C}\). It is easy to check that the formula is decomposable.
_Remark 3.28_.: Note that Lemma 3.27 implies that on ordered structures, every \(\mathsf{FO}^{2}+\mathsf{C}\)-formula with at most one free variable is equivalent to a decomposable formula. It is an open problem whether this holds on arbitrary structures.
The definition of decomposable \(\mathsf{FO}^{k}+\mathsf{C}\) is not particularly intuitive, at least at first glance. However, we wonder if "decomposable \(\mathsf{FO}^{k}+\mathsf{C}\)" is what we actually want as the \(k\)-variable fragment of \(\mathsf{FO}+\mathsf{C}\). This view is not only supported by Lemma 3.27, but also by the observation that the logic \(\mathsf{C}^{k}\) (the \(k\)-variable fragment of the extension of first-order logic by counting quantifiers \(\exists^{\geq n}x\)) is contained in decomposable \(\mathsf{FO}^{k}+\mathsf{C}\). Furthermore, characterisations of \(k\)-variable logics in terms of pebble games or the WL-algorithm only take atomic properties of \(k\)-tuples into account.
The _guarded fragment_\(\mathsf{GFO}+\mathsf{C}\) is a fragment of \(\mathsf{FO}^{2}+\mathsf{C}\) where quantification and counting is restricted to range over neighbours of a free variable. We fix two variables \(x_{1},x_{2}\). A _guard_ is an atomic formula of the form \(R(x_{i},x_{3-i})\) for some binary relation symbol \(R\). We inductively define the sets of \(\mathsf{GFO}+\mathsf{C}\)-_terms_ and \(\mathsf{GFO}+\mathsf{C}\)-_formulas_ as follows.
* All number variables and \(\mathsf{0},\mathsf{1},\mathsf{ord}\) are \(\mathsf{GFO}+\mathsf{C}\)-terms.
* For all \(\mathsf{GFO}+\mathsf{C}\)-terms \(\theta,\theta^{\prime}\), the expressions \(\theta+\theta^{\prime}\) and \(\theta\cdot\theta^{\prime}\) are \(\mathsf{GFO}+\mathsf{C}\)-terms.
* For all function variables \(U\) of type \((t_{1},\ldots,t_{k})\to\mathtt{n}\) and all tuples \((\xi_{1},\ldots,\xi_{k})\), where \(\xi_{i}\) is a vertex variable if \(t_{i}=\mathtt{v}\) and \(\xi_{i}\) is a \(\mathsf{GFO}+\mathsf{C}\)-term if \(t_{i}=\mathtt{n}\), the expression \(U(\xi_{1},\ldots,\xi_{k})\) is a \(\mathsf{GFO}+\mathsf{C}\)-term.
* For all \(\mathsf{GFO}+\mathsf{C}\)-terms \(\theta,\theta^{\prime}\), the expression \(\theta\leq\theta^{\prime}\) is a \(\mathsf{GFO}+\mathsf{C}\)-formula.
* All relational atoms whose variables are among \(x_{1},x_{2}\) are \(\mathsf{GFO}+\mathsf{C}\)-formulas.
* For all relation variables \(X\) of type \(\{(t_{1},\ldots,t_{k})\}\) and all tuples \((\xi_{1},\ldots,\xi_{k})\), where \(\xi_{i}\) is a vertex variable if \(t_{i}=\mathtt{v}\) and \(\xi_{i}\) is a \(\mathsf{GFO}+\mathsf{C}\)-term if \(t_{i}=\mathtt{n}\), the expression \(X(\xi_{1},\ldots,\xi_{k})\) is a \(\mathsf{GFO}+\mathsf{C}\)-formula.
* For all \(\mathsf{GFO}+\mathsf{C}\)-formulas \(\varphi,\psi\) the expressions \(\neg\varphi\) and \(\varphi\wedge\psi\) are \(\mathsf{GFO}+\mathsf{C}\)-formulas.
* For all \(\mathsf{GFO}+\mathsf{C}\)-formulas \(\varphi\), all guards \(\gamma\), all number variables \(y_{1},\ldots,y_{k}\), all \(\mathsf{GFO}+\mathsf{C}\)-terms \(\theta_{1},\ldots,\theta_{k}\), and \(i=1,2\), \[\#(x_{3-i},y_{1}<\theta_{1},\ldots,y_{k}<\theta_{k}).(\gamma\wedge\varphi),\] (3.N) is a \(\mathsf{GFO}+\mathsf{C}\)-term.
* For all \(\mathsf{GFO}+\mathsf{C}\)-formulas \(\varphi\), all number variables \(y_{1},\ldots,y_{k}\), and all \(\mathsf{GFO}+\mathsf{C}\)-terms \(\theta_{1},\ldots,\theta_{k}\), \[\#(y_{1}<\theta_{1},\ldots,y_{k}<\theta_{k}).\varphi,\] (3.O) is a \(\mathsf{GFO}+\mathsf{C}\)-term.
Observe that a \(\mathsf{GFO}+\mathsf{C}\)-term or \(\mathsf{GFO}+\mathsf{C}\)-formula either has at least one free vertex variable or contains no vertex variable at all. Note that we add \(\mathsf{ord}\) as a "built-in" constant that is always interpreted by the order of the input structure. We need access to the order of a structure to bound quantification on numbers, and the closed \(\mathsf{FO}^{2}+\mathsf{C}\)-term \(\mathsf{ord}=\#x.x=x\) defining the order is not in \(\mathsf{GFO}+\mathsf{C}\).
An r-expression is _guarded_ if all its formulas and terms are in \(\mathsf{GFO}+\mathsf{C}\).
_Remark 3.29_.: Our definition of the guarded fragment is relatively liberal in terms of which kind of formulas \(\varphi\) we allow inside the guarded counting operators in (3.N). In particular, we allow both \(x_{i}\) and \(x_{3-i}\) to occur freely in \(\varphi\). A more restrictive alternative definition, maybe more in the spirit of a modal logic, would be to stipulate that the variable \(x_{i}\) must not occur freely in \(\varphi\).
This is related to a similar choice we made in the definition of graph neural networks (see Remark 4.1).
By definition, \(\mathsf{GFO}+\mathsf{C}\) is contained in \(\mathsf{FO}^{2}+\mathsf{C}\). The converse does not hold. Let us introduce an intermediate fragment \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\) which extends \(\mathsf{GFO}+\mathsf{C}\) and is still in \(\mathsf{FO}^{2}+\mathsf{C}\). We call \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\) the _guarded fragment with global counting_. In addition to the guarded counting terms in (3.N), in \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\) formulas we also allow a restricted form of unguarded counting in the form
\[\#(x_{3-i},y_{1}<\theta_{1},\ldots,y_{k}<\theta_{k}).\varphi, \tag{3.P}\]
where the variable \(x_{i}\) must not occur freely in \(\varphi\). Intuitively, such a term makes a "global" calculation that is unrelated to the "local" properties of the free variable \(x_{i}\).
Let us call a \(\mathsf{GFO}+\mathsf{C}\)-formula or a \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\)-formula _decomposable_ if it is decomposable as an \(\mathsf{FO}^{2}+\mathsf{C}\)-formula.
**Lemma 3.30**.: _For every decomposable \(\mathsf{FO}^{2}+\mathsf{C}\)-formula \(\varphi\) there is a decomposable \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\)-formula \(\varphi^{\prime}\) such that for all graphs \(G\), possibly labelled, and all assignments \(\omega\) over \(G\) we have_
\[(G,\omega)\vDash\varphi\iff(G,\omega)\vDash\varphi^{\prime}.\]
Proof.: We write \(x\) and \(x^{\prime}\) to refer to the variables \(x_{1},x_{2}\), with the understanding that if \(x\) refers to \(x_{i}\) then \(x^{\prime}\) refers to \(x_{3-i}\). We need to replace unguarded terms by combinations of terms of the form (3.N), (3.O), and (3.P).
_Claim 1._ Every term \(\eta\coloneqq\#(x^{\prime},y_{1}<\theta_{1},\ldots,y_{k}<\theta_{k}).\psi\), where \(\psi\) is a \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\)-formula and the \(\theta_{i}\) are \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\)-terms, is equivalent to a \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\)-term.
Proof.: By Lemma 3.2, we may assume without loss of generality that the terms \(\theta_{i}\) do not contain the variables \(x,x^{\prime},y_{1},\ldots,y_{k}\) (and therefore the counting operator is nonadaptive). Indeed, by the lemma we can find a term \(\theta\) built from \(\mathsf{ord}\) and the free number variables of \(\eta\) such that \(\theta\) bounds all \(\theta_{i}\). Then \(\eta\) is equivalent to \(\#(x^{\prime},y_{1}<\theta,\ldots,y_{k}<\theta).(\psi\wedge y_{1}<\theta_{1} \wedge\ldots\wedge y_{k}<\theta_{k})\).
Also without loss of generality we may assume that \(k\geq 1\), because we can always append \(y<1\) for a fresh variable \(y\) in the counting operator without changing the result. To simplify the notation, let us assume that \(k=1\). The generalisation to larger \(k\) is straightforward. Hence
\[\eta=\#(x^{\prime},y<\theta).\psi \tag{3.Q}\]
and \(\theta\) is a term in which neither \(x\) nor \(x^{\prime}\) is free.
Since \(\psi\) is decomposable, we can re-write \(\psi\) as
\[(E(x,x^{\prime})\wedge\psi_{1})\vee(\neg E(x,x^{\prime})\wedge\neg x=x^{ \prime}\wedge\psi_{2})\vee\psi_{3},\]
where \(\psi_{1}\) and \(\psi_{2}\) are Boolean combinations of formulas with at most one free vertex variable (either \(x\) or \(x^{\prime}\)), and \(x^{\prime}\) does not occur freely in \(\psi_{3}\). To see this, note that in graphs all vertices \(x,x^{\prime}\) satisfy exactly one of \(E(x,x^{\prime})\), \(\neg E(x,x^{\prime})\wedge\neg x=x^{\prime}\), and \(x=x^{\prime}\). Thus \(\psi\) is equivalent to the disjunction
\[(E(x,x^{\prime})\wedge\psi)\vee(\neg E(x,x^{\prime})\wedge\neg x=x^{\prime} \wedge\psi)\vee(x=x^{\prime}\wedge\psi).\]
Then to obtain \(\psi_{1}\) and \(\psi_{2}\), we eliminate all atomic formulas \(E(x,x^{\prime}),x=x^{\prime}\) in both free variables from \(\psi\), and to obtain \(\psi_{3}\) we substitute \(x\) for all free occurrence of \(x^{\prime}\). Thus \(\eta\) is equivalent to the term
\[\#(x^{\prime},y<\theta).(E(x,x^{\prime})\wedge\psi_{1}) \tag{3.R}\] \[+\#(x^{\prime},y<\theta).(\neg E(x,x^{\prime})\wedge\neg x=x^{\prime}\wedge\psi_{2})\] \[+\#(y<\theta).\psi_{3}.\]
The first and the third term in this sum are already \(\mathsf{GFO}+\mathsf{C}\)-terms of the forms (3.N) and (3.O), respectively. We only need to worry about the second,
\[\eta_{2}\coloneqq\#(x^{\prime},y<\theta).(\neg E(x,x^{\prime})\wedge\neg x=x^{\prime}\wedge\psi_{2}).\]
We can equivalently re-write \(\eta_{2}\) as
\[\#(x^{\prime},y<\theta).\psi_{2}\;\mathbin{\dot{-}}\;\#(x^{\prime},y<\theta).(E(x,x^{\prime})\wedge\psi_{2})\;\mathbin{\dot{-}}\;\#(y<\theta).\psi_{2}\tfrac{x}{x^{\prime}},\]
where \(\psi_{2}\tfrac{x}{x^{\prime}}\) denotes the formula obtained from \(\psi_{2}\) by replacing all free occurrences of \(x^{\prime}\) by \(x\). The second and the third term are already \(\mathsf{GFO}+\mathsf{C}\)-terms. We only need to worry about the first,
\[\eta_{2}^{\prime}\coloneqq\#(x^{\prime},y<\theta).\psi_{2}\]
Recall that \(\psi_{2}\) is a Boolean combination of formulas with only one free vertex variable. Bringing this Boolean combination into disjunctive normal form, we obtain an equivalent formula \(\bigvee_{i\in I}(\chi_{i}\wedge\chi_{i}^{\prime})\), where \(x\) does not occur freely in \(\chi_{i}^{\prime}\) and \(x^{\prime}\) does not occur freely in \(\chi_{i}\). We can further ensure that the disjuncts are mutually exclusive, that is, for every pair \(x,x^{\prime}\) there is at most one \(i\) such that it satisfies \((\chi_{i}\wedge\chi_{i}^{\prime})\). For example, if \(I=\{1,2\}\), we note that \((\chi_{1}\wedge\chi_{1}^{\prime})\vee(\chi_{2}\wedge\chi_{2}^{\prime})\) is equivalent to
\[((\chi_{1}\wedge\chi_{2})\wedge(\chi_{1}^{\prime}\wedge\chi_{2}^{ \prime}))\] \[\vee((\chi_{1}\wedge\neg\chi_{2})\wedge(\chi_{1}^{\prime}\wedge \chi_{2}^{\prime}))\] \[\vee((\chi_{1}\wedge\chi_{2})\wedge(\chi_{1}^{\prime}\wedge \neg\chi_{2}^{\prime}))\] \[\vee((\chi_{1}\wedge\neg\chi_{2})\wedge(\chi_{1}^{\prime}\wedge \neg\chi_{2}^{\prime}))\] \[\vee((\neg\chi_{1}\wedge\chi_{2})\wedge(\chi_{1}^{\prime}\wedge \chi_{2}^{\prime}))\] \[\vee((\chi_{1}\wedge\chi_{2})\wedge(\neg\chi_{1}^{\prime}\wedge \chi_{2}^{\prime}))\] \[\vee((\neg\chi_{1}\wedge\chi_{2})\wedge(\neg\chi_{1}^{\prime} \wedge\chi_{2}^{\prime})).\]
Then \(\eta_{2}^{\prime}\) is equivalent to the term
\[\sum_{i\in I}\#(x^{\prime},y<\theta).(\chi_{i}\wedge\chi_{i}^{\prime}).\]
Consider a summand \(\eta_{2i}\coloneqq\#(x^{\prime},y<\theta).(\chi_{i}\wedge\chi_{i}^{\prime})\). Let \(\zeta\coloneqq\#x^{\prime}.(\chi_{i}\wedge\chi_{i}^{\prime})\) and note that for all graphs \(G\) and assignments \(\omega\) we have
\[[\![\eta_{2i}]\!]^{(G,\,\alpha)}=\sum_{b<[\![\theta]\!]^{(G,\,\alpha)}}[\![ \zeta]\!]^{(G,\,\alpha\,\tfrac{b}{y})}=\big{|}\big{\{}(b,c)\,\big{|}\,b<[\![ \theta]\!]^{(G,\,\alpha)}\,,c<[\![\zeta]\!]^{(G,\,\alpha\,\tfrac{b}{y})}\, \big{\}}\big{|}.\]
Let \(\eta_{2i}^{\prime}\coloneqq\#(y<\theta,z<\mathsf{ord}).z<\zeta\). Since we always have \([\![\zeta]\!]^{(G,\,\alpha)}\leq|G|\), the terms \(\eta_{2i}\) and \(\eta_{2i}^{\prime}\) are equivalent.
The final step is to turn \(\zeta\) into a \(\mathsf{GFO}+\mathsf{C}^{\mathsf{gc}}\)-term. Recall that \(\zeta=\#x^{\prime}.(\chi_{i}\wedge\chi_{i}^{\prime})\) and that \(x^{\prime}\) is not free in \(\chi_{i}\). If \(x\) does not satisfy \(\chi_{i}\), then the term \(\zeta\) evaluates to \(0\), and otherwise it has the same value as the term \(\#x^{\prime}.\chi_{i}^{\prime}\), which is of the form (3.P). Note that the term \(\#(y^{\prime}<1).\chi_{i}\), where \(y^{\prime}\) is a fresh number variable not occurring in \(\chi_{i}\), is of the form (3.O) and evaluates to \(1\) if \(\chi_{i}\) is satisfied and to \(0\) otherwise. Thus the term
\[\#(y^{\prime}<1).\chi_{i}\,\cdot\,\#x^{\prime}.\chi_{i}^{\prime}\]
is a \(\mathsf{GFO}+\mathsf{C}^{\text{gc}}\)-term equivalent to \(\zeta\). This completes the proof of the claim.
_Claim 2._ Every term \(\eta=\#(x,x^{\prime},y_{1}<\theta_{1},\ldots,y_{k}<\theta_{k}).\psi\), where \(\psi\) is a \(\mathsf{GFO}+\mathsf{C}^{\text{gc}}\)-formula and the \(\theta_{i}\) are \(\mathsf{GFO}+\mathsf{C}^{\text{gc}}\)-terms, is equivalent to a \(\mathsf{GFO}+\mathsf{C}^{\text{gc}}\)-term.
Proof.: Arguing as in the proof of Claim 1, we may assume that
\[\eta=\#(x,x^{\prime},y<\theta).\psi, \tag{3.S}\]
where \(\theta\) is a term in which the variables \(x,x^{\prime}\) are not free.
We proceed very similarly to the proof of Claim 1. The first step is to rewrite the term as the sum
\[\#(x,x^{\prime},y<\theta).(E(x,x^{\prime})\wedge\psi_{1})\] \[+\#(x,x^{\prime},y<\theta).(\neg E(x,x^{\prime})\wedge\neg x=x^{\prime}\wedge\psi_{2}) \tag{3.T}\] \[+\#(x,y<\theta).\psi_{3},\]
where \(\psi_{1}\) and \(\psi_{2}\) are Boolean combinations of formulas with at most one free vertex variable and \(x^{\prime}\) is not free in \(\psi_{3}\). Then the third term is already of the form (3.P), and we only have to deal with the first two.
Let us look at the term \(\eta_{1}\coloneqq\#(x,x^{\prime},y<\theta).(E(x,x^{\prime})\wedge\psi_{1})\). Let \(\zeta_{1}\coloneqq\#(x^{\prime},y<\theta).(E(x,x^{\prime})\wedge\psi_{1})\). Then \(\zeta_{1}\) is a term of the form (3.N). Moreover, for every graph \(G\) and assignment \(\omega\) we have
\[[\![\eta_{1}]\!]^{(G,\omega)}=\sum_{a\in V(G)}[\![\zeta_{1}]\!]^{(G,\omega\frac{a}{x})}=\big{|}\big{\{}(a,c)\,\big{|}\,a\in V(G),\,c<[\![\zeta_{1}]\!]^{(G,\omega\frac{a}{x})}\,\big{\}}\big{|}.\]
We let
\[\eta^{\prime}_{1}\coloneqq\#(x,z<\mathsf{ord}).z<\zeta_{1}.\]
Then \(\eta^{\prime}_{1}\) is a term of the form (3.P) that is equivalent to \(\eta_{1}\).
It remains to deal with the second summand in (3.T), the term \(\eta_{2}\coloneqq\#(x,x^{\prime},y<\theta).(\neg E(x,x^{\prime})\wedge\neg x=x^{\prime}\wedge\psi_{2})\). We rewrite this term as

\[\#(x,x^{\prime},y<\theta).\psi_{2}\;\mathbin{\dot{-}}\;\#(x,x^{\prime},y<\theta).(E(x,x^{\prime})\wedge\psi_{2})\;\mathbin{\dot{-}}\;\#(x,y<\theta).\psi_{2}\tfrac{x}{x^{\prime}}. \tag{3.U}\]

The second summand in (3.U) can be handled in the same way as the term \(\eta_{1}\), and the third summand is of the form (3.P). For the first summand, we again write \(\psi_{2}\) as a mutually exclusive disjunction \(\bigvee_{i\in I}(\chi_{i}\wedge\chi_{i}^{\prime})\) as in the proof of Claim 1 and rewrite the term as the sum \(\sum_{i\in I}\#(x,x^{\prime},y<\theta).(\chi_{i}\wedge\chi_{i}^{\prime})\).
Consider one of the terms in the sum, \(\eta^{\prime}_{2i}\coloneqq\#(x,x^{\prime},y<\theta).(\chi_{i}\wedge\chi^{\prime}_{i})\). We let \(\zeta\coloneqq\#(x,x^{\prime}).(\chi_{i}\wedge\chi^{\prime}_{i})\). As in the proof of Claim 1, for all graphs \(G\) and assignments \(\omega\) we have

\[\big{[}\!\big{[}\eta^{\prime}_{2i}\big{]}\!\big{]}^{(G,\omega)}=\sum_{b<[\![\theta]\!]^{(G,\omega)}}\big{[}\![\zeta]\!\big{]}^{(G,\omega\,\tfrac{b}{y})}=\big{|}\big{\{}(b,c)\,\big{|}\,b<[\![\theta]\!]^{(G,\omega)}\,,c<[\![\zeta]\!]^{(G,\omega\,\tfrac{b}{y})}\,\big{\}}\big{|}.\]
Let \(\eta^{\prime\prime}_{2i}\coloneqq\#(y<\theta,z<\mathsf{ord}\cdot\mathsf{ord}).z<\zeta\). Since we always have \(\big{[}\![\zeta]\!\big{]}^{(G,\omega)}\leq|G|^{2}\), the terms \(\eta^{\prime}_{2i}\) and \(\eta^{\prime\prime}_{2i}\) are equivalent.
To turn \(\zeta\) into a \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\)-term \(\zeta^{\prime}\), we observe that for all graphs \(G\) and assignments \(\omega\) we have
\[\big{[}\![\zeta]\!\big{]}^{(G,\omega)}=\big{|}\big{\{}a\in V(G)\,\big{|}\, \big{|}\,(G,\omega\,\tfrac{a}{x}\big{)}\vDash\chi_{i}\big{\}}\big{|}\cdot\big{|} \big{\{}a^{\prime}\in V(G)\,\big{|}\,\big{(}G,\omega\,\tfrac{a^{\prime}}{x^{ \prime}}\big{)}\vDash\chi^{\prime}_{i}\big{\}}\big{|}.\]
We let \(\zeta^{\prime}\coloneqq\#x.\chi_{i}\cdot\#x^{\prime}.\chi^{\prime}_{i}\).
With these two claims, it is easy to inductively translate a decomposable \(\mathsf{FO}^{2}+\mathsf{C}\)-formula into a \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\)-formula.
Combining Lemmas 3.30 and 3.27 with Corollary 3.6, we obtain the following.
**Corollary 3.31**.: _Let \(\mathds{O}\) be a unary query. Then \(L(\mathds{O})\) is in \(\mathsf{TC}^{0}\) if and only if \(\mathds{O}\) is definable in order-invariant \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}_{\mathrm{nu}}\)._
### Arithmetic in \(\mathsf{GFO}+\mathsf{C}\)
Since all arithmetical \(\mathsf{FO}+\mathsf{C}\)-formulas and terms are in \(\mathsf{GFO}+\mathsf{C}\), most results of Sections 3.4-3.6 apply to \(\mathsf{GFO}+\mathsf{C}\). Exceptions are Lemmas 3.15, 3.16, and 3.20 on iterated addition, which may involve vertex variables. Here we prove variants of these lemmas for the guarded fragment.
**Lemma 3.32**.: _Let \(X\) be a relation variable of type \(\{\mathsf{v}^{2}\mathsf{n}^{\ell}\}\), and let \(U,V\) be function variables of types \(\mathsf{v}^{2}\mathsf{n}^{\ell}\to\mathsf{n}\), \(\mathsf{v}^{2}\to\mathsf{n}\), respectively. Furthermore, let \(\gamma(x,x^{\prime})\) be a guard._
1. _There is a \(\mathsf{GFO}+\mathsf{C}\)-term \(\mathsf{u}\text{-}\mathsf{itadd}(x)\) such that for all structures \(A\), assignments \(\omega\) over \(A\), and \(a\in V(A)\),_ \[\big{[}\mathsf{u}\text{-}\mathsf{itadd}\big{]}^{(A,\omega)}\,(a)=\sum_{(a^{\prime},\vec{b})}\omega(U)(a,a^{\prime},\vec{b}),\] _where the sum ranges over all \((a^{\prime},\vec{b})\in V(A)\times\mathbb{N}^{\ell}\) such that \(A\vDash\gamma(a,a^{\prime})\) and \((a,a^{\prime},\vec{b})\in\omega(X)\) and \(\vec{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) with \(b_{i}<\omega(V)(a,a^{\prime})\) for all \(i\in[\ell]\)._

2. _There are \(\mathsf{GFO}+\mathsf{C}\)-terms \(\mathsf{u}\text{-}\mathsf{max}(x)\) and \(\mathsf{u}\text{-}\mathsf{min}(x)\) such that for all structures \(A\), assignments \(\omega\) over \(A\), and \(a\in V(A)\),_ \[\big{[}\mathsf{u}\text{-}\mathsf{max}\big{]}^{(A,\omega)}\,(a)=\max_{(a^{\prime},\vec{b})}\omega(U)(a,a^{\prime},\vec{b}),\qquad\big{[}\mathsf{u}\text{-}\mathsf{min}\big{]}^{(A,\omega)}\,(a)=\min_{(a^{\prime},\vec{b})}\omega(U)(a,a^{\prime},\vec{b}),\] _where \(\max\) and \(\min\) range over all \((a^{\prime},\vec{b})\in V(A)\times\mathbb{N}^{\ell}\) such that \(A\vDash\gamma(a,a^{\prime})\) and \((a,a^{\prime},\vec{b})\in\omega(X)\) and \(\vec{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) with \(b_{i}<\omega(V)(a,a^{\prime})\) for all \(i\in[\ell]\)._
Proof.: The proof is very similar to the proof of Lemma 3.15; we just have to make sure that the terms we define are guarded. Again, we assume for simplicity that \(\ell=1\).
We let
\[\mathsf{u}\text{-}\mathsf{itadd}(x)\coloneqq\#\big{(}x^{\prime},y<V(x,x^{\prime}),z<U(x,x^{\prime},y)\big{)}.\big{(}\gamma(x,x^{\prime})\wedge X(x,x^{\prime},y)\big{)}\]
and
\[\mathsf{u}\text{-}\mathsf{max}(x) \coloneqq\#z<\mathsf{u}\text{-}\mathsf{itadd}(x).\exists(x^{\prime},y<V(x,x^{\prime})).\big{(}\gamma(x,x^{\prime})\wedge X(x,x^{\prime},y)\wedge z<U(x,x^{\prime},y)\big{)},\] \[\mathsf{u}\text{-}\mathsf{min}(x) \coloneqq\#z<\mathsf{u}\text{-}\mathsf{itadd}(x).\forall(x^{\prime},y<V(x,x^{\prime})).\big{(}\gamma(x,x^{\prime})\wedge X(x,x^{\prime},y)\to z<U(x,x^{\prime},y)\big{)}.\]
**Lemma 3.33**.: _Let \(\vec{Z}\) be an r-schema of type \(\mathsf{v}^{2}\mathsf{n}^{\ell}\to\mathtt{r}\). Let \(X\) be a relation variable of type \(\{\mathsf{v}^{2}\mathsf{n}^{\ell}\}\), and let \(V\) be a function variable of type \(\mathsf{v}^{2}\to\mathtt{n}\). Furthermore, let \(\gamma(x,x^{\prime})\) be a guard._
1. _There is a guarded r-expression_ \(\mathsf{i}\mathsf{t}\mathsf{add}(x)\) _such that for all structures_ \(A\)_, assignments_ \(\omega\) _over_ \(A\)_, and_ \(a\in V(A)\) _we have_ \[\langle\!\langle\mathsf{i}\mathsf{t}\mathsf{add}\rangle\!\rangle^{(A,a)}\,(a )=\sum_{(a^{\prime},\vec{b})}\langle\!\langle\vec{Z}\rangle\!\rangle^{(A,a)} \,(a,a^{\prime},\vec{b}),\] _where the sum ranges over all_ \((a^{\prime},\vec{b})\in V(A)\times\mathbb{N}^{\ell}\) _such that_ \(A\vDash\gamma(a,a^{\prime})\) _and_ \((a,a^{\prime},\vec{b})\in\omega(X)\) _and_ \(\vec{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) _with_ \(b_{i}<\omega(V)(a,a^{\prime})\) _for all_ \(i\in[\ell]\)_._
2. _There are guarded r-expressions_ \(\mathsf{max}(x)\) _and_ \(\mathsf{min}(x)\) _such that for all structures_ \(A\)_, assignments_ \(\omega\) _over_ \(A\)_, and_ \(a\in V(A)\) _we have_ \[\langle\!\langle\mathsf{max}\rangle\!\rangle^{(A,a)}\,(a) =\max_{(a^{\prime},\vec{b})}\langle\!\langle\vec{Z}\rangle\! \rangle^{(A,a)}\,(a,a^{\prime},\vec{b}),\] \[\langle\!\langle\mathsf{min}\rangle\!\rangle^{(A,a)}\,(a) =\min_{(a^{\prime},\vec{b})}\langle\!\langle\vec{Z}\rangle\! \rangle^{(A,a)}\,(a,a^{\prime},\vec{b}),\] _where_ \(\max\) _and_ \(\min\) _range over all_ \((a^{\prime},\vec{b})\in V(A)\times\mathbb{N}^{\ell}\) _such that_ \(A\vDash\gamma(a,a^{\prime})\) _and_ \((a,a^{\prime},\vec{b})\in\omega(X)\) _and_ \(\vec{b}=(b_{1},\ldots,b_{\ell})\in\mathbb{N}^{\ell}\) _with_ \(b_{i}<\omega(V)(a,a^{\prime})\) _for all_ \(i\in[\ell]\)_._
Proof.: The proof is an easy adaptation of the proof of Lemma 3.20, arguing as in the proof of Lemma 3.32 to make sure that we obtain guarded formulas and terms.
## 4 Graph Neural Networks
We will work with standard message passing graph neural networks (GNNs5) [10]. A GNN consists of a finite sequence of layers. A _GNN layer_ of input dimension \(p\) and output dimension \(q\) is a triple \(\mathfrak{L}=(\mathsf{msg},\mathsf{agg},\mathsf{comb})\) of functions: a _message function_\(\mathsf{msg}:\mathbb{R}^{2p}\to\mathbb{R}^{p^{\prime}}\), an _aggregation function_\(\mathsf{agg}\) mapping finite multisets of vectors in \(\mathbb{R}^{p^{\prime}}\) to vectors in \(\mathbb{R}^{p^{\prime\prime}}\), and a _combination function_\(\mathsf{comb}:\mathbb{R}^{p+p^{\prime\prime}}\to\mathbb{R}^{q}\). A _GNN_ is a tuple \(\mathfrak{N}=(\mathfrak{L}^{(1)},\ldots,\mathfrak{L}^{(d)})\) of GNN layers, where the output dimension \(q^{(i)}\) of \(\mathfrak{L}^{(i)}\) matches the input dimension \(p^{(i+1)}\) of \(\mathfrak{L}^{(i+1)}\). We call \(q^{(0)}\coloneqq p^{(1)}\) the input dimension of \(\mathfrak{N}\) and \(q^{(d)}\) the output dimension.
Footnote 5: We use the abbreviation GNN, but MPNN is also very common.
To define the semantics, let \(\mathfrak{L}=(\mathsf{msg},\mathsf{agg},\mathsf{comb})\) be a GNN layer of input dimension \(p\) and output dimension \(q\). It computes a function \(\mathfrak{L}\colon\mathcal{GF}_{p}\to\mathcal{GF}_{q}\) (as for circuits and feedforward neural networks, we use the same letter to denote the network and the function it computes) defined by \(\mathfrak{L}(G,\mathpzc{x})\coloneqq(G,\mathpzc{y})\), where \(\mathpzc{y}:V(G)\to\mathbb{R}^{q}\) is defined by
\[\mathpzc{y}(v)\coloneqq\mathsf{comb}\Bigg{(}\mathpzc{x}(v),\mathsf{agg}\Big{(}\Big{\{}\Big{\{}\mathsf{msg}\big{(}\mathpzc{x}(v),\mathpzc{x}(w)\big{)}\;\Big{|}\;w\in N_{G}(v)\Big{\}}\Big{\}}\Big{)}\Bigg{)}. \tag{4.A}\]
A GNN \(\mathfrak{N}=(\mathfrak{L}^{(1)},\ldots,\mathfrak{L}^{(d)})\) composes the transformations computed by its layers \(\mathfrak{L}^{(i)}\), that is, it computes the function \(\mathfrak{N}\colon\mathcal{GF}_{q^{(0)}}\to\mathcal{GF}_{q^{(d)}}\) defined by
\[\mathfrak{N}(G,\mathpzc{x})\coloneqq\big{(}\mathfrak{L}^{(d)}\circ\mathfrak{L}^{(d-1)}\circ\ldots\circ\mathfrak{L}^{(1)}\big{)}(G,\mathpzc{x}).\]
It will be convenient to also define \(\widetilde{\mathfrak{N}}\) as the function mapping \((G,\mathpzc{x})\) to the signal \(\mathpzc{x}^{\prime}\in\mathcal{S}_{q^{(d)}}(G)\) such that \(\mathfrak{N}(G,\mathpzc{x})=(G,\mathpzc{x}^{\prime})\), so \(\mathfrak{N}(G,\mathpzc{x})=(G,\widetilde{\mathfrak{N}}(G,\mathpzc{x}))\), and similarly \(\widetilde{\mathfrak{L}}\) for a single layer \(\mathfrak{L}\).
Our version of GNNs corresponds to the _message passing neural networks_ due to [10]. Another version that can be found in the literature, cleanly formalised as the _aggregate-combine GNNs_ in [3], only allows the messages to depend on the vertex they are sent from. So the \(\mathsf{msg}(\mathpzc{x}(v),\mathpzc{x}(w))\) in (4.A) becomes \(\mathsf{msg}(\mathpzc{x}(w))\). On the surface, our version is more powerful, but it is not clear to us if it is really more expressive.
The reason we decided to use the version with messages depending on both endvertices of the edge is that in practical work we also found it beneficial to use this version. However, slightly adapting the logic (cf. Remark 3.29), our results also have a version for aggregate-combine GNNs.
So far, we have defined GNNs as an abstract computation model computing transformations between graph signals. To turn them into deep learning models, we represent the functions that specify the layers by feedforward neural networks. More precisely, we assume that the message functions \(\mathsf{msg}\) and the combination functions \(\mathsf{comb}\) of all GNN layers are specified by FNNs \(\mathfrak{F}_{\mathsf{msg}}\) and \(\mathfrak{F}_{\mathsf{comb}}\). Furthermore, we assume that the aggregation function \(\mathsf{agg}\) is either summation \(\mathsf{SUM}\) or arithmetic mean \(\mathsf{MEAN}\) or maximum \(\mathsf{MAX}\). Note that this means that the aggregation function does not change the dimension,
that is, we always have \(p^{\prime}=p^{\prime\prime}\) (referring to the description of GNN layers above). To be able to deal with isolated nodes as well, we define \(\mathsf{SUM}(\varnothing)\coloneqq\mathsf{MEAN}(\varnothing)\coloneqq\mathsf{ MAX}(\varnothing)\coloneqq\mathbf{0}\).
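For concreteness, the update rule (4.A) together with the convention for empty neighbourhoods can be mirrored by the following Python sketch (an informal illustration, not a deep-learning implementation; `adj` maps every vertex to the list of its neighbours, features are plain lists of floats, and all names are ours):

```python
def gnn_layer(adj, x, msg, agg, comb):
    """One message-passing layer, cf. (4.A):
    y(v) = comb(x(v), agg({{ msg(x(v), x(w)) | w in N(v) }}))."""
    y = {}
    for v, neighbours in adj.items():
        messages = [msg(x[v], x[w]) for w in neighbours]
        if messages:
            aggregated = [agg(coord) for coord in zip(*messages)]  # coordinatewise
        else:
            # convention for isolated nodes: SUM/MEAN/MAX of the empty multiset is 0
            aggregated = [0.0] * len(msg(x[v], x[v]))
        y[v] = comb(x[v], aggregated)
    return y

# SUM aggregation on a triangle with 1-dimensional features x(v) = v:
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
x = {v: [float(v)] for v in adj}
y = gnn_layer(adj, x, msg=lambda xv, xw: [xw[0]],
              agg=sum, comb=lambda xv, z: [xv[0] + z[0]])
assert y == {0: [3.0], 1: [3.0], 2: [3.0]}
```

A GNN then simply composes such layers with matching dimensions, as in the definition of \(\mathfrak{N}\) above.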
If the FNNs \(\mathfrak{F}_{\mathsf{msg}}\) and \(\mathfrak{F}_{\mathsf{comb}}\) on all layers are (rational) piecewise linear, we call the GNN _(rational) piecewise linear_. Similarly, if they are rpl-approximable, we call the GNN _rpl-approximable_.
We mention a few extensions of the basic GNN model. Most importantly, in a GNN with _global readout_[3] (or, equivalently, a GNN with a _virtual node_[10]) in each round the nodes also obtain the aggregation of the states of all nodes in addition to the messages they receive from their neighbours. So the state update rule (4.A) becomes
\[\mathpzc{y}(v)\coloneqq\mathsf{comb}\Bigg{(}\mathpzc{x}(v),\mathsf{agg}\Big{(}\Big{\{}\Big{\{}\mathsf{msg}\big{(}\mathpzc{x}(v),\mathpzc{x}(w)\big{)}\;\Big{|}\;w\in N_{G}(v)\Big{\}}\Big{\}}\Big{)},\mathsf{agg}^{\prime}\Big{(}\Big{\{}\Big{\{}\mathpzc{x}(w)\;\Big{|}\;w\in V(G)\Big{\}}\Big{\}}\Big{)}\Bigg{)}.\]

The following lemma bounds the growth of the signal computed by a single GNN layer.

**Lemma 4.2**.: _Let \(\mathfrak{L}\) be a GNN layer of input dimension \(p\). Then there is a \(\gamma=\gamma(\mathfrak{L})\in\mathbb{N}_{>0}\) such that for all graphs \(G\), all signals \(\mathbf{x}\in\mathcal{S}_{p}(G)\), and all vertices \(v\in V(G)\) we have_

\[\big{\|}\widetilde{\mathfrak{L}}(G,\mathbf{x})(v)\big{\|}_{\infty} \leq\gamma\big{(}\big{\|}\mathbf{x}|_{N[v]}\big{\|}_{\infty}+1\big{)}\max\{\deg(v),1\} \tag{4.B}\] \[\leq\gamma\big{(}\|\mathbf{x}\|_{\infty}+1\big{)}|G|. \tag{4.C}\]

Note that the local bound (4.B) only depends on the neighbourhood of \(v\). The _global_ bound (4.C) is simpler, but a bit weaker.
Proof.: Clearly, (4.B) implies (4.C), so we only have to prove the local bound (4.B). Let \(\mathsf{msg},\mathsf{agg},\mathsf{comb}\) be the message, aggregation, and combination functions of \(\mathfrak{L}\). Let \(\mathfrak{F}_{\mathsf{msg}}\) and \(\mathfrak{F}_{\mathsf{comb}}\) be FNNs computing \(\mathsf{msg}\) and \(\mathsf{comb}\), respectively, and let \(\gamma_{\mathsf{msg}}\coloneqq\gamma(\mathfrak{F}_{\mathsf{msg}})\) and \(\gamma_{\mathsf{comb}}\coloneqq\gamma(\mathfrak{F}_{\mathsf{comb}})\) be the constants of Lemma 2.5.
Let \(G\) be a graph, \(\mathbf{x}\in\mathcal{S}_{p}(G)\), and \(v\in V(G)\). Then for all \(w\in N_{G}(v)\) we have

\[\big{\|}\mathsf{msg}\big{(}\mathbf{x}(v),\mathbf{x}(w)\big{)}\big{\|}_{\infty}\leq\gamma_{\mathsf{msg}}\big{(}\big{\|}\mathbf{x}|_{N[v]}\big{\|}_{\infty}+1\big{)}.\]

Since for every multiset \(M\) we have \(\mathsf{agg}(M)\leq|M|\cdot m\), where \(m\) is the maximum absolute value of the entries of \(M\), it follows that for

\[\mathpzc{z}(v)\coloneqq\mathsf{agg}\Big{(}\big{\{}\big{\{}\mathsf{msg}(\mathbf{x}(v),\mathbf{x}(w))\,\big{|}\,w\in N_{G}(v)\big{\}}\big{\}}\Big{)}\]

we have

\[\|\mathpzc{z}(v)\|_{\infty}\leq\gamma_{\mathsf{msg}}\big{(}\big{\|}\mathbf{x}|_{N[v]}\big{\|}_{\infty}+1\big{)}\deg(v).\]

Since \(\gamma_{\mathsf{msg}}\geq 1\), this implies

\[\max\big{\{}\|\mathbf{x}(v)\|_{\infty},\|\mathpzc{z}(v)\|_{\infty}\big{\}}\leq\gamma_{\mathsf{msg}}\big{(}\big{\|}\mathbf{x}|_{N[v]}\big{\|}_{\infty}+1\big{)}\max\{\deg(v),1\}.\]

Hence

\[\big{\|}\widetilde{\mathfrak{L}}(G,\mathbf{x})(v)\big{\|}_{\infty} =\big{\|}\mathsf{comb}\big{(}\mathbf{x}(v),\mathpzc{z}(v)\big{)}\big{\|}_{\infty}\] \[\leq\gamma_{\mathsf{comb}}\cdot\big{(}\gamma_{\mathsf{msg}}\big{(}\big{\|}\mathbf{x}|_{N[v]}\big{\|}_{\infty}+1\big{)}\max\{\deg(v),1\}+1\big{)}\] \[\leq 2\gamma_{\mathsf{comb}}\gamma_{\mathsf{msg}}\cdot\big{(}\big{\|}\mathbf{x}|_{N[v]}\big{\|}_{\infty}+1\big{)}\max\{\deg(v),1\}.\]
We let \(\gamma\coloneqq 2\gamma_{\mathsf{comb}}\gamma_{\mathsf{msg}}\).
**Lemma 4.3**.: _Let \(\mathfrak{L}\) be a GNN layer of input dimension \(p\). Then there is a \(\lambda=\lambda(\mathfrak{L})\in\mathbb{N}_{>0}\) such that for all graphs \(G\), all signals \(x,\mathbf{x}^{\prime}\in\mathcal{S}_{p}(G)\), and all vertices \(v\in V(G)\) we have_
\[\big{\|}\widetilde{\mathfrak{L}}(G,\mathbf{x})(v)-\widetilde{\mathfrak{L}}(G,\mathbf{x}^{\prime})(v)\big{\|}_{\infty} \leq\lambda\left\|\mathbf{x}|_{N[v]}-\mathbf{x}^{\prime}|_{N[v]}\right\|_{\infty}\max\{\deg(v),1\} \tag{4.D}\] \[\leq\lambda\left\|\mathbf{x}-\mathbf{x}^{\prime}\right\|_{\infty}|G|. \tag{4.E}\]
Proof.: Again, the local bound (4.D) implies the global bound (4.E). So we only need to prove (4.D). Let \(\mathsf{msg},\mathsf{agg},\mathsf{comb}\) be the message, aggregation, and combination functions of \(\mathfrak{L}\). Let \(\mathfrak{F}_{\mathsf{msg}}\) and \(\mathfrak{F}_{\mathsf{comb}}\) be FNNs computing \(\mathsf{msg}\) and \(\mathsf{comb}\), respectively, and let \(\lambda_{\mathsf{msg}}\coloneqq\lambda(\mathfrak{F}_{\mathsf{msg}})\) and \(\lambda_{\mathsf{comb}}\coloneqq\lambda(\mathfrak{F}_{\mathsf{comb}})\) be their Lipschitz constants (cf. Lemma 2.5).
Let \(G\) be a graph, \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{S}_{p}(G)\), and \(\mathbf{y}\coloneqq\widetilde{\mathfrak{L}}(G,\mathbf{x})\), \(\mathbf{y}^{\prime}\coloneqq\widetilde{\mathfrak{L}}(G,\mathbf{x}^{\prime})\). Let \(v\in V(G)\). For all \(w\in N(v)\) we have
\[\big{\|}\mathsf{msg}(\mathbf{x}(v),\mathbf{x}(w))-\mathsf{msg}(\mathbf{x}^{\prime}(v),\mathbf{x}^{\prime}(w))\big{\|}_{\infty}\leq\lambda_{\mathsf{msg}}\big{\|}(\mathbf{x}(v),\mathbf{x}(w))-(\mathbf{x}^{\prime}(v),\mathbf{x}^{\prime}(w))\big{\|}_{\infty}\,.\]
Thus for
\[\mathpzc{z}(v) \coloneqq\mathsf{agg}\Big{(}\big{\{}\big{\{}\mathsf{msg}(\mathbf{x}(v),\mathbf{x}(w))\,\big{|}\,w\in N_{G}(v)\big{\}}\big{\}}\Big{)},\] \[\mathpzc{z}^{\prime}(v) \coloneqq\mathsf{agg}\Big{(}\big{\{}\big{\{}\mathsf{msg}(\mathbf{x}^{\prime}(v),\mathbf{x}^{\prime}(w))\,\big{|}\,w\in N_{G}(v)\big{\}}\big{\}}\Big{)}\]
we have
\[\big{\|}\mathpzc{z}(v)-\mathpzc{z}^{\prime}(v)\big{\|}_{\infty}\leq\lambda_{\mathsf{msg}}\big{\|}\mathbf{x}|_{N[v]}-\mathbf{x}^{\prime}|_{N[v]}\big{\|}_{\infty}\deg(v). \tag{4.F}\]
It follows that
\[\|\mathbf{y}(v)-\mathbf{y}^{\prime}(v)\|_{\infty} =\big{\|}\mathsf{comb}\big{(}\mathbf{x}(v),\mathpzc{z}(v)\big{)}-\mathsf{comb}\big{(}\mathbf{x}^{\prime}(v),\mathpzc{z}^{\prime}(v)\big{)}\big{\|}_{\infty}\] \[\leq\lambda_{\mathsf{comb}}\max\big{\{}\|\mathbf{x}(v)-\mathbf{x}^{\prime}(v)\|_{\infty},\|\mathpzc{z}(v)-\mathpzc{z}^{\prime}(v)\|_{\infty}\big{\}}\] \[\leq\lambda_{\mathsf{comb}}\lambda_{\mathsf{msg}}\,\big{\|}\mathbf{x}|_{N[v]}-\mathbf{x}^{\prime}|_{N[v]}\big{\|}_{\infty}\max\{\deg(v),1\}.\]

We let \(\lambda\coloneqq\lambda_{\mathsf{comb}}\lambda_{\mathsf{msg}}\).
_such that the following holds for all graphs \(G\) and assignments \(\omega\) over \(G\). Let \(\mathbf{x}\in\mathcal{S}_{p}(G)\) be the signal defined by_
\[\mathbf{x}(v)\coloneqq\left(\llparent{\mathbf{X}_{1}}\right)^{\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
r-expression itadd of Lemma 3.33 to obtain guarded r-expressions \(\boldsymbol{\sigma}_{i}\), for \(i\in[r]\), such that for all \(v\in V(G)\) we have \[\langle\!\langle\boldsymbol{\sigma}_{i}\rangle\!\rangle^{(G,a)}\,(v)=\sum_{v^{ \prime}\in N(v)}\langle\!\langle\boldsymbol{\mu}_{i}\rangle\!\rangle^{(G,a)} \,(v,v^{\prime}).\] (5.E) Then we substitute the r-expressions \(\boldsymbol{\sigma}_{i}(x)\) for the variables \(\boldsymbol{Z}_{i}\) in the formulas \(\boldsymbol{\gamma}_{j}\) of Claim 2 and obtain the desired r-expressions \(\mathsf{l}\text{-}\mathsf{eval}_{j}(x)\) such that \[\Big{(}\,\langle\!\langle\mathsf{l}\text{-}\mathsf{eval}_{1} \rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\mathsf{l}\text{-} \mathsf{eval}_{q}\rangle\!\rangle^{(G,a)}\,(v)\,\Big{)}\] \[\qquad=\mathsf{comb}\Big{(}\,\langle\!\langle\boldsymbol{X}_{1} \rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\boldsymbol{X}_{p} \rangle\!\rangle^{(G,a)}\,(v),\langle\!\langle\boldsymbol{\sigma}_{1}\rangle \!\rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\boldsymbol{\sigma}_{r}\rangle\! \rangle^{(G,a)}\,(v)\Big{)}\] \[\qquad=\mathpzc{z}(v).\] Thus in this case, the r-expressions \(\mathsf{l}\text{-}\mathsf{eval}_{j}(x)\) even define \(\mathpzc{z}\) exactly. Of course this implies that they satisfy (5.D). _Case 2:_agg = MAX. We can argue as in Case 1, using the r-expression \(\mathsf{max}\) of Lemma 3.33 instead of itadd. Again, we obtain r-expressions \(\mathsf{l}\text{-}\mathsf{eval}_{j}(x)\) that define \(\mathpzc{z}\) exactly. _Case 3:_agg = MEAN. The proof is similar to Case 1 and Case 2, but we need to be careful. We cannot define the mean of a family of numbers exactly, but only approximately, because of the division it involves. Exactly as in Case 1 we define r-expressions \(\boldsymbol{\sigma}_{i}(x)\) satisfying (5.E). Recall that \(\lambda\) is a Lipschitz constant for comb. Let \(\boldsymbol{\delta}(x)\coloneqq\#x^{\prime}.E(x,x^{\prime})\) be a term defining the degree of a vertex. Using Lemma 3.21 we can construct an r-expression \(\boldsymbol{\nu}_{i}\) such that \[\left|\frac{\langle\!\langle\boldsymbol{\sigma}_{i}\rangle\!\rangle^{(G,a)} \,(v)}{\langle\!\langle\boldsymbol{\delta}\rangle\!\rangle^{(G,a)}\,(v)}- \langle\!\langle\boldsymbol{\nu}_{i}\rangle\!\rangle^{(G,a)}\,(v)\right|<2^{-a \,(W)-\lambda}\leq\lambda^{-1}2^{-a\,(W)}\] if \(\langle\!\langle\boldsymbol{\delta}\rangle\!\rangle^{(G,a)}\,(v)=\deg(v)\neq 0\) and \(\langle\!\langle\boldsymbol{\nu}_{i}\rangle\!\rangle^{(G,a)}\,(v)=0\) otherwise. Thus, letting \[x(v) \coloneqq\text{MEAN}\big{(}\big{\{}\mathsf{msg}(x(v),x(v^{ \prime}))\,\big{|}\,v^{\prime}\in N(v)\big{\}}\big{)}\] \[=\begin{cases}0&\text{if }\deg(v)=0,\\ \big{(}\frac{\langle\!\langle\boldsymbol{\sigma}_{1}\rangle\!\rangle^{(G,a)} \,(v)}{\langle\!\langle\boldsymbol{\delta}\rangle\!\rangle^{(G,a)}\,(v)}, \ldots,\frac{\langle\!\langle\boldsymbol{\sigma}_{r}\rangle\!\rangle^{(G,a)} \,(v)}{\langle\!\langle\boldsymbol{\delta}\rangle\!\rangle^{(G,a)}\,(v)}\bigg{)} &\text{otherwise}\end{cases}\] we have \[\left\|x(v)-\Big{(}\,\langle\!\langle\boldsymbol{\nu}_{1}\rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\boldsymbol{\nu}_{r}\rangle\!\rangle^{(G,a)} \,(v)\Big{)}\right\|_{\infty}\leq\lambda^{-1}2^{-a\,(W)}\] for all \(v\in V(G)\). By the Lipschitz continuity of comb, this implies \[\left\|\mathsf{comb}\big{(}x(v),x(v)\big{)}-\mathsf{comb}\Big{(}x(v),\langle\! \langle\boldsymbol{\nu}_{1}\rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\! 
\langle\boldsymbol{\nu}_{r}\rangle\!\rangle^{(G,a)}\,(v)\Big{)}\right\|_{ \infty}\leq 2^{-a\,(W)}.\] (5.F)
We substitute the r-expressions \(\mathbf{\nu}_{i}(x)\) for the variables \(\mathbf{Z}_{i}\) in the formulas \(\mathbf{\gamma}_{j}\) of Claim 2 and obtain r-expressions \(\mathsf{l}\mathsf{-eval}_{j}(x)\) such that
\[\Big{(}\,\langle\!\langle\mathsf{l}\mathsf{-eval}_{1}\rangle\! \rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\mathsf{l}\mathsf{-eval}_{q} \rangle\!\rangle^{(G,a)}\,(v)\Big{)}\] \[\quad=\mathsf{comb}\Big{(}\,\langle\!\langle\mathbf{X}_{1}\rangle\! \rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\mathbf{X}_{p}\rangle\!\rangle^{(G,a)} \,(v),\langle\!\langle\mathbf{\nu}_{1}\rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\! \langle\mathbf{\nu}_{r}\rangle\!\rangle^{(G,a)}\,(v)\Big{)}\] \[\quad=\mathsf{comb}\Big{(}\,x(v),\langle\!\langle\mathbf{\nu}_{1} \rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\mathbf{\nu}_{r}\rangle\! \rangle^{(G,a)}\,(v)\Big{)}.\]
Since \(\mathbf{y}(v)=\mathsf{comb}\big{(}\mathbf{x}(v),\mathbf{z}(v)\big{)}\), the assertion (5.D) follows from (5.F).
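As a quick sanity check on the error propagation in Case 3, the following sketch (not part of the proof; it assumes numpy, and the particular combination function is made up) verifies numerically that perturbing the aggregate by at most \(\lambda^{-1}2^{-k}\) moves the output of a \(\lambda\)-Lipschitz function by at most \(2^{-k}\).

```python
# Sketch only: numeric check that a lam-Lipschitz combination function turns
# an aggregate error of lam^{-1} * 2^{-k} into an output error of at most 2^{-k}.
import numpy as np

lam, k = 4.0, 10
rng = np.random.default_rng(0)

def comb(x, z):
    # (lam / 2)-Lipschitz in the max norm, hence lam-Lipschitz
    return (lam / 2.0) * np.maximum(x + z, 0.0)

x, z = rng.normal(size=5), rng.normal(size=5)
z_approx = z + rng.uniform(-1.0, 1.0, size=5) * 2.0**(-k) / lam

err = np.max(np.abs(comb(x, z) - comb(x, z_approx)))
assert err <= 2.0**(-k)
print(f"output error {err:.3e} <= 2^-{k} = {2.0**(-k):.3e}")
```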
Proof of Theorem 5.1.: We fix a graph \(G\) and assignment \(\mathbf{\omega}\) over \(G\) for the presentation of the proof; as usual the formulas we shall define will not depend on this graph and assignment. Let \(x\in\mathcal{S}_{p}(G)\) be the signal defined in (5.A).
Suppose that \(\mathfrak{N}=(\mathfrak{L}^{(1)},\ldots,\mathfrak{L}^{(d)})\). Let \(p^{(i-1)}\) be the input dimension of \(\mathfrak{L}^{(i)}\), and let \(p^{(i)}\) be the output dimension. Then \(p=p^{(0)}\) and \(q=p^{(d)}\). Moreover, let \(\mathbf{x}^{(0)}\coloneqq\mathbf{x}\) and \(\mathbf{x}^{(i)}\coloneqq\widetilde{\mathfrak{L}}^{(i)}(G,\mathbf{x}^{(i-1)})\) for \(i\in[d]\). Note that \(\mathbf{x}^{(d)}=\mathbf{y}\).
For every \(i\in[d]\), let \(\lambda^{(i)}\coloneqq\lambda(\mathfrak{L}^{(i)})\) be the constant of Lemma 4.3. We inductively define a sequence of \(\mathsf{GFO}+\mathsf{C}\)-terms \(\mathsf{err}^{(i)}(x)\), which will give us the desired error bounds. We let \(\mathsf{err}^{(d)}(x)\coloneqq W(x)\). To define \(\mathsf{err}^{(i)}(x)\) for \(0\leq i<d\), we first note that by Lemma 3.32, for every \(\mathsf{GFO}+\mathsf{C}\)-term \(\theta(x)\) there is a \(\mathsf{GFO}+\mathsf{C}\)-term \(\mathsf{maxN}_{\theta}(x)\) such that for every \(v\in V(G)\) we have
\[[\![\mathsf{maxN}_{\theta}]\!]^{(G,a)}\,(v)=\max\Big{\{}\,[\![\theta]\!]^{(G,a )}\,(w)\,\Big{|}\,w\in N_{G}[v]\Big{\}}.\]
We let \(\mathsf{dg}(x)\coloneqq(\#x^{\prime}.E(x,x^{\prime}))+1\) and
\[\mathsf{err}^{(i)}(x)\coloneqq\mathsf{maxN}_{\mathsf{err}^{(i+1)}}(x)+\lambda ^{(i+1)}\cdot\mathsf{maxN}_{\mathsf{dg}}(x)+1.\]
Letting
\[k^{(i)}(v)\coloneqq\left[\!\left|\mathsf{err}^{(i)}\right|\!\right]^{(G,a)}(v),\]
for every \(v\in V(G)\) and \(0\leq i<d\), we have
\[k^{(i)}(v)=\max\big{\{}k^{(i+1)}(w)\,\big{|}\,w\in N_{G}[v]\big{\}}+\lambda^{( i+1)}\max\big{\{}\,\deg_{G}(w)+1\,\big{|}\,w\in N_{G}[v]\big{\}}+1. \tag{5.G}\]
Furthermore, \(k^{(d)}(v)=\mathbf{\omega}(W)(v)\).
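To make the recursion concrete, here is a small sketch (assuming networkx; the path graph, the target precision \(\omega(W)=8\), and the Lipschitz constants are illustrative values, not taken from the text) that evaluates the error budgets \(k^{(i)}(v)\) of (5.G) bottom-up on a toy graph.

```python
# Sketch only: evaluate the error-budget recursion (5.G) on a toy graph.
import networkx as nx

G = nx.path_graph(5)
d = 2                              # number of layers
lam = {1: 3, 2: 3}                 # Lipschitz constants lambda^(1), lambda^(2)
k = {d: {v: 8 for v in G}}         # k^(d)(v) = omega(W)(v)

for i in range(d - 1, -1, -1):
    k[i] = {}
    for v in G:
        ball = list(G.neighbors(v)) + [v]          # closed neighbourhood N[v]
        k[i][v] = (max(k[i + 1][w] for w in ball)
                   + lam[i + 1] * max(G.degree(w) + 1 for w in ball)
                   + 1)

print(k[0])   # bits of precision needed for the input-layer approximations
```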
Now for \(i\in[d]\) and \(j\in[p^{(i)}]\) we shall define guarded r-expressions \(\mathbf{\rho}_{j}^{(i)}(x)\) such that for all \(v\in V(G)\), with
\[\mathbf{\varkappa}^{(i)}(v)\coloneqq\Big{(}\,\langle\!\langle\mathbf{\rho}_{1}^{(i)} \rangle\!\rangle\,(v),\ldots,\langle\!\langle\mathbf{\rho}_{p^{(i)}}^{(i)}\rangle \!\rangle\,(v)\Big{)}\]
we have
\[\left\|\mathbf{x}^{(i)}(v)-\mathbf{\varkappa}^{(i)}(v)\right\|_{\infty}\leq 2^{-k^{(i)}(v)}. \tag{5.H}\]
For \(i=d\) and with \(\mathsf{gnn}\text{-}\mathsf{eval}_{j}\coloneqq\boldsymbol{\rho}_{j}^{(d)}\), this implies (5.B) and hence the assertion of the theorem.
To define \(\boldsymbol{\rho}_{j}^{(1)}(x)\), we apply Lemma 5.2 to the first layer \(\mathfrak{L}^{(1)}\) and substitute \(\mathsf{err}^{(0)}\) for \(W\) in the resulting r-expression. Then (5.H) for \(i=1\) follows directly from Lemma 5.2 and the fact that \(k^{(0)}(v)\geq k^{(1)}(v)\) for all \(v\).
For the inductive step, let \(2\leq i\leq d\) and suppose that we have defined \(\boldsymbol{\rho}_{j}^{(i-1)}(x)\) for all \(j\in[p^{(i-1)}]\). To define \(\boldsymbol{\rho}_{j}^{(i)}(x)\), we apply Lemma 5.2 to the \(i\)th layer \(\mathfrak{L}^{(i)}\) and substitute \(\boldsymbol{\rho}_{1}^{(i-1)},\ldots,\boldsymbol{\rho}_{p^{(i-1)}}^{(i-1)}\) for \(\boldsymbol{X}_{1},\ldots,\boldsymbol{X}_{p^{(i-1)}}\) and \(\mathsf{err}^{(i-1)}\) for \(W\) in the resulting r-expression. Then by Lemma 5.2, for all \(v\in V(G)\) we have
\[\left\|\widehat{\mathfrak{L}}^{(i)}(G,\varkappa^{(i-1)})(v)-\varkappa^{(i)}(v )\right\|_{\infty}\leq 2^{-k^{(i-1)}(v)}. \tag{5.I}\]
Moreover, by Lemma 4.3 applied to \(\mathfrak{L}^{(i)}\) and \(\boldsymbol{x}\coloneqq\boldsymbol{x}^{(i-1)}\), \(\boldsymbol{x}^{\prime}\coloneqq\boldsymbol{\varkappa}^{(i-1)}\) we have
\[\left\|\boldsymbol{x}^{(i)}(v)-\widehat{\mathfrak{L}}^{(i)}(G,\boldsymbol{\varkappa}^{(i-1)})(v)\right\|_{\infty}\] \[\leq\lambda^{(i)}\max\Big{\{}\left\|\boldsymbol{x}^{(i-1)}(w)-\boldsymbol{\varkappa}^{(i-1)}(w)\right\|_{\infty}\Bigm{|}w\in N_{G}[v]\Big{\}}\big{(}\deg_{G}(v)+1\big{)}\] \[\leq\lambda^{(i)}\max\Big{\{}2^{-k^{(i-1)}(w)}\Bigm{|}w\in N_{G}[v]\Big{\}}\big{(}\deg_{G}(v)+1\big{)}\] \[\leq\max\Big{\{}2^{-(k^{(i-1)}(w)-\lambda^{(i)}(\deg_{G}(v)+1))}\Bigm{|}w\in N_{G}[v]\Big{\}}.\]
We choose a \(w\in N_{G}[v]\) minimising \(k^{(i-1)}(w)\). Then
\[\left\|\boldsymbol{x}^{(i)}(v)-\widehat{\mathfrak{L}}^{(i)}(G,\boldsymbol{\varkappa}^{(i-1)})(v)\right\|_{\infty}\leq 2^{-(k^{(i-1)}(w)-\lambda^{(i)}(\deg_{G}(v)+1))}. \tag{5.J}\]
Combining (5.I) and (5.J) by the triangle inequality, we get
\[\left\|\boldsymbol{x}^{(i)}(v)-\boldsymbol{\varkappa}^{(i)}(v)\right\|_{\infty}\leq 2^{-k^{(i-1)}(v)}+2^{-(k^{(i-1)}(w)-\lambda^{(i)}(\deg_{G}(v)+1))}.\]
Observe that \(k^{(i-1)}(v)\geq k^{(i)}(v)+1\) and
\[k^{(i-1)}(w)-\lambda^{(i)}\cdot(\deg_{G}(v)+1)\] \[=\max\big{\{}k^{(i)}(v^{\prime})\bigm{|}v^{\prime}\in N_{G}[w] \big{\}}+\lambda^{(i)}\max\big{\{}\deg_{G}(v^{\prime})+1\bigm{|}v^{\prime}\in N _{G}[w]\big{\}}+1\] \[\qquad\qquad-\lambda^{(i)}\cdot(\deg_{G}(v)+1)\] \[\geq k^{(i)}(v)+\lambda^{(i)}\big{(}\deg_{G}(v)+1\big{)}+1- \lambda^{(i)}\cdot(\deg_{G}(v)+1)\qquad\qquad\text{ since }v\in N_{G}[w]\] \[=k^{(i)}(v)+1.\]
Thus
\[\left\|\boldsymbol{x}^{(i)}(v)-\boldsymbol{\varkappa}^{(i)}(v)\right\|_{\infty}\leq 2^{-k^{(i)}(v)-1}+2^{-k^{(i)}(v)-1}=2^{-k^{(i)}(v)},\]
which proves (5.H) and hence the theorem.
Since logics define queries, when comparing the expressiveness of graph neural networks with that of logics, it is best to focus on queries. Recall from Section 2.5 that we identified the \(\ell\)-labelled graphs and graphs with an \(\ell\)-dimensional Boolean signal. A unary query on the class \(\mathscr{SG}_{\ell}^{\mathrm{bool}}\) is an equivariant signal transformation from \(\mathscr{SG}_{\ell}^{\mathrm{bool}}\) to \(\mathscr{SG}_{1}^{\mathrm{bool}}\).
We say that a GNN \(\mathfrak{N}\)_computes_ a unary query \(\mathcal{G}:\mathscr{SG}_{\ell}^{\mathrm{bool}}\to\mathscr{SG}_{1}^{\mathrm{bool}}\) if for all \((G,\mathcal{G})\in\mathscr{SG}_{\ell}^{\mathrm{bool}}\) and all \(v\in V(G)\) it holds that
\[\begin{cases}\widetilde{\mathfrak{N}}(G,\mathcal{G})(v)\geq\frac{2}{3}&\text{ if }\mathcal{G}(G,\mathcal{G})(v)=1,\\ \widetilde{\mathfrak{N}}(G,\mathcal{G})(v)\leq\frac{1}{3}&\text{if }\mathcal{G}(G, \mathcal{G})(v)=0.\end{cases} \tag{5.K}\]
Note that if we allow \(\mathrm{lsig}\) or \(\mathrm{relu}\) activations, we can replace the \(\geq\frac{2}{3}\) and \(\leq\frac{1}{3}\) in (5.K) by \(=1\) and \(=0\) and thus require \(\widetilde{\mathfrak{N}}(G,\mathcal{G})(v)=\mathcal{G}(G,\mathcal{G})(v)\). We simply apply the transformation \(\mathrm{lsig}(3x-1)\) to the output. It maps the interval \((-\infty,\frac{1}{3}]\) to \(0\) and the interval \([\frac{2}{3},\infty)\) to \(1\). With other activations such as the logistic function, this is not possible, which is why we chose our more flexible definition.
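For illustration, here is a two-line sketch of this thresholding trick, assuming \(\mathrm{lsig}\) is the clipped identity \(\mathrm{lsig}(x)=\min(1,\max(0,x))\), which is what the argument requires:

```python
# Sketch: lsig is assumed to be the clipped identity min(1, max(0, x)).
def lsig(x: float) -> float:
    return min(1.0, max(0.0, x))

for x in (0.0, 1 / 3, 0.5, 2 / 3, 1.0):
    print(f"x = {x:.3f} -> lsig(3x - 1) = {lsig(3 * x - 1)}")
```

Every input below \(\frac{1}{3}\) lands on \(0\) and every input above \(\frac{2}{3}\) lands on \(1\), so the flexible acceptance condition (5.K) collapses to exact Boolean output.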
**Corollary 5.3**.: _Every unary query on \(\mathscr{SG}_{\ell}^{\mathrm{bool}}\) that is computable by a rational piecewise linear GNN is definable in \(\mathsf{GFO}+\mathsf{C}\)._
Note that this Corollary is Theorem 1.4 stated in the introduction.
The reader may wonder if the converse of the previous corollary holds, that is, if every query definable in \(\mathsf{GFO}+\mathsf{C}\) is computable by a rational piecewise linear GNN. It is not; we refer the reader to Remark 7.6.
_Remark 5.4_.: As mentioned earlier, there are versions of the theorem for all the extensions of basic GNNs that we discussed in Section 4. In particular, there is a version for graph level functions and the logic \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\), and also for GNNs with global readout and \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\).
## 6 The Non-Uniform Case: GNNs with Arbitrary Weights and Families of GNNs
Now we consider the general case where the weights in the neural networks are arbitrary real numbers. We also drop the assumption that the activation functions be piecewise linear, only requiring rpl approximability. The price we pay is non-uniformity on the side of the logic or the Boolean circuits, a slightly weaker approximation bound, and a boundedness assumption on the input signal.
**Theorem 6.1**.: _Let \(\mathfrak{N}\) be an rpl-approximable GNN of input dimension \(p\) and output dimension \(q\). Let \(\mathbf{X}_{1},\ldots,\mathbf{X}_{p}\) be r-schemas of type \(\mathsf{v}\to\mathsf{r}\), and let \(W,W^{\prime}\) be function variables of type \(\varnothing\to\mathsf{n}\)._
_Then there are r-expressions \(\mathsf{gnn}\)-\(\mathsf{eval}_{1}(x)\), \(\ldots\), \(\mathsf{gnn}\)-\(\mathsf{eval}_{q}(x)\) in \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}\) such that the following holds for all graphs \(G\) and assignments \(\omega\) over \(G\). Let \(\mathbf{x}\in\mathscr{S}_{p}(G)\) be the signal defined by_

\[\mathbf{x}(v)\coloneqq\Big{(}\langle\!\langle\mathbf{X}_{1}\rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\mathbf{X}_{p}\rangle\!\rangle^{(G,a)}\,(v)\Big{)}, \tag{6.A}\]
_and let \(\boldsymbol{y}=\widehat{\mathfrak{N}}(G,\boldsymbol{x})\). Assume that \(\left\|\boldsymbol{x}\right\|_{\infty}\leq\omega(W)\) and that \(\omega(W^{\prime})\neq 0\). Then for all \(v\in V(G)\),_
\[\left\|\boldsymbol{y}(v)-\left(\,\langle\!\langle\mathsf{gnn-eval}_{1}\rangle\!\rangle^{(G,a)}\,(v),\ldots,\langle\!\langle\mathsf{gnn-eval}_{q}\rangle\!\rangle^{(G,a)}\,(v)\right)\right\|_{\infty}\leq\frac{1}{\omega(W^{\prime})}. \tag{6.B}\]
Let us comment on the role of the two \(0\)-ary functions (that is, constants) \(W,W^{\prime}\). We introduce them to add flexibility in the bounds. Their values depend on the assignment \(\omega\), which means that we can freely choose them. For example, we can let \(\omega(W)=\omega(W^{\prime})=n\coloneqq|G|\). Then we get an approximation error of \(1/n\) for input signals bounded by \(n\). Or we could let \(\omega(W)=1\) and \(\omega(W^{\prime})=100\). Then we get an approximation error of \(1\%\) for Boolean input signals.
Since we move to a non-uniform regime anyway, to obtain the most general results we may as well go all the way to a non-uniform GNN model where we have different GNNs for every size of the input graphs.
We need additional terminology. We define the _bitsize_\(\operatorname{bsize}(\mathfrak{F})\) of a rational piecewise linear FNN \(\mathfrak{F}\) to be the sum of the bitsizes of its skeleton, all its weights and biases, and all its activations. We define the _weight_ of an arbitrary FNN \(\mathfrak{F}=(V,E,(\mathfrak{a}_{v})_{v\in V},\boldsymbol{w},\boldsymbol{b})\) to be
\[\operatorname{wt}(\mathfrak{F})\coloneqq|V|+\left\|\boldsymbol{w}\right\|_{\infty}+\left\|\boldsymbol{b}\right\|_{\infty}+\max_{v\in V}\big{(}\lambda(\mathfrak{a}_{v})+|\mathfrak{a}_{v}(0)|\big{)}.\]
Here \(\lambda(\mathfrak{a}_{v})\) denotes the minimal Lipschitz constant of the Lipschitz continuous function \(\mathfrak{a}_{v}\). The _size_\(\operatorname{size}(\mathfrak{F})\) of a rational piecewise linear FNN \(\mathfrak{F}\) is the maximum of its bitsize and its weight. The _depth_\(\operatorname{dp}(\mathfrak{F})\) of an FNN \(\mathfrak{F}\) is the depth of its skeleton, that is, the length of a longest path from an input node to an output node of \(\mathfrak{F}\).
The _weight_\(\operatorname{wt}(\mathfrak{N})\) of a GNN \(\mathfrak{N}\) is the sum of the weights of the FNNs for the message and combination functions of all layers of \(\mathfrak{N}\). The _bitsize_\(\operatorname{bsize}(\mathfrak{N})\) and the _size_\(\operatorname{size}(\mathfrak{N})\) of a rational piecewise linear GNN \(\mathfrak{N}\) is the sum of the (bit)sizes of all its FNNs. The _skeleton_ of a GNN \(\mathfrak{N}\) consists of the directed acyclic graphs underlying the FNNs for the message and combination functions of all layers of \(\mathfrak{N}\). Thus if two GNNs have the same skeleton they have the same number of layers and the same input and output dimensions on all layers, but they may have different activation functions and different weights. The _depth_\(\operatorname{dp}(\mathfrak{N})\) of a GNN \(\mathfrak{N}\) is the number of layers of \(\mathfrak{N}\) times the maximum depth of all its FNNs.
Let \(\mathcal{N}=(\mathfrak{N}^{(n)})_{n\in\mathbb{N}}\) be a family of GNNs. Suppose that the input dimension of \(\mathfrak{N}^{(n)}\) is \(p^{(n)}\) and the output dimension is \(q^{(n)}\). It will be convenient to call \((p^{(n)})_{n\in\mathbb{N}}\) the _input dimension_ of \(\mathcal{N}\) and \((q^{(n)})_{n\in\mathbb{N}}\) the _output dimension_. Then for every graph \(G\) of order \(n\) and every \(\boldsymbol{x}\in\mathcal{S}_{p(n)}(G)\) we let \(\mathcal{N}(G,\boldsymbol{x})\coloneqq\mathfrak{N}^{(n)}(G,\boldsymbol{x})\) and \(\widetilde{\mathcal{N}}(G,\boldsymbol{x})\coloneqq\widehat{\mathfrak{N}}^{( n)}(G,\boldsymbol{x})\). Thus \(\mathcal{N}\) computes a generalised form of signal transformation where the input and output dimension depend on the order of the input graph.
We say that \(\mathcal{N}\) is of _polynomial weight_ if there is a polynomial \(\pi(X)\) such that \(\operatorname{wt}(\mathfrak{N}^{(n)})\leq\pi(n)\) for all \(n\). _Polynomial (bit)size_ is defined similarly. The family \(\mathcal{N}\) is of _bounded depth_ if there is a \(d\in\mathbb{N}\) such that \(\operatorname{dp}(\mathfrak{N}^{(n)})\leq d\) for all \(n\). The family \(\mathcal{N}\) is _rpl approximable_ if there is a polynomial \(\pi^{\prime}(X,Y)\) such that for all \(n\in\mathbb{N}_{>0}\) and all
\(\varepsilon>0\), every activation function of \(\mathfrak{N}^{(n)}\) is \(\varepsilon\)-approximable by a rational piecewise linear function of bitsize at most \(\pi^{\prime}(\varepsilon^{-1},n)\).
**Theorem 6.2**.: _Let \(\mathscr{N}\) be an rpl-approximable polynomial-weight, bounded-depth family of GNNs of input dimension \((p^{(n)})_{n\in\mathbb{N}}\) and output dimension \((q^{(n)})_{n\in\mathbb{N}}\). Let \(\mathbf{X}\) be an r-schema of type \(\mathtt{vn}\to\mathtt{r}\), and let \(W,W^{\prime}\) be function variables of type \(\varnothing\to\mathtt{n}\)._
_Then there is an r-expression \(\mathtt{gnn}\text{-}\mathsf{eval}(x,y)\) in \(\mathsf{GFO}+\mathsf{C}_{\mathtt{n}\mathtt{u}}\) such that the following holds for all graphs \(G\) and assignments \(\mathpzc{a}\) over \(G\). Let \(n\coloneqq|G|\), and let \(\mathbf{x}\in\mathscr{S}_{p^{(n)}}(G)\) be the signal defined by_
\[\mathbf{x}(v)\coloneqq\Big{(}\left\langle\!\left\langle\mathbf{X}\right\rangle\! \right\rangle^{(G,a)}(v,0),\ldots,\left\langle\!\left\langle\mathbf{X}\right\rangle \!\right\rangle^{(G,a)}(v,p^{(n)}-1)\Big{)}. \tag{6.C}\]
_Assume that \(\left\|\mathbf{x}\right\|_{\infty}\leq\mathpzc{a}(W)\) and that \(\mathpzc{a}(W^{\prime})\neq 0\). Let \(\mathbf{y}=\widetilde{\mathscr{N}}(G,\mathbf{x})\in\mathscr{S}_{q^{(n)}}(G)\). Then for all \(v\in V(G)\),_

\[\left\|\mathbf{y}(v)-\big{(}\left\langle\!\left\langle\mathtt{gnn}\text{-}\mathsf{eval}\right\rangle\!\right\rangle^{(G,a)}(v,0),\ldots,\left\langle\!\left\langle\mathtt{gnn}\text{-}\mathsf{eval}\right\rangle\!\right\rangle^{(G,a)}(v,q^{(n)}-1)\big{)}\right\|_{\infty}\leq\frac{1}{\mathpzc{a}(W^{\prime})}. \tag{6.D}\]
Observe that Theorem 6.2 implies Theorem 6.1, because we can simply let \(\mathscr{N}\) be the family consisting of the same GNN for every \(n\). So we only need to prove Theorem 6.2. The basic idea of the proof is simple. We exploit the continuity of the functions computed by FNNs and GNNs not only in terms of the input signals but also in terms of the weights and the biases. This allows us to approximate the functions computed by GNNs with arbitrary real weights by GNNs with rational weights. However, the bitsize of the rationals we need to get a sufficiently precise approximation depends on the size of the input graph, and this leads to the non-uniformity.
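The following sketch (assuming numpy; the tiny two-layer ReLU network is made up for illustration) shows the approximation idea in action: rounding real weights to dyadic rationals with \(k\) bits after the binary point perturbs the output by an amount on the order of \(2^{-k}\), scaled by size-dependent constants, which is why the bitsize needed for a given precision, and hence the formula, grows with the input graph.

```python
# Sketch only: output drift caused by rounding weights to dyadic rationals.
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))
x = rng.normal(size=3)

def fnn(A, B, v):
    return B @ np.maximum(A @ v, 0.0)     # one ReLU hidden layer

for k in (4, 8, 16):
    dyadic = lambda M: np.round(M * 2.0**k) / 2.0**k   # k bits after the point
    drift = np.max(np.abs(fnn(W1, W2, x) - fnn(dyadic(W1), dyadic(W2), x)))
    print(f"k = {k:2d}: output drift {drift:.2e}")
```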
Before we delve into the proof, let us state one important corollary. Extending the definition for single GNNs in the obvious way, we say that a family \(\mathscr{N}=(\mathfrak{N}^{(n)})_{n\in\mathbb{N}}\) of GNNs _computes_ a unary query \(\mathscr{O}:\mathscr{SG}_{p}^{\mathrm{bool}}\to\mathscr{SG}_{1}^{\mathrm{bool}}\) on \(p\)-labelled graphs if for all \(n\in\mathbb{N}\), all \((G,\mathpzc{a})\in\mathscr{SG}_{p}^{\mathrm{bool}}\) with \(n=|G|\), and all \(v\in V(G)\) it holds that
\[\begin{cases}\widetilde{\mathfrak{N}}^{(n)}(G,\mathpzc{a})(v)\geq\frac{2}{3}& \text{if }\mathscr{O}(G,\mathpzc{a})(v)=1,\\ \widetilde{\mathfrak{N}}^{(n)}(G,\mathpzc{a})(v)\leq\frac{1}{3}&\text{if } \mathscr{O}(G,\mathpzc{a})(v)=0.\end{cases}\]
**Corollary 6.3**.: _Every unary query on \(\mathscr{SG}_{p}^{\mathrm{bool}}\) that is computable by an rpl-approximable polynomial-weight bounded-depth family of GNNs is definable in \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}\)._
_Remark 6.4_.: The exact analogues of Theorems 6.1 and 6.2 hold for GNNs with global readout and the logic \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\), with only small modifications of the proof.
### Bounds and Approximations for FNNs
In this section, we shall prove that we can approximate rpl approximable FNNs by rational piecewise linear FNNs whose size is bounded in terms of the approximation ratio. For this, we first need to establish bounds on the Lipschitz constant and growth of an FNN in terms of its structure, its activation functions, and its parameters.
Throughout this section, we let \(\mathfrak{A}=\big{(}V,E,(\mathfrak{a}_{v})_{v\in V}\big{)}\) be an FNN architecture of input dimension \(p\) and output dimension \(q\). We let \(d\) be the depth and \(\Delta\) the maximum in-degree of the directed graph \((V,E)\). Without loss of generality, we assume \(\Delta\geq 1\) and thus \(d\geq 1\). If \(\Delta=0\), we simply add a dummy edge of weight \(0\) to the network. Moreover, we let \(\lambda\in\mathbb{N}_{>0}\) be a Lipschitz constant for all activation functions \(\mathfrak{a}_{v}\) for \(v\in V\), and we let
\[\mu\coloneqq\max\big{\{}|\mathfrak{a}_{v}(0)|\;\big{|}\;v\in V\big{\}}.\]
For vectors \(\boldsymbol{x}\in\mathbb{R}^{p}\), \(\boldsymbol{w}\in\mathbb{R}^{E}\), \(\boldsymbol{b}\in\mathbb{R}^{V}\), we assume that \(\boldsymbol{x}=(x_{1},\ldots,x_{p})\), \(\boldsymbol{w}=(w_{e})_{e\in E}\), and \(\boldsymbol{b}=(b_{v})_{v\in V}\).
In the first two lemmas we analyse how the growth and variation of the functions \(g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\) and \(\mathfrak{A}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\) depend on the constants \(d,\Delta,\lambda,\mu\) and \(\left\|\boldsymbol{x}\right\|_{\infty},\left\|\boldsymbol{w}\right\|_{\infty},\left\|\boldsymbol{b}\right\|_{\infty}\) (more precisely than in Lemma 2.5).
**Lemma 6.5**.: _Let \(\gamma\coloneqq 2\Delta\lambda\max\{\lambda,\mu\}\). Then for all \(\boldsymbol{x}\in\mathbb{R}^{p}\), \(\boldsymbol{b}\in\mathbb{R}^{V}\), and \(\boldsymbol{w}\in\mathbb{R}^{E}\), and all \(v\in V\) of depth \(t\) we have_
\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\big{|}\leq\gamma^{t}(\left\|\boldsymbol{w}\right\|_{\infty}+1)^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+\left\|\boldsymbol{b}\right\|_{\infty}+1\big{)}. \tag{6.E}\]
Proof.: Note that for all \(x\in\mathbb{R}\) we have
\[\left|\mathfrak{a}_{v}(x)\right|\leq\lambda|x|+\mu. \tag{6.F}\]
For all input nodes \(X_{i}\) we have
\[\big{|}g_{\mathfrak{A},X_{i}}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w}) \big{|}=\left|x_{i}\right|\leq\left\|\boldsymbol{x}\right\|_{\infty}. \tag{6.G}\]
This implies (6.E) for \(t=0\).
_Claim 1._ For all nodes \(v\in V\) of depth \(t\geq 1\) we have
\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\big{|} \leq\big{(}\Delta\lambda\left\|\boldsymbol{w}\right\|_{\infty}\big{)}^{t} \left\|\boldsymbol{x}\right\|_{\infty}+\sum_{s=0}^{t-1}\big{(}\Delta\lambda \left\|\boldsymbol{w}\right\|_{\infty}\big{)}^{s}\big{(}\lambda\left\| \boldsymbol{b}\right\|_{\infty}+\mu\big{)}. \tag{6.H}\]
Proof.: We prove (6.H) by induction on \(t\geq 1\). Suppose that \(v\in V\) is a node of depth \(t\), and let \(v_{1},\ldots,v_{k}\) be its in-neighbours. Let \(b\coloneqq b_{v}\) and \(w_{i}\coloneqq w_{v_{i}v}\) for \(i\in[k]\). Moreover, let \(y_{i}\coloneqq g_{\mathfrak{A},v_{i}}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\) and \(\boldsymbol{y}=(y_{1},\ldots,y_{k})\). If \(t=1\), by (6.G) we have
\[\left\|\boldsymbol{y}\right\|_{\infty}\leq\left\|\boldsymbol{x}\right\|_{\infty} \tag{6.I}\]
If \(t>1\), by the induction hypothesis we have
\[\left\|\boldsymbol{y}\right\|_{\infty}\leq\big{(}\Delta\lambda\left\| \boldsymbol{w}\right\|_{\infty}\big{)}^{t-1}\left\|\boldsymbol{x}\right\|_{ \infty}+\sum_{s=0}^{t-2}\big{(}\Delta\lambda\left\|\boldsymbol{w}\right\|_{ \infty}\big{)}^{s}\big{(}\lambda\left\|\boldsymbol{b}\right\|_{\infty}+\mu \big{)}. \tag{6.J}\]
Thus
\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\big{|} =\Big{|}\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)}\Big{|}\]
\[\leq\lambda\Big{|}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{|}+\mu\qquad\text{by (6.F)}\] \[\leq\lambda\left\|\boldsymbol{b}\right\|_{\infty}+\Delta\lambda\left\|\boldsymbol{w}\right\|_{\infty}\left\|\boldsymbol{y}\right\|_{\infty}+\mu.\]

Plugging in (6.I) if \(t=1\) and (6.J) if \(t>1\), we obtain (6.H). This proves the claim.

Since \(t\leq 2^{t}\) and \(\lambda\left\|\boldsymbol{b}\right\|_{\infty}+\mu\leq\max\{\lambda,\mu\}\big{(}\left\|\boldsymbol{b}\right\|_{\infty}+1\big{)}\), the bound (6.H) of Claim 1 implies the assertion (6.E).

**Lemma 6.6**.: _For all \(\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{R}^{p}\), \(\boldsymbol{b}\in\mathbb{R}^{V}\), and \(\boldsymbol{w}\in\mathbb{R}^{E}\) we have_

\[\big{\|}\mathfrak{A}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-\mathfrak{A}(\boldsymbol{x}^{\prime},\boldsymbol{b},\boldsymbol{w})\big{\|}_{\infty}\leq\big{(}\Delta\lambda(\left\|\boldsymbol{w}\right\|_{\infty}+1)\big{)}^{d}\left\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right\|_{\infty}.\]

Proof.: We prove by induction on \(t\) that for all nodes \(v\in V\) of depth \(t\) we have

\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-g_{\mathfrak{A},v}(\boldsymbol{x}^{\prime},\boldsymbol{b},\boldsymbol{w})\big{|}\leq\big{(}\Delta\lambda\left\|\boldsymbol{w}\right\|_{\infty}\big{)}^{t}\left\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right\|_{\infty}. \tag{6.K}\]

Applied to the output nodes \(Y_{i}\) of depth \(\leq d\), this yields the assertion of the lemma. For the input nodes \(X_{i}\) (of depth \(t=0\)) we have \(\big{|}g_{\mathfrak{A},X_{i}}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-g_{\mathfrak{A},X_{i}}(\boldsymbol{x}^{\prime},\boldsymbol{b},\boldsymbol{w})\big{|}=|x_{i}-x_{i}^{\prime}|\leq\left\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right\|_{\infty}\). For the inductive step, let \(v\in V\) be a node of depth \(t>0\), and let \(v_{1},\ldots,v_{k}\) be its in-neighbours. Let \(b\coloneqq b_{v}\) and \(w_{i}\coloneqq w_{v_{i}v}\) for \(i\in[k]\). Moreover, let
\(\mathbf{y}=(y_{1},\ldots,y_{k})\) and \(\mathbf{y}^{\prime}=(y^{\prime}_{1},\ldots,y^{\prime}_{k})\) with \(y_{i}\coloneqq g_{\mathfrak{A},v_{i}}(\mathbf{x},\mathbf{b},\mathbf{w})\) and \(y^{\prime}_{i}\coloneqq g_{\mathfrak{A},v_{i}}(\mathbf{x}^{\prime},\mathbf{b},\mathbf{w})\). Then
\[\big{|}g_{\mathfrak{A},v}(\mathbf{x},\mathbf{b},\mathbf{w})-g_{\mathfrak{A},v}(\mathbf{x}^{\prime},\mathbf{b},\mathbf{w})\big{|} =\Big{|}\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)}-\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y^{\prime}_{i}\Big{)}\Big{|}\] \[\leq\lambda\Big{(}\sum_{i=1}^{k}|w_{i}|\,|y_{i}-y^{\prime}_{i}|\Big{)}\] \[\leq\lambda\Delta\left\|\mathbf{w}\right\|_{\infty}\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}.\]
Since by the induction hypothesis we have \(\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}\leq(\lambda\Delta\left\|\mathbf{ w}\right\|_{\infty})^{t-1}\left\|\mathbf{x}-\mathbf{x}^{\prime}\right\|_{\infty}\), the assertion (6.K) follows.
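A numerical spot check of this Lipschitz bound, not part of the proof (it assumes numpy, and the tiny ReLU network with \(\Delta=3\), \(d=2\), \(\lambda=1\) is made up): at the output, (6.K) reads \(|F(\boldsymbol{x})-F(\boldsymbol{x}^{\prime})|\leq(\Delta\lambda\left\|\boldsymbol{w}\right\|_{\infty})^{d}\left\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right\|_{\infty}\).

```python
# Sketch only: spot check of the per-node Lipschitz bound (6.K) at the output.
import numpy as np

rng = np.random.default_rng(2)
Delta, d = 3, 2
W1, W2 = rng.normal(size=(3, 3)), rng.normal(size=(1, 3))

def fnn(v):
    return W2 @ np.maximum(W1 @ v, 0.0)   # ReLU is 1-Lipschitz (lambda = 1)

x = rng.normal(size=3)
xp = x + rng.uniform(-1e-3, 1e-3, size=3)

lhs = np.max(np.abs(fnn(x) - fnn(xp)))
w_inf = max(np.abs(W1).max(), np.abs(W2).max())
rhs = (Delta * 1.0 * w_inf) ** d * np.max(np.abs(x - xp))
print(lhs <= rhs, float(lhs), float(rhs))
```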
**Lemma 6.7**.: _Let \(\nu\coloneqq(4\Delta\lambda\gamma)^{d}\), where \(\gamma\) is the constant of Lemma 6.5. Then for all \(\varepsilon\in\mathbb{R}\), \(\mathbf{x}\in\mathbb{R}^{p}\), \(\mathbf{b},\mathbf{b}^{\prime}\in\mathbb{R}^{V}\), and \(\mathbf{w},\mathbf{w}^{\prime}\in\mathbb{R}^{E}\) with_
\[0\leq\max\big{\{}\|\mathbf{b}-\mathbf{b}^{\prime}\|_{\infty},\|\mathbf{w}-\mathbf{w}^{\prime }\|_{\infty}\big{\}}\leq\varepsilon\leq 1 \tag{6.L}\]
_we have_
\[\big{\|}\mathfrak{A}(\mathbf{x},\mathbf{b},\mathbf{w})-\mathfrak{A}(\mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})\big{\|}_{\infty}\leq\nu\big{(}\left\|\mathbf{w}\right\|_{\infty }+1\big{)}^{d}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\|\mathbf{b}\right\|_{ \infty}+1\big{)}\varepsilon.\]
Proof.: Let \(\mathbf{x}\in\mathbb{R}^{p}\) and \(\varepsilon\in[0,1]\), \(\mathbf{b},\mathbf{b}^{\prime}\in\mathbb{R}^{V}\), \(\mathbf{w},\mathbf{w}^{\prime}\in\mathbb{R}^{E}\) such that (6.L) holds.
We shall prove by induction on \(t\) that for all nodes \(v\in V\) of depth \(t\) we have
\[\big{|}g_{\mathfrak{A},v}(\mathbf{x},\mathbf{b},\mathbf{w})-g_{\mathfrak{A},v}(\mathbf{x},\bm {b}^{\prime},\mathbf{w}^{\prime})\big{|}\leq\big{(}4\Delta\lambda\gamma(\left\|\bm {w}\right\|_{\infty}+1)\big{)}^{t}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\| \mathbf{b}\right\|_{\infty}+1\big{)}\varepsilon. \tag{6.M}\]
Applied to the output nodes \(v=Y_{i}\) of depth \(\leq d\), this yields the assertion of the lemma.
Nodes of depth \(t=0\) are input nodes, and we have
\[\big{|}g_{\mathfrak{A},X_{i}}(\mathbf{x},\mathbf{b},\mathbf{w})-g_{\mathfrak{A},X_{i}}( \mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})\big{|}=|x_{i}-x_{i}|=0. \tag{6.N}\]
For the inductive step, let \(v\in V\) be a node of depth \(t>0\), and let \(v_{1},\ldots,v_{k}\) be its in-neighbours. Let \(b\coloneqq b_{v},b^{\prime}\coloneqq b^{\prime}_{v}\) and \(w_{i}\coloneqq w_{v_{i}v},w^{\prime}_{i}\coloneqq w^{\prime}_{v_{i}v}\) for \(i\in[k]\). Moreover, let \(\mathbf{y}=(y_{1},\ldots,y_{k})\) and \(\mathbf{y}^{\prime}=(y^{\prime}_{1},\ldots,y^{\prime}_{k})\) with \(y_{i}\coloneqq g_{\mathfrak{A},v_{i}}(\mathbf{x},\mathbf{b},\mathbf{w})\) and \(y^{\prime}_{i}\coloneqq g_{\mathfrak{A},v_{i}}(\mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^ {\prime})\).
_Claim 1_.: \[\big{|}g_{\mathfrak{A},v}(\mathbf{x},\mathbf{b},\mathbf{w})-g_{\mathfrak{A},v}(\mathbf{x},\bm {b}^{\prime},\mathbf{w}^{\prime})\big{|}\leq\Delta\lambda\left\|\mathbf{y}-\mathbf{y}^{ \prime}\right\|_{\infty}\left\|\mathbf{w}\right\|_{\infty}+\Delta\lambda\big{(} \left\|\mathbf{y}\right\|_{\infty}+\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}+1 \big{)}\varepsilon\]
Proof.: By the definition of \(g_{\mathfrak{A},v}\) and the Lipschitz continuity of the activation functions we have
\[\big{|}g_{\mathfrak{A},v}(\mathbf{x},\mathbf{b},\mathbf{w})-g_{\mathfrak{A},v }(\mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})\big{|} =\Big{|}\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)} -\mathfrak{a}_{v}\Big{(}b^{\prime}+\sum_{i=1}^{k}w^{\prime}_{i}y^{\prime}_{i} \Big{)}\Big{|}\] \[\leq\lambda\cdot\Big{(}|b-b^{\prime}|+\sum_{i=1}^{k}|w_{i}y_{i}-w^ {\prime}_{i}y^{\prime}_{i}|\Big{)}\]
Observe that \(\left|b-b^{\prime}\right|\leq\left\|\mathbf{b}-\mathbf{b}^{\prime}\right\|_{\infty}\leq\varepsilon\) and
\[y_{i}w_{i}-y_{i}^{\prime}w_{i}^{\prime} =(y_{i}-y_{i}^{\prime})w_{i}+y_{i}^{\prime}(w_{i}-w_{i}^{\prime})\] \[=(y_{i}-y_{i}^{\prime})w_{i}+(y_{i}^{\prime}-y_{i})(w_{i}-w_{i}^{ \prime})+y_{i}(w_{i}-w_{i}^{\prime})\] \[\leq\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}\left\|\mathbf{w} \right\|_{\infty}+\varepsilon\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}+ \varepsilon\left\|\mathbf{y}\right\|_{\infty}.\]
Hence
\[\left|g_{\mathfrak{A},v}(\mathbf{x},\mathbf{b},\mathbf{w})-g_{\mathfrak{A},v} (\mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})\right| \leq\lambda\Big{(}\varepsilon+\Delta\big{(}\left\|\mathbf{y}-\mathbf{y}^ {\prime}\right\|_{\infty}\left\|\mathbf{w}\right\|_{\infty}+\left\|\mathbf{y}-\mathbf{y}^ {\prime}\right\|_{\infty}\varepsilon+\left\|\mathbf{y}\right\|_{\infty} \varepsilon\big{)}\Big{)}\] \[\leq\Delta\lambda\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty} \left\|\mathbf{w}\right\|_{\infty}+\Delta\lambda\big{(}\left\|\mathbf{y}\right\|_{ \infty}+\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}+1\big{)}\varepsilon.\]
This proves the claim.
By the inductive hypothesis (6.M), we have
\[\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}\leq\big{(}4\Delta\lambda \gamma(\left\|\mathbf{w}\right\|_{\infty}+1)\big{)}^{t-1}\big{(}\left\|\mathbf{x} \right\|_{\infty}+\left\|\mathbf{b}\right\|_{\infty}+1\big{)}\varepsilon. \tag{6.O}\]
Thus
\[\Delta\lambda\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}\left\|\mathbf{w} \right\|_{\infty}\leq 4^{t-1}\big{(}\Delta\lambda\gamma(\left\|\mathbf{w} \right\|_{\infty}+1)\big{)}^{t}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\| \mathbf{b}\right\|_{\infty}+1\big{)}\varepsilon \tag{6.P}\]
and
\[\Delta\lambda\left\|\mathbf{y}-\mathbf{y}^{\prime}\right\|_{\infty}\varepsilon \leq\Delta\lambda\big{(}4\Delta\lambda\gamma(\left\|\mathbf{w} \right\|_{\infty}+1)\big{)}^{t-1}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\| \mathbf{b}\right\|_{\infty}+1\big{)}\varepsilon^{2}\] \[\leq 4^{t-1}\big{(}\Delta\lambda\gamma(\left\|\mathbf{w}\right\|_{ \infty}+1)\big{)}^{t}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\|\mathbf{b} \right\|_{\infty}+1\big{)}\varepsilon\qquad\quad\text{ because }\varepsilon\leq 1. \tag{6.Q}\]
By Lemma 6.5 we have \(\left\|\mathbf{y}\right\|_{\infty}\leq\gamma^{t-1}\big{(}\left\|\mathbf{w}\right\|_{ \infty}+1\big{)}^{t-1}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\|\mathbf{b} \right\|_{\infty}+1\big{)}\) and thus
\[\Delta\lambda\left\|\mathbf{y}\right\|_{\infty}\varepsilon \leq\Delta\lambda\gamma^{t-1}\big{(}\left\|\mathbf{w}\right\|_{ \infty}+1\big{)}^{t-1}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\|\mathbf{b} \right\|_{\infty}+1\big{)}\varepsilon\] \[\leq\big{(}\Delta\lambda\gamma(\left\|\mathbf{w}\right\|_{\infty}+1) \big{)}^{t}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\|\mathbf{b}\right\|_{\infty} +1\big{)}\varepsilon \tag{6.R}\]
Plugging (6.P), (6.Q), and (6.R) into Claim 1, we get
\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b}^{\prime},\boldsymbol{w}^{\prime})\big{|} \leq\big{(}4^{t-1}+4^{t-1}+1+1\big{)}\big{(}\Delta\lambda\gamma(\left\|\boldsymbol{w}\right\|_{\infty}+1)\big{)}^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+\left\|\boldsymbol{b}\right\|_{\infty}+1\big{)}\varepsilon\] \[\leq 4^{t}\big{(}\Delta\lambda\gamma(\left\|\boldsymbol{w}\right\|_{\infty}+1)\big{)}^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+\left\|\boldsymbol{b}\right\|_{\infty}+1\big{)}\varepsilon.\]
This proves (6.M) and thus completes the proof of the lemma.
**Lemma 6.8**.: _Let \(\beta=(4\Delta\lambda\gamma)^{d}\), where \(\gamma\) is the constant of Lemma 6.5. Let \(\varepsilon>0\). For all \(v\in V\), let \(\mathfrak{a}_{v}^{\prime}:\mathbb{R}\to\mathbb{R}\) be an \(\varepsilon\)-approximation of \(\mathfrak{a}_{v}\) that is Lipschitz continuous with constant \(2\lambda\), and let \(\mathfrak{A}^{\prime}=(V,E,(\mathfrak{a}_{v}^{\prime})_{v\in V})\). Then for all \(\mathbf{x}\in\mathbb{R}^{p}\), \(\mathbf{b}\in\mathbb{R}^{V}\), and \(\mathbf{w}\in\mathbb{R}^{E}\),_
\[\left\|\mathfrak{A}(\mathbf{x},\mathbf{b},\mathbf{w})-\mathfrak{A}^{\prime}(\mathbf{x},\mathbf{b}, \mathbf{w})\right\|_{\infty}\leq\beta\big{(}\left\|\mathbf{w}\right\|_{\infty}+1 \big{)}^{d}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\|\mathbf{b}\right\|_{\infty} +1\big{)}\varepsilon.\]
Proof.: We shall prove by induction on \(t\) that for all nodes \(v\in V\) of depth \(t\) we have
\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-g_{\mathfrak{A}^{\prime},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\big{|}\leq(4\Delta\lambda\gamma)^{t}\big{(}\left\|\boldsymbol{w}\right\|_{\infty}+1\big{)}^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+\left\|\boldsymbol{b}\right\|_{\infty}+1\big{)}\varepsilon. \tag{6.S}\]
This yields the assertion of the lemma.
Nodes of depth \(t=0\) are input nodes, and we have
\[\big{|}g_{\mathfrak{A},X_{i}}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-g_{ \mathfrak{A}^{\prime},X_{i}}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w}) \big{|}=|x_{i}-x_{i}|=0.\]
For the inductive step, let \(v\in V\) be a node of depth \(t>0\), and let \(v_{1},\ldots,v_{k}\) be its in-neighbours. Let \(b\coloneqq b_{v}\) and \(w_{i}\coloneqq w_{v_{i}v}\) for \(i\in[k]\). Moreover, let \(\boldsymbol{y}=(y_{1},\ldots,y_{k})\) and \(\boldsymbol{y}^{\prime}=(y_{1}^{\prime},\ldots,y_{k}^{\prime})\) with \(y_{i}\coloneqq g_{\mathfrak{A},v_{i}}(\boldsymbol{x},\boldsymbol{b}, \boldsymbol{w})\) and \(y_{i}^{\prime}\coloneqq g_{\mathfrak{A}^{\prime},v_{i}}(\boldsymbol{x}, \boldsymbol{b},\boldsymbol{w})\).
_Claim 1_.: \[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-g_{ \mathfrak{A}^{\prime},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\big{|} \leq 2\gamma^{t}\big{(}\|\boldsymbol{w}\|+1\big{)}^{t}\big{(}\| \boldsymbol{x}\|_{\infty}+\|\boldsymbol{b}\|_{\infty}+1\big{)}\varepsilon+2 \Delta\lambda\left\|\boldsymbol{w}\right\|_{\infty}\left\|\boldsymbol{y}- \boldsymbol{y}^{\prime}\right\|_{\infty}\]
Proof.: We have
\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})-g_{ \mathfrak{A}^{\prime},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\big{|} =\Big{|}\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)} -\mathfrak{a}^{\prime}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}^{\prime}\Big{)}\Big{|} \tag{6.T}\] \[\leq\Big{|}\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)} -\mathfrak{a}^{\prime}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)}\Big{|}\] \[\quad+\Big{|}\mathfrak{a}^{\prime}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i }y_{i}\Big{)}-\mathfrak{a}^{\prime}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}^{ \prime}\Big{)}\Big{|}\]
By Lemma 6.5 we have
\[\Big{|}\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)}\Big{|}=\big{|} g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w})\big{|} \leq\gamma^{t}\big{(}\|\boldsymbol{w}\|+1\big{)}^{t}\big{(}\| \boldsymbol{x}\|_{\infty}+\|\boldsymbol{b}\|_{\infty}+1\big{)}. \tag{6.U}\]
Since \(\mathfrak{a}^{\prime}_{v}\)\(\varepsilon\)-approximates \(\mathfrak{a}_{v}\), this implies
\[\Big{|}\mathfrak{a}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)}-\mathfrak{a}^{\prime}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)}\Big{|}\leq 2\gamma^{t}\big{(}\|\boldsymbol{w}\|+1\big{)}^{t}\big{(}\|\boldsymbol{x}\|_{\infty}+\|\boldsymbol{b}\|_{\infty}+1\big{)}\varepsilon. \tag{6.V}\]
Furthermore, by the Lipschitz continuity of \(\mathfrak{a}^{\prime}_{v}\) we have
\[\Big{|}\mathfrak{a}^{\prime}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}\Big{)}- \mathfrak{a}^{\prime}_{v}\Big{(}b+\sum_{i=1}^{k}w_{i}y_{i}^{\prime}\Big{)} \Big{|}\leq 2\lambda\sum_{i=1}^{k}|w_{i}|\cdot|y_{i}-y_{i}^{\prime}|\leq 2 \Delta\lambda\left\|\boldsymbol{w}\right\|_{\infty}\left\|\boldsymbol{y}- \boldsymbol{y}^{\prime}\right\|_{\infty}. \tag{6.W}\]
The assertion of the claim follows from (6.T), (6.V), and (6.W).
By the inductive hypothesis (6.S), we have
\[\left\|\boldsymbol{y}-\boldsymbol{y}^{\prime}\right\|_{\infty}\leq(4\Delta\lambda\gamma)^{t-1}\big{(}\|\boldsymbol{w}\|_{\infty}+1\big{)}^{t-1}\big{(}\|\boldsymbol{x}\|_{\infty}+\|\boldsymbol{b}\|_{\infty}+1\big{)}\varepsilon.\]
Thus by the claim,
\[\big{|}g_{\mathfrak{A},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{ w})-g_{\mathfrak{A}^{\prime},v}(\boldsymbol{x},\boldsymbol{b},\boldsymbol{w}) \big{|} \leq 2\gamma^{t}\big{(}\|\boldsymbol{w}\|+1\big{)}^{t}\big{(}\| \boldsymbol{x}\|_{\infty}+\|\boldsymbol{b}\|_{\infty}+1\big{)}\varepsilon+2 \Delta\lambda\left\|\boldsymbol{w}\right\|_{\infty}\left\|\boldsymbol{y}- \boldsymbol{y}^{\prime}\right\|_{\infty}\] \[\leq 2\gamma^{t}\big{(}\|\boldsymbol{w}\|+1\big{)}^{t}\big{(}\| \boldsymbol{x}\|_{\infty}+\|\boldsymbol{b}\|_{\infty}+1\big{)}\varepsilon\] \[\quad+2\cdot 4^{t-1}(\Delta\lambda\gamma)^{t}\big{(}\left\| \boldsymbol{w}\right\|_{\infty}+1\big{)}^{t}\big{(}\left\|\boldsymbol{x} \right\|_{\infty}+\|\boldsymbol{b}\|_{\infty}+1\big{)}\varepsilon\] \[\leq(4\gamma\Delta\lambda)^{t}\big{(}\left\|\boldsymbol{w} \right\|_{\infty}+1\big{)}^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+ \left\|\boldsymbol{b}\right\|_{\infty}+1\big{)}\varepsilon\]
This proves (6.S) and hence the lemma.
**Lemma 6.9**.: _Let \(f:\mathbb{R}\to\mathbb{R}\) be Lipschitz continuous with constant \(\lambda>0\) such that \(f\) is rpl approximable. Then for every \(\varepsilon>0\) there is a rational piecewise linear function \(L\) of bitsize polynomial in \(\varepsilon^{-1}\) such that \(L\) is an \(\varepsilon\)-approximation of \(f\) and \(L\) is Lipschitz continuous with constant \((1+\varepsilon)\lambda\)._
Proof.: Let \(0<\varepsilon\leq 1\) and \(\varepsilon^{\prime}\coloneqq\frac{\varepsilon}{10}\). Let \(L^{\prime}\) be a piecewise linear \(\varepsilon^{\prime}\)-approximation of \(f\) of bitsize polynomial in \(\varepsilon^{-1}\). Let \(t_{1}<\ldots<t_{n}\) be the thresholds of \(L^{\prime}\), and let \(a_{0},\ldots,a_{n}\) and \(b_{0},\ldots,b_{n}\) be its slopes and constants. Then \(|a_{0}|\leq(1+\varepsilon)\lambda\); otherwise the slope of the linear function \(a_{0}x+b_{0}\) would be too large (in absolute value) to approximate the function \(f\), whose slope is bounded by \(\lambda\). For the same reason, \(|a_{n}|\leq(1+\varepsilon)\lambda\).
Let \(s\coloneqq t_{1}\) and \(s^{\prime}\coloneqq t_{n}\). We subdivide the interval \([s,s^{\prime}]\) into sufficiently small subintervals (of length at most \(\varepsilon^{\prime}\cdot\lambda^{-1}\)). Within each such interval, \(f\) does not change much, because it is Lipschitz continuous, and we can approximate it sufficiently closely by a linear function with parameters whose bitsize is polynomially bounded in \(\varepsilon^{-1}\). The slope of these linear functions will not be significantly larger than \(\lambda\), because the slope of \(f\) is bounded by \(\lambda\). We can combine all these linear pieces with the linear functions \(a_{0}x+b_{0}\) for the interval \((-\infty,s]\) and \(a_{n}x+b_{n}\) for the interval \([s^{\prime},\infty)\) to obtain the desired piecewise linear approximation of \(f\).
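A sketch of this construction (assuming numpy; \(\tanh\) stands in for an arbitrary \(1\)-Lipschitz rpl-approximable activation): sample \(f\) on a grid of spacing \(\varepsilon/\lambda\) and interpolate linearly; Lipschitz continuity keeps the error below \(\varepsilon\) on \([s,s^{\prime}]\).

```python
# Sketch only: piecewise linear approximation of a Lipschitz function by
# linear interpolation on a grid of spacing eps / lam.
import numpy as np

f, lam, eps = np.tanh, 1.0, 0.01
s, s_ = -4.0, 4.0
grid = np.arange(s, s_ + eps / lam, eps / lam)    # grid spacing eps / lam

xs = np.linspace(s, s_, 10_000)
L = np.interp(xs, grid, f(grid))                  # piecewise linear interpolant
print(np.max(np.abs(f(xs) - L)))                  # stays below eps
```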
Now we are ready to prove the main result of this subsection.
**Lemma 6.10**.: _For every \(d\in\mathbb{N}_{>0}\) there is a polynomial \(\pi^{\prime}(X,Y)\) such that the following holds. Let \(\mathfrak{F}=\big{(}V,E,(\mathfrak{a}_{v})_{v\in V},\boldsymbol{w},\boldsymbol{ b}\big{)}\) be an rpl-approximable FNN architecture of depth \(d\). Let \(\varepsilon>0\). Then there exists a rational piecewise-linear FNN \(\mathfrak{F}^{\prime}=\big{(}V,E,(\mathfrak{a}^{\prime}_{v})_{v\in V}, \boldsymbol{w}^{\prime},\boldsymbol{b}^{\prime}\big{)}\) of size at most \(\pi^{\prime}\big{(}\varepsilon^{-1},\operatorname{wt}(\mathfrak{F})\big{)}\) such that for all \(v\in V\) it holds that \(\lambda(\mathfrak{a}^{\prime}_{v})\leq 2\lambda(\mathfrak{a}_{v})\) and for all \(\boldsymbol{x}\in\mathbb{R}^{p}\) it holds that_
\[\big{\|}\mathfrak{F}(\boldsymbol{x})-\mathfrak{F}^{\prime}(\boldsymbol{x}) \big{\|}_{\infty}\leq\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}\varepsilon.\]
Note that \(\mathfrak{F}^{\prime}\) has the same skeleton as \(\mathfrak{F}\).
Proof.: Without loss of generality we assume \(\varepsilon\leq 1\). Let \(\mathfrak{A}\coloneqq\big{(}V,E,(\mathfrak{a}_{v})_{v\in V}\big{)}\). Define the parameters \(\Delta,\lambda,\mu\) with respect to \(\mathfrak{A}\) as before. Note that \(\Delta,\lambda,\mu\leq\operatorname{wt}(\mathfrak{F})\). Choose the constants \(\gamma\) according to Lemma 6.5, \(\nu\) according to Lemma 6.7, and \(\beta\) according to Lemma 6.8 and note that for fixed \(d\) they depend polynomially on \(\Delta,\lambda,\mu\) and hence on \(\operatorname{wt}(\mathfrak{F})\).
Let \(\alpha\coloneqq 2\nu\big{(}\left\|\mathbf{w}\right\|_{\infty}+1\big{)}^{d}\big{(}\left\|\mathbf{b}\right\|_{\infty}+1\big{)}\). Let \(\mathbf{w}^{\prime}\in\mathbb{Z}\big{[}\frac{1}{2}\big{]}^{E}\), \(\mathbf{b}^{\prime}\in\mathbb{Z}\big{[}\frac{1}{2}\big{]}^{V}\) such that \(\|\mathbf{w}-\mathbf{w}^{\prime}\|_{\infty}\leq\frac{\varepsilon}{\alpha}\) and \(\|\mathbf{b}-\mathbf{b}^{\prime}\|_{\infty}\leq\frac{\varepsilon}{\alpha}\). Clearly, we can choose such \(\mathbf{w}^{\prime}=(w_{e}^{\prime})_{e\in E}\) and \(\mathbf{b}^{\prime}=(b_{v}^{\prime})_{v\in V}\) such that all their entries have bitsize polynomial in \(\frac{\alpha}{\varepsilon}\), which is polynomial in \(\varepsilon^{-1}\) and in \(\operatorname{wt}(\mathfrak{F})\). Then by Lemma 6.7, for all \(\mathbf{x}\in\mathbb{R}^{p}\) we have
\[\big{\|}\mathfrak{A}(\mathbf{x},\mathbf{b},\mathbf{w})-\mathfrak{A}(\mathbf{x},\mathbf{b}^{\prime },\mathbf{w}^{\prime})\big{\|}_{\infty}\leq\nu\big{(}\left\|\mathbf{w}\right\|_{\infty }+1\big{)}^{d}\big{(}\left\|\mathbf{x}\right\|_{\infty}+\left\|\mathbf{b}\right\|_{ \infty}+1\big{)}\frac{\varepsilon}{\alpha}\leq\big{(}\left\|\mathbf{x}\right\|_{ \infty}+1\big{)}\frac{\varepsilon}{2},\]
Let \(\alpha^{\prime}\coloneqq 2\beta\big{(}\left\|\mathbf{w}^{\prime}\right\|_{\infty}+1\big{)}^{d}\big{(}\left\|\mathbf{b}^{\prime}\right\|_{\infty}+1\big{)}\). For every \(v\in V\), we let \(\mathfrak{a}_{v}^{\prime}\) be a rational piecewise-linear function of bitsize polynomial in \(\varepsilon^{-1}\) that is an \(\frac{\varepsilon}{\alpha^{\prime}}\)-approximation of \(\mathfrak{a}_{v}\) and Lipschitz continuous with constant \(2\lambda\). Such an \(\mathfrak{a}_{v}^{\prime}\) exists by Lemma 6.9, because \(\mathfrak{a}_{v}\) is rpl-approximable and Lipschitz continuous with constant \(\lambda\). Let \(\mathfrak{A}^{\prime}\coloneqq\big{(}V,E,(\mathfrak{a}_{v}^{\prime})_{v\in V}\big{)}\). By Lemma 6.8, for all \(\mathbf{x}\in\mathbb{R}^{p}\) we have
\[\big{\|}\mathfrak{A}(\mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})-\mathfrak{A}^{ \prime}(\mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})\big{\|}_{\infty}\leq\beta \big{(}\left\|\mathbf{w}^{\prime}\right\|_{\infty}+1\big{)}^{d}\big{(}\left\|\mathbf{ x}\right\|_{\infty}+\left\|\mathbf{b}^{\prime}\right\|_{\infty}+1\big{)}\frac{ \varepsilon}{\alpha^{\prime}}\leq\big{(}\left\|\mathbf{x}\right\|_{\infty}+1\big{)} \frac{\varepsilon}{2}.\]
Overall,
\[\big{\|}\mathfrak{F}(\mathbf{x})-\mathfrak{F}^{\prime}(\mathbf{x})\big{\|}_ {\infty} =\big{\|}\mathfrak{A}(\mathbf{x},\mathbf{b},\mathbf{w})-\mathfrak{A}^{ \prime}(\mathbf{x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})\big{\|}_{\infty}\] \[\leq\big{\|}\mathfrak{A}(\mathbf{x},\mathbf{b},\mathbf{w})-\mathfrak{A}(\mathbf{ x},\mathbf{b}^{\prime},\mathbf{w}^{\prime})\big{\|}_{\infty}+\big{\|}\mathfrak{A}(\mathbf{x}, \mathbf{b}^{\prime},\mathbf{w}^{\prime})-\mathfrak{A}^{\prime}(\mathbf{x},\mathbf{b}^{\prime },\mathbf{w}^{\prime})\big{\|}_{\infty}\] \[\leq\big{(}\left\|\mathbf{x}\right\|_{\infty}+1\big{)}\varepsilon.\]
### Bounds and Approximations for GNNs
We start with a more explicit version of Lemmas 4.2 and 4.3, the growth bounds for GNN layers. Recall that the depth of a GNN layer is the maximum of the depths of the FNNs for the combination and the message function.
**Lemma 6.11**.: _For every \(d\) there is a polynomial \(\pi(X)\) such that the following holds. Let \(\mathfrak{L}\) be a GNN layer of depth \(d\), and let \(p\) be the input dimension of \(\mathfrak{L}\). Then for all graphs \(G\) and all signals \(\mathbf{x}\in\mathcal{S}_{p}(G)\) we have_
\[\big{\|}\mathfrak{L}(G,\mathbf{x})\big{\|}_{\infty}\leq\pi\big{(}\operatorname{wt} (\mathfrak{L})\big{)}\big{(}\left\|\mathbf{x}\right\|_{\infty}+1\big{)}|G|. \tag{6.X}\]
Proof.: The proof of Lemma 4.2 yields
\[\big{\|}\mathfrak{L}(G,\mathbf{x})(v)\big{\|}_{\infty}\leq 2\gamma_{\mathsf{msg}} \gamma_{\mathsf{comb}}\big{(}\left\|\mathbf{x}\right\|_{\infty}+1\big{)}|G|,\]
where \(\gamma_{\mathsf{msg}}\), \(\gamma_{\mathsf{comb}}\) are growth bounds for the message and combination functions of \(\mathfrak{L}\). It follows from Lemma 6.5 that \(\gamma_{\mathsf{msg}}\), \(\gamma_{\mathsf{comb}}\) can be chosen polynomial in the weight of \(\mathfrak{L}\).
**Lemma 6.12**.: _For every \(d\) there is a polynomial \(\pi(X)\) such that the following holds. Let \(\mathfrak{L}\) be a GNN layer of depth \(d\), and let \(p\) be the input dimension of \(\mathfrak{L}\). Then for all graphs \(G\) and all signals \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{S}_{p}(G)\) we have_
\[\big{\|}\mathfrak{L}(G,\mathbf{x})-\mathfrak{L}(G,\mathbf{x}^{\prime})\big{\|}_{\infty} \leq\pi\big{(}\operatorname{wt}(\mathfrak{L})\big{)}\left\|\mathbf{x}-\mathbf{x}^{ \prime}\right\|_{\infty}|G|. \tag{6.Y}\]
Proof.: The proof of Lemma 4.3 yields
\[\left\|\widetilde{\mathfrak{L}}(G,x)(v)-\widetilde{\mathfrak{L}}(G,x^{\prime})(v) \right\|_{\infty}\leq\lambda_{\mathsf{msg}}\lambda_{\mathsf{comb}}\left\|x-x^ {\prime}\right\|_{\infty}|G|,\]
where \(\lambda_{\mathsf{msg}}\), \(\lambda_{\mathsf{comb}}\) are Lipschitz constants for the message and combination functions of \(\mathfrak{L}\). It follows from Lemma 6.6 that \(\lambda_{\mathsf{msg}}\), \(\lambda_{\mathsf{comb}}\) can be chosen polynomial in the weight of \(\mathfrak{L}\).
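To see where the factor \(|G|\) in (6.Y) comes from, consider this minimal sketch (assuming numpy; identity messages, SUM aggregation, and a complete graph are chosen for simplicity): a uniform input shift of \(\delta\) moves each SUM aggregate by \(\deg(v)\cdot\delta=(n-1)\delta\).

```python
# Sketch only: the |G| factor for a SUM-aggregation layer on K_n.
import numpy as np

n, delta = 6, 1e-3
A = np.ones((n, n)) - np.eye(n)     # adjacency of the complete graph K_n
x = np.zeros(n)
xp = x + delta

agg = lambda s: A @ s               # SUM of (identity) messages from neighbours
print(np.max(np.abs(agg(x) - agg(xp))), (n - 1) * delta)
```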
**Corollary 6.13**.: _For every \(d\) there is a polynomial \(\pi(X)\) such that the following holds. Let \(\mathfrak{N}\) be a GNN of depth \(d\), and let \(p\) be the input dimension of \(\mathfrak{N}\). Then for all graphs \(G\) and all signals \(x,x^{\prime}\in\mathcal{S}_{p}(G)\) we have_
\[\left\|\widetilde{\mathfrak{N}}(G,x)-\widetilde{\mathfrak{N}}(G,x^{\prime})\right\|_{\infty}\leq\pi\big{(}\operatorname{wt}(\mathfrak{N})\big{)}\left\|x-x^{\prime}\right\|_{\infty}|G|^{d}.\]
**Lemma 6.14**.: _For every \(d\in\mathbb{N}_{>0}\) there exist polynomials \(\pi(X)\) and \(\pi^{\prime}(X,Y)\) such that the following holds. Let \(\mathfrak{L}\) be an rpl-approximable GNN layer of depth \(d\), and let \(p\) be the input dimension of \(\mathfrak{L}\). Then for all \(\varepsilon>0\) there exists a rational piecewise-linear GNN layer \(\mathfrak{L}^{\prime}\) of size at most \(\pi^{\prime}(\varepsilon^{-1},\operatorname{wt}(\mathfrak{L}))\) with the same skeleton as \(\mathfrak{L}\) such that the Lipschitz constants of the activation functions of \(\mathfrak{L}^{\prime}\) are at most twice the Lipschitz constants of the corresponding activation functions in \(\mathfrak{L}\) and for all graphs \(G\), all signals \(\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathcal{S}_{p}(G)\), and all vertices \(v\in V(G)\) it holds that_

\[\big{\|}\widetilde{\mathfrak{L}}(G,\boldsymbol{x})(v)-\widetilde{\mathfrak{L}}^{\prime}(G,\boldsymbol{x}^{\prime})(v)\big{\|}_{\infty}\leq\pi\big{(}\operatorname{wt}(\mathfrak{L})\big{)}\Big{(}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}\varepsilon+\left\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right\|_{\infty}\Big{)}|G|.\]
Proof.: Let \(\mathsf{msg}:\mathbb{R}^{2p}\to\mathbb{R}^{r}\) and \(\mathsf{comb}:\mathbb{R}^{p+r}\to\mathbb{R}^{q}\) be the message and combination functions of \(\mathfrak{L}\), and let \(\mathfrak{F}_{\mathsf{msg}}\) and \(\mathfrak{F}_{\mathsf{comb}}\) be the FNNs for these functions. By Lemma 6.10 there are rational piecewise linear FNNs \(\mathfrak{F}^{\prime}_{\mathsf{msg}}\) and \(\mathfrak{F}^{\prime}_{\mathsf{comb}}\) of size polynomial in \(\operatorname{wt}(\mathfrak{F}_{\mathsf{msg}}),\operatorname{wt}(\mathfrak{ F}_{\mathsf{comb}})\leq\operatorname{wt}(\mathfrak{L})\) with activation functions of Lipschitz constants at most twice the Lipschitz constants of the corresponding activation functions in \(\mathfrak{F}_{\mathsf{msg}}\), \(\mathfrak{F}_{\mathsf{comb}}\) such that for all \(x,x^{\prime}\in\mathbb{R}^{p}\) and \(z\in\mathbb{R}^{r}\) we have
\[\left\|\mathsf{msg}(x,x^{\prime})-\mathsf{msg}^{\prime}(x,x^{ \prime})\right\|_{\infty}\leq\big{(}\left\|(x,x^{\prime})\right\|_{\infty}+1 \big{)}\varepsilon,\] (6.Z) \[\left\|\mathsf{comb}(x,z)-\mathsf{comb}^{\prime}(x,z)\right\|_{\infty} \leq\big{(}\left\|(x,z)\right\|_{\infty}+1\big{)}\varepsilon.\] (6.AA)
Let \(\mathfrak{L}^{\prime}\) be the GNN layer with message function \(\mathsf{msg}^{\prime}\), combination function \(\mathsf{comb}^{\prime}\), and the same aggregation function \(\mathsf{agg}\) as \(\mathfrak{L}\). By Lemma 6.5 there is an \(\alpha\in\mathbb{N}_{>0}\) that is polynomial in \(\operatorname{wt}(\mathfrak{F}_{\mathsf{msg}})\) and hence polynomial in \(\operatorname{wt}(\mathfrak{L})\) such that for all \(x,x^{\prime}\in\mathbb{R}^{p}\) it holds that
\[\left\|\mathsf{msg}(x,x^{\prime})\right\|_{\infty}\leq\alpha\big{(}\max\big{\{} \left\|x\right\|_{\infty},\left\|x^{\prime}\right\|_{\infty}\big{\}}+1\big{)}.\] (6.AB)
By Lemma 6.6 there is an \(\alpha^{\prime}\in\mathbb{N}_{>0}\) polynomial in \(\operatorname{wt}(\mathfrak{F}^{\prime}_{\mathsf{comb}})\) and hence polynomial in \(\operatorname{wt}(\mathfrak{L})\) such that for all \(x,x^{\prime}\in\mathbb{R}^{p},z,z^{\prime}\in\mathbb{R}^{r}\) it holds that
\[\left\|\mathsf{comb}^{\prime}(x,z)-\mathsf{comb}^{\prime}(x^{\prime},z^{ \prime})\right\|_{\infty}\leq\alpha^{\prime}\max\big{\{}\left\|x-x^{\prime} \right\|_{\infty},\left\|z-z^{\prime}\right\|_{\infty}\big{\}}.\] (6.AC)
By Lemma 6.12 there is an \(\alpha^{\prime\prime}\in\mathbb{N}_{>0}\) polynomial in \(\operatorname{wt}(\mathfrak{L}^{\prime})\leq\operatorname{size}(\mathfrak{L}^{ \prime})\) and thus polynomial in \(\operatorname{wt}(\mathfrak{L})\) such that for all \(G\) and \(\boldsymbol{x}\in\mathcal{S}_{p}(G)\),
\[\big{\|}\widetilde{\mathfrak{L}}^{\prime}(G,\boldsymbol{x})-\widetilde{ \mathfrak{L}}^{\prime}(G,\boldsymbol{x}^{\prime})\big{\|}_{\infty}\leq\alpha^{ \prime\prime}\,\big{\|}\boldsymbol{x}-\boldsymbol{x}^{\prime}\big{\|}_{\infty }\,|G|.\] (6.AD)
Let \(G\) be a graph of order \(n=|G|\) and \(v\in V(G)\). Furthermore, let \(\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathcal{S}_{p}(G)\) and \(\boldsymbol{y}\coloneqq\widetilde{\mathfrak{L}}(G,\boldsymbol{x})\), \(\boldsymbol{y}^{\prime}\coloneqq\widetilde{\mathfrak{L}}^{\prime}(G, \boldsymbol{x})\), \(\boldsymbol{y}^{\prime\prime}\coloneqq\widetilde{\mathfrak{L}}^{\prime}(G, \boldsymbol{x}^{\prime})\). Then
\[\big{\|}\widetilde{\mathfrak{L}}(G,\boldsymbol{x})(v)-\widetilde{\mathfrak{ L}}^{\prime}(G,\boldsymbol{x}^{\prime})(v)\big{\|}_{\infty}\leq\big{\|} \boldsymbol{y}(v)-\boldsymbol{y}^{\prime}(v)\big{\|}_{\infty}+\big{\|} \boldsymbol{y}^{\prime}(v)-\boldsymbol{y}^{\prime\prime}(v)\big{\|}_{\infty}\,.\]
By (6.AD) we have
\[\big{\|}\boldsymbol{y}^{\prime}(v)-\boldsymbol{y}^{\prime\prime}(v)\big{\|}_{ \infty}\leq\alpha^{\prime\prime}n\,\big{\|}\boldsymbol{x}-\boldsymbol{x}^{ \prime}\big{\|}_{\infty}\,.\] (6.AE)
Thus we need to bound \(\|\boldsymbol{y}(v)-\boldsymbol{y}^{\prime}(v)\|_{\infty}\). Let

\[\boldsymbol{z}(v) \coloneqq\operatorname{\mathsf{agg}}\Bigl{(}\big{\{}\mathsf{msg}(\boldsymbol{x}(v),\boldsymbol{x}(w))\,\big{|}\,w\in N_{G}(v)\big{\}}\Bigr{)},\qquad\boldsymbol{z}^{\prime}(v) \coloneqq\operatorname{\mathsf{agg}}\Bigl{(}\big{\{}\mathsf{msg}^{\prime}(\boldsymbol{x}(v),\boldsymbol{x}(w))\,\big{|}\,w\in N_{G}(v)\big{\}}\Bigr{)}.\]

Then by (6.Z), we have

\[\big{\|}\boldsymbol{z}(v)-\boldsymbol{z}^{\prime}(v)\big{\|}_{\infty}\leq n\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}\varepsilon.\] (6.AF)

Furthermore, by (6.AB), for all \(w\in N(v)\) we have

\[\big{\|}\mathsf{msg}\bigl{(}\boldsymbol{x}(v),\boldsymbol{x}(w)\bigr{)}\big{\|}_{\infty}\leq\alpha\Bigl{(}\big{\|}\boldsymbol{x}\big{\|}_{\infty}+1\Bigr{)}.\]

Thus, since \(\alpha\geq 1\) and \(n\geq 1\),

\[\big{\|}\big{(}\boldsymbol{x}(v),\boldsymbol{z}(v)\big{)}\big{\|}_{\infty}=\max\Bigl{\{}\,\|\boldsymbol{x}(v)\|_{\infty}\,,\|\boldsymbol{z}(v)\|_{\infty}\,\Bigr{\}}\leq\alpha n\Bigl{(}\big{\|}\boldsymbol{x}\big{\|}_{\infty}+1\Bigr{)}.\] (6.AG)

Putting things together, we get

\[\begin{aligned}\big{\|}\boldsymbol{y}(v)-\boldsymbol{y}^{\prime}(v)\big{\|}_{\infty}&=\big{\|}\operatorname{\mathsf{comb}}\bigl{(}\boldsymbol{x}(v),\boldsymbol{z}(v)\bigr{)}-\operatorname{\mathsf{comb}}^{\prime}\bigl{(}\boldsymbol{x}(v),\boldsymbol{z}^{\prime}(v)\bigr{)}\big{\|}_{\infty}\\&\leq\big{\|}\operatorname{\mathsf{comb}}\bigl{(}\boldsymbol{x}(v),\boldsymbol{z}(v)\bigr{)}-\operatorname{\mathsf{comb}}^{\prime}\bigl{(}\boldsymbol{x}(v),\boldsymbol{z}(v)\bigr{)}\big{\|}_{\infty}+\big{\|}\operatorname{\mathsf{comb}}^{\prime}\bigl{(}\boldsymbol{x}(v),\boldsymbol{z}(v)\bigr{)}-\operatorname{\mathsf{comb}}^{\prime}\bigl{(}\boldsymbol{x}(v),\boldsymbol{z}^{\prime}(v)\bigr{)}\big{\|}_{\infty}\\&\leq\big{(}\,\|(\boldsymbol{x}(v),\boldsymbol{z}(v))\|_{\infty}+1\big{)}\varepsilon+\alpha^{\prime}\,\big{\|}\boldsymbol{z}(v)-\boldsymbol{z}^{\prime}(v)\big{\|}_{\infty}&&\text{by (6.AA) and (6.AC)}\\&\leq 2\alpha n\Bigl{(}\big{\|}\boldsymbol{x}\big{\|}_{\infty}+1\Bigr{)}\varepsilon+\alpha^{\prime}n\bigl{(}\big{\|}\boldsymbol{x}\big{\|}_{\infty}+1\bigr{)}\varepsilon&&\text{by (6.AG) and (6.AF)}\\&\leq(2\alpha+\alpha^{\prime})n\bigl{(}\,\big{\|}\boldsymbol{x}\big{\|}_{\infty}+1\bigr{)}\varepsilon.\end{aligned}\]
Combined with (6.AE), this yields the assertion of the lemma.
Now we are ready to prove the main lemma of this section.
**Lemma 6.15**.: _For every \(d\in\mathbb{N}_{>0}\) there exists a polynomial \(\pi(X,Y)\) such that the following holds. Let \(\mathfrak{N}\) be an rpl-approximable GNN of depth \(d\), and let \(p\) be the input dimension of \(\mathfrak{N}\). Then for all \(\varepsilon>0\) there exists a rational piecewise-linear GNN \(\mathfrak{N}^{\prime}\) of size at most \(\pi(\varepsilon^{-1},\operatorname{wt}(\mathfrak{N}))\) with the same skeleton as \(\mathfrak{N}\) such that the Lipschitz constants of the activation functions of \(\mathfrak{N}^{\prime}\) are at most twice the Lipschitz constants of the corresponding activation functions in \(\mathfrak{N}\) and for all graphs \(G\) and all signals \(\boldsymbol{x}\in\mathcal{S}_{p}(G)\) it holds that_

\[\big{\|}\widetilde{\mathfrak{N}}(G,\boldsymbol{x})-\widetilde{\mathfrak{N}}^{\prime}(G,\boldsymbol{x})\big{\|}_{\infty}\leq|G|^{d}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}\varepsilon.\]
Proof.: Suppose that \(\mathfrak{N}=(\mathfrak{L}_{1},\ldots,\mathfrak{L}_{d})\). For every \(t\in[d]\), let \(p_{t-1}\) be the input dimension of \(\mathfrak{L}_{t}\). Then \(p=p_{0}\). By Lemma 6.11 there is an \(\alpha\) polynomial in \(\operatorname{wt}(\mathfrak{N})\) such that for all \(t\in[d]\), \(G\), and \(\boldsymbol{x}\in\mathcal{S}_{p_{t-1}}(G)\) we have
\[\big{\|}\widetilde{\mathfrak{L}}_{t}(G,\boldsymbol{x})\big{\|}_{\infty}\leq \alpha|G|\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}.\] (6.AH)
Let \(\pi(X)\) be the one-variable polynomial of Lemma 6.14,

\[\alpha^{\prime}\coloneqq\max_{t\in[d]}\pi\big{(}\operatorname{wt}(\mathfrak{L}_{t})\big{)},\]
and
\[\beta \coloneqq 3\max\{\alpha,\alpha^{\prime}\},\] \[\varepsilon^{\prime} \coloneqq\frac{\varepsilon}{\beta^{d}}.\]
Note that \((\varepsilon^{\prime})^{-1}\) is polynomially bounded in \(\varepsilon^{-1}\) and \(\operatorname{wt}(\mathfrak{N})\). For every \(t\in[d]\), we apply Lemma 6.14 to \(\mathfrak{L}_{t}\) and \(\varepsilon^{\prime}\) and obtain a rational piecewise-linear GNN layer \(\mathfrak{L}_{t}^{\prime}\) such that for all graphs \(G\) and all signals \(\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathcal{S}_{p_{t-1}}(G)\) we have
\[\big{\|}\widetilde{\mathfrak{L}}_{t}(G,\boldsymbol{x})-\widetilde{\mathfrak{ L}}_{t}^{\prime}(G,\boldsymbol{x}^{\prime})\big{\|}_{\infty}\leq\alpha^{ \prime}|G|\Big{(}\left\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right\|_{ \infty}+\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}\varepsilon^{ \prime}\Big{)}.\] (6.AI)
Let \(G\) be a graph of order \(n\coloneqq|G|\), and \(\boldsymbol{x}\in\mathcal{S}_{p}(G)\). Let \(\boldsymbol{x}_{0}\coloneqq\boldsymbol{x}_{0}^{\prime}\coloneqq\boldsymbol{x}\), and for \(t\in[d]\), let \(\boldsymbol{x}_{t}\coloneqq\widetilde{\mathfrak{L}}_{t}(G,\boldsymbol{x}_{t-1})\) and \(\boldsymbol{x}_{t}^{\prime}\coloneqq\widetilde{\mathfrak{L}}_{t}^{\prime}(G,\boldsymbol{x}_{t-1}^{\prime})\). We shall prove that for all \(t\in\{0,\ldots,d\}\) we have
\[\left\|\boldsymbol{x}_{t}\right\|_{\infty} \leq\beta^{t}n^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1 \big{)},\] (6.AJ) \[\big{\|}\boldsymbol{x}_{t}-\boldsymbol{x}_{t}^{\prime}\big{\|}_{\infty} \leq\beta^{t}n^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1 \big{)}\varepsilon^{\prime}.\] (6.AK)
Since \(\beta^{d}\varepsilon^{\prime}=\varepsilon\) and \(\boldsymbol{x}_{d}=\widetilde{\mathfrak{N}}(G,\boldsymbol{x})\), \(\boldsymbol{x}_{d}^{\prime}=\widetilde{\mathfrak{N}}^{\prime}(G,\boldsymbol{x})\), (6.AK) implies the assertion of the lemma.
We prove (6.AJ) and (6.AK) by induction on \(t\). The base step \(t=0\) is trivial, because \(\boldsymbol{x}_{0}=\boldsymbol{x}\) and \(\boldsymbol{x}_{0}^{\prime}=\boldsymbol{x}\). For the inductive step, let \(t\geq 1\). By (6.AH) and the induction hypothesis we have
\[\left\|\boldsymbol{x}_{t}\right\|_{\infty}\leq\alpha n\big{(}\left\| \boldsymbol{x}_{t-1}\right\|_{\infty}+1\big{)}\]
\[\leq\alpha n\Big{(}\beta^{t-1}n^{t-1}\big{(}\left\|\boldsymbol{x} \right\|_{\infty}+1\big{)}+1\Big{)}\] \[\leq\beta^{t}n^{t}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1 \big{)},\]
where the last inequality holds because \(2\alpha\leq\beta\). This proves (6.AJ).
By (6.AI) we have
\[\left\|\boldsymbol{x}_{t}-\boldsymbol{x}_{t}^{\prime}\right\|_{\infty}\leq \alpha^{\prime}n\Big{(}\left\|\boldsymbol{x}_{t-1}-\boldsymbol{x}_{t-1}^{ \prime}\right\|_{\infty}+\big{(}\left\|\boldsymbol{x}_{t-1}\right\|_{\infty}+ 1\big{)}\varepsilon^{\prime}\Big{)}.\] (6.AL)
By induction hypothesis (6.AK),
\[\alpha^{\prime}n\left\|\boldsymbol{x}_{t-1}-\boldsymbol{x}_{t-1}^ {\prime}\right\|_{\infty} \leq\alpha^{\prime}n\beta^{t-1}n^{t-1}\big{(}\left\|\boldsymbol{x }\right\|_{\infty}+1\big{)}\varepsilon^{\prime}\] \[\leq\frac{1}{3}\beta^{t}n^{t}\big{(}\left\|\boldsymbol{x}\right\| _{\infty}+1\big{)}\varepsilon^{\prime}.\] (6.AM)
By induction hypothesis (6.AJ),
\[\alpha^{\prime}n\big{(}\left\|\boldsymbol{x}_{t-1}\right\|_{ \infty}+1\big{)}\varepsilon^{\prime} \leq\alpha^{\prime}n\Big{(}\beta^{t-1}n^{t-1}\big{(}\left\| \boldsymbol{x}\right\|_{\infty}+1\big{)}+1\Big{)}\varepsilon^{\prime}\] \[\leq\frac{2}{3}\beta^{t}n^{t}\big{(}\left\|\boldsymbol{x}\right\| _{\infty}+1\big{)}\varepsilon^{\prime}\] (6.AN)
Plugging (6.AM) and (6.AN) into (6.AL), we obtain the desired inequality (6.AK).
### Proof of Theorem 6.2
Let us first remark that we cannot directly apply Theorem 5.1 (the "uniform theorem") to a family of rational piecewise-linear GNNs approximating the GNNs in our family \(\mathscr{N}\). The reason is that in Theorem 5.1 the GNN is "hardwired" in the formula, whereas in our non-uniform setting we obtain a different GNN for every input size. Instead, we encode the sequence of rational piecewise-linear GNNs we obtain into the numerical built-in relations. Then our formula evaluates these GNNs directly on the numerical side of the structures.
Proof of Theorem 6.2.: Let \(\mathscr{N}=(\mathfrak{N}^{(n)})_{n\in\mathbb{N}}\). Furthermore, let \(d\) be an upper bound on the depth of all the \(\mathfrak{N}^{(n)}\). Without loss of generality we assume that every \(\mathfrak{N}^{(n)}\) has exactly \(d\) layers \(\mathfrak{L}_{1}^{(n)},\ldots,\mathfrak{L}_{d}^{(n)}\). For \(t\in[d]\), let \(p_{t-1}^{(n)}\) and \(p_{t}^{(n)}\) be the input and output dimension of \(\mathfrak{L}_{t}^{(n)}\). Then \(p^{(n)}=p_{0}^{(n)}\) is the input dimension of \(\mathfrak{N}^{(n)}\) and \(q^{(n)}\coloneqq p_{d}^{(n)}\) is the output dimension. By the definition of the weight, the \(p_{t}^{(n)}\) are polynomially bounded in \(n\). Let \(\lambda^{(n)}\in\mathbb{N}\) be a Lipschitz constant for all activation functions in \(\mathfrak{N}^{(n)}\). By the definition of the weight of an FNN and GNN we may choose \(\lambda^{(n)}\) polynomial in \(\operatorname{wt}(\mathfrak{N}^{(n)})\) and thus in \(n\).
For all \(k,n\in\mathbb{N}_{>0}\), we let \(\mathfrak{N}^{(n,k)}\) be a rational piecewise-linear GNN with the same skeleton as \(\mathfrak{N}^{(n)}\), of size polynomial in \(\operatorname{wt}(\mathfrak{N}^{(n)})\) (hence also polynomial in \(n\)) and in \(k\), such that all activation functions of \(\mathfrak{N}^{(n,k)}\) have Lipschitz constant at most \(2\lambda^{(n)}\), and for all graphs \(G\) of order \(n\) and all signals \(\boldsymbol{x}\in\mathcal{S}_{p^{(n)}}(G)\) it holds that

\[\left\|\widetilde{\mathfrak{N}}^{(n)}(G,\boldsymbol{x})-\widetilde{\mathfrak{N}}^{(n,k)}(G,\boldsymbol{x})\right\|_{\infty}\leq\frac{n^{d}}{k}\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}.\] (6.AO)
Such an \(\mathfrak{N}^{(n,k)}\) exists by Lemma 6.15. Let \(\mathfrak{L}_{1}^{(n,k)},\ldots,\mathfrak{L}_{d}^{(n,k)}\) be the layers of \(\mathfrak{N}^{(n,k)}\).
We want to describe the GNNs \(\mathfrak{N}^{(n,k)}\) with built-in relations, using an encoding similar to the F-schemes. We cannot just use the same encoding as for the F-schemes because the non-uniform logic \(\mathsf{FO}+\mathsf{C}_{nu}\) does not allow for built-in numerical functions.6
Footnote 6: The reader may wonder why we do not simply allow for built-in function variables to avoid this difficulty. The reason is that then we could build terms whose growth is no longer polynomially bounded in the size of the input structure, which would make the logic too powerful. In particular, the logic would no longer be contained in \(\mathsf{TC}^{0}\).
Let \(t\in[d]\). In the following, we define the built-in relations that describe the \(t\)th layers \(\mathfrak{L}_{t}^{(n,k)}\) of all the \(\mathfrak{N}^{(n,k)}\). We need to describe FNNs \(\mathfrak{F}_{\mathsf{msg}}^{(n,k)}\) and \(\mathfrak{F}_{\mathsf{comb}}^{(n,k)}\) for the message and combination functions of the layers, and in addition we need to describe the aggregation function. For the aggregation functions we use three relations \(A_{t}^{\mathsf{SUM}},A_{t}^{\mathsf{MEAN}},A_{t}^{\mathsf{MAX}}\subseteq \mathbb{N}^{2}\), where
\[A_{t}^{\mathsf{SUM}}\coloneqq\big{\{}(n,k)\in\mathbb{N}^{2}\,\big{|}\,\, \mathfrak{a}_{t}^{(n,k)}=\mathsf{SUM}\big{\}},\]
and \(A_{t}^{\mathsf{MEAN}}\) and \(A_{t}^{\mathsf{MAX}}\) are defined similarly. For each of the two FNNs \(\mathfrak{F}_{\mathsf{msg}}^{(n,k)}\) and \(\mathfrak{F}_{\mathsf{comb}}^{(n,k)}\) we use 18 relations. We only describe the encoding of \(\mathfrak{F}_{\mathsf{msg}}^{(n,k)}\) using relations \(M_{t}^{1},\ldots,M_{t}^{18}\). The encoding of \(\mathfrak{F}_{\mathsf{comb}}^{(n,k)}\) is analogous using a fresh set of 18 relations \(C_{t}^{1},\ldots,C_{t}^{18}\).
Say,
\[\mathfrak{F}_{\mathsf{msg}}^{(n,k)}=\Big{(}V^{(n,k)},E^{(n,k)},(\mathfrak{a}_ {v}^{(n,k)})_{v\in V^{(n,k)}},(w_{e}^{(n,k)})_{e\in E^{(n,k)}},(b_{v}^{(n,k)} )_{v\in V^{(n,k)}}\Big{)},\]
where without loss of generality we assume that \(V^{(n,k)}\) is an initial segment of \(\mathbb{N}\). We use relations \(M_{t}^{1}\subseteq\mathbb{N}^{3}\) and \(M_{t}^{2}\subseteq\mathbb{N}^{4}\) to describe the vertex sets \(V^{(n,k)}\) and the edge sets \(E^{(n,k)}\), letting
\[M_{t}^{1} \coloneqq\big{\{}(n,k,v)\,\big{|}\,v\in V^{(n,k)}\big{\}},\] \[M_{t}^{2} \coloneqq\big{\{}(n,k,v,w)\,\big{|}\,vw\in E^{(n,k)}\big{\}}.\]
As the bitsize of the skeleton of \(\mathfrak{N}^{(n)}\) and hence \(|V^{(n,k)}|\) is polynomially bounded in \(n\), there is an arithmetical term \(\theta_{V}(y,y^{\prime})\) such that for all graphs \(G\) and all assignments \(\omega\) we have
\[[\![\theta_{V}]\!]^{(G,\omega)}\,(n,k)=|V^{(n,k)}|\]
This term uses the constant \(\mathsf{ord}\) as well as the built-in relation \(M_{t}^{1}\). It does not depend on the graph \(G\) or the assignment \(\omega\).
For the weights, we use the relations \(M_{t}^{3},M_{t}^{4},M_{t}^{5}\subseteq\mathbb{N}^{5}\), letting
\[M_{t}^{3} \coloneqq\big{\{}(n,k,v,w,r)\,\big{|}\,(v,w)\in E^{(n,k)}\text{ with }w_{e}^{(n,k)}=(-1)^{r}2^{-s}m\big{\}},\] \[M_{t}^{4} \coloneqq\big{\{}(n,k,v,w,s)\,\big{|}\,(v,w)\in E^{(n,k)}\text{ with }w_{e}^{(n,k)}=(-1)^{r}2^{-s}m\big{\}},\] \[M_{t}^{5} \coloneqq\big{\{}(n,k,v,w,i)\,\big{|}\,(v,w)\in E^{(n,k)}\text{ with }w_{e}^{(n,k)}=(-1)^{r}2^{-s}m\text{ and }\operatorname{ Bit}(i,m)=1\big{\}}.\]
We always assume that \(w_{e}^{(n,k)}=(-1)^{r}2^{-s}m\) is the _canonical representation_ of \(w_{e}^{(n,k)}\): either \(r=s=m=0\), or \(r\in\{0,1\}\), \(s=0\), and \(m\neq 0\), or \(r\in\{0,1\}\), \(s\neq 0\), and \(m\) odd. As the bitsize of the numbers \(w_{e}^{(n,k)}\) is polynomial in \(k\), we can easily construct an arithmetical r-expression \(\boldsymbol{\rho}_{w}(y,y^{\prime},z,z^{\prime})\) such that for all graphs \(G\) and assignments \(\omega\) and \(n,k,v,w\in\mathbb{N}\) such that \((v,w)\in E^{(n,k)}\),
\[\langle\!\langle\boldsymbol{\rho}_{w}\rangle\!\rangle^{(G,\,\omega)}\,(n,k,v,w )=w_{e}^{(n,k)}.\]
This r-expression depends on the built-in relations \(M_{t}^{i}\), but not on \(G\) or \(\omega\).
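For illustration, here is a minimal Python sketch of how a weight is recovered from the data stored in \(M_{t}^{3},M_{t}^{4},M_{t}^{5}\); the function name and the list representation of the bit relation are our own and not part of the construction:

```python
from fractions import Fraction

def decode_weight(r: int, s: int, m_bits: list[int]) -> Fraction:
    """Recover w = (-1)**r * 2**(-s) * m from the encoded data.

    r is the sign bit stored in M_t^3, s the exponent stored in M_t^4,
    and m_bits[i] = 1 iff (n, k, v, w, i) lies in M_t^5, i.e. bit i of
    the numerator m is set.
    """
    m = sum(bit << i for i, bit in enumerate(m_bits))
    return Fraction((-1) ** r * m, 2 ** s)

# the canonical representation of -5/4 is r = 1, s = 2, m = 5 (binary 101)
assert decode_weight(1, 2, [1, 0, 1]) == Fraction(-5, 4)
```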
Similarly, we define the relations \(M_{t}^{6},M_{t}^{7},M_{t}^{8}\subseteq\mathbb{N}^{4}\) for the biases and an r-expression \(\boldsymbol{\rho}_{b}(y,y^{\prime},z)\) such that for all graphs \(G\) and assignments \(\omega\) and \(n,k,v\in\mathbb{N}\) with \(v\in V^{(n,k)}\),
\[\langle\!\langle\boldsymbol{\rho}_{b}\rangle\!\rangle^{(G,\,\omega)}\,(n,k,v) =b_{v}^{(n,k)}.\]
To store the activation functions \(\mathfrak{a}_{v}^{(n,k)}\) we use the remaining ten relations \(M_{t}^{9}\subseteq\mathbb{N}^{4}\), \(M_{t}^{10},\ldots,M_{t}^{18}\subseteq\mathbb{N}^{5}\). The relation \(M_{t}^{9}\) is used to store the number \(m_{v}^{(n,k)}\) of thresholds:
\[M_{t}^{9}\coloneqq\{(n,k,v,m_{v}^{(n,k)})\,\big{|}\,v\in V^{(n,k)}\}.\]
As the bitsize of \(\mathfrak{a}_{v}^{(n,k)}\) is polynomial in \(k\), the number \(m_{v}^{(n,k)}\) is bounded by a polynomial in \(k\). Thus we can construct an arithmetical term \(\theta_{\mathfrak{a}}(y,y^{\prime},z)\) such that for all graphs \(G\), all assignments \(\omega\), all \(n,k\in\mathbb{N}\), and all \(v\in V^{(n,k)}\) we have
\[\llbracket\theta_{\mathfrak{a}}\rrbracket^{(G,\,\omega)}\,(n,k,v)=m_{v}^{(n,k)}\]
Of course this term needs to use the built-in relation \(M_{t}^{9}\). It does not depend on the graph \(G\) or the assignment \(\omega\).
The relations \(M_{t}^{10},M_{t}^{11},M_{t}^{12}\) are used to store the thresholds. Say, the thresholds of \(\mathfrak{a}_{v}^{(n,k)}\) are \(t_{v,1}^{(n,k)}<\ldots<t_{v,m}^{(n,k)}\), where \(m=m_{v}^{(n,k)}\). We let

\[M_{t}^{10} \coloneqq\big{\{}(n,k,v,i,r)\,\big{|}\,v\in V^{(n,k)},1\leq i\leq m_{v}^{(n,k)}\text{ with }t_{v,i}^{(n,k)}=(-1)^{r}2^{-s}m\big{\}},\] \[M_{t}^{11} \coloneqq\big{\{}(n,k,v,i,s)\,\big{|}\,v\in V^{(n,k)},1\leq i\leq m_{v}^{(n,k)}\text{ with }t_{v,i}^{(n,k)}=(-1)^{r}2^{-s}m\big{\}},\] \[M_{t}^{12} \coloneqq\big{\{}(n,k,v,i,j)\,\big{|}\,v\in V^{(n,k)},1\leq i\leq m_{v}^{(n,k)}\text{ with }t_{v,i}^{(n,k)}=(-1)^{r}2^{-s}m\text{ and }\operatorname{Bit}(j,m)=1\big{\}}.\]
We always assume that \(t_{v,i}^{(n,k)}=(-1)^{r}2^{-s}m\) is the canonical representation of \(t_{v,i}^{(n,k)}\). As the bitsize of \(\mathfrak{a}_{v}^{(n,k)}\) is polynomial in \(k\), we can construct an arithmetical r-expression \(\boldsymbol{\rho}_{t}(y,y^{\prime},z,z^{\prime})\) such that for all graphs \(G\) and assignments \(\omega\) and \(n,k,v,i\in\mathbb{N}\) with \(v\in V^{(n,k)}\), \(i\in[m_{v}^{(n,k)}]\),
\[\langle\!\langle\boldsymbol{\rho}_{t}\rangle\!\rangle^{(G,\,\omega)}\,(n,k,v,i) =t_{v,i}^{(n,k)}.\]
Similarly, we use the relations \(M_{t}^{13},M_{t}^{14},M_{t}^{15}\) to represent the slopes of \(\mathfrak{a}_{v}^{(n,k)}\) and the relations \(M_{t}^{16},M_{t}^{17},M_{t}^{18}\) to represent the constants. Furthermore, we construct arithmetical r-expressions \(\boldsymbol{\rho}_{s}(y,y^{\prime},z,z^{\prime})\) and \(\boldsymbol{\rho}_{c}(y,y^{\prime},z,z^{\prime})\) to access them. We can combine the term \(\theta_{\mathfrak{a}}\) and the r-expressions \(\boldsymbol{\rho}_{t}(y,y^{\prime},z,z^{\prime})\), \(\boldsymbol{\rho}_{s}(y,y^{\prime},z,z^{\prime})\), \(\boldsymbol{\rho}_{c}(y,y^{\prime},z,z^{\prime})\) to an L-expression
\(\mathbf{\chi}(y,y^{\prime},z)\) such that for all graphs \(G\), all assignments \(\mathpzc{a}\), all \(n,k\in\mathbb{N}\), and all \(v\in V^{(n,k)}\) we have
\[\left\langle\!\left\langle\mathbf{\chi}\right\rangle\!\right\rangle^{(G,\mathpzc{a} )}(n,k,v)=\mathfrak{a}_{v}^{(n,k)}.\]
Then we can combine the term \(\theta_{V}\), the edge relation \(M_{t}^{2}\), the r-expressions \(\boldsymbol{\rho}_{w},\boldsymbol{\rho}_{b}\), and the L-expression \(\boldsymbol{\chi}\) to an F-expression \(\boldsymbol{\varphi}_{t}^{\mathsf{msg}}(y,y^{\prime})\) such that for all graphs \(G\), all assignments \(\mathpzc{a}\), and all \(n,k\in\mathbb{N}\) we have
\[\left\langle\!\left\langle\mathbf{\varphi}_{t}^{\mathsf{msg}}\right\rangle\! \right\rangle^{(G,\mathpzc{a})}(n,k)=\mathfrak{F}_{\mathsf{msg}}^{(n,k)}.\]
Similarly, we obtain an F-expression \(\mathbf{\varphi}_{t}^{\mathsf{comb}}(y,y^{\prime})\) for the combination function which uses the relation \(C_{t}^{1},\ldots,C_{t}^{18}\) that represent \(\mathfrak{F}_{\mathsf{comb}}^{(n,k)}\).
In addition to the \(\mathfrak{N}^{(n,k)}\), we also need access to the Lipschitz constants \(\lambda^{(n)}\). We use one more built-in relation
\[L\coloneqq\{(n,\lambda^{(n)})\,\big{|}\,n\in\mathbb{N}\}\subseteq\mathbb{N}^ {2}.\]
As \(\lambda^{(n)}\) is polynomially bounded in \(n\), there is a term \(\theta_{\lambda}(y)\) using the built-in relation \(L\) such that for all \(G\), \(\mathpzc{a}\) and all \(n\),
\[\left[\!\left[\theta_{\lambda}\right]\!\right]^{(G,\mathpzc{a})}(n)=\lambda^ {(n)}.\]
_Claim 1._ For all \(t\in[d]\), there is an arithmetical term \(\eta_{t}(y,y^{\prime})\) such that for all \(n,k\in\mathbb{N}\) and all \(G\), \(\mathpzc{a}\), the value \(\left[\!\left[\eta_{t}\right]\!\right]^{(G,\mathpzc{a})}(n,k)\) is a Lipschitz constant for the combination function \(\mathsf{comb}_{t}^{(n,k)}\) of \(\mathfrak{L}_{t}^{(n,k)}\).
_Proof._ Using Lemma 6.6 we can easily obtain such a Lipschitz constant using the fact that \(2\lambda^{(n)}\) is a Lipschitz constant for all activation functions of \(\mathsf{comb}_{t}^{(n,k)}\), and using the term \(\theta_{\lambda}(y)\) to access \(\lambda^{(n)}\) and the F-expression \(\mathbf{\varphi}_{t}^{\mathsf{comb}}(y,y^{\prime})\) to access the FNN for \(\mathsf{comb}_{t}^{(n,k)}\).
_Claim 2._ For all \(t\in[d]\), there is an arithmetical term \(\zeta_{t}(y,y^{\prime})\) such that for all \(n,k\in\mathbb{N}\), all \(G\), \(\mathpzc{a}\), and all \(\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathcal{S}_{p_{t-1}^{(n)}}(G)\),

\[\big{\|}\widetilde{\mathfrak{L}}_{t}^{(n,k)}(G,\boldsymbol{x})-\widetilde{\mathfrak{L}}_{t}^{(n,k)}(G,\boldsymbol{x}^{\prime})\big{\|}_{\infty}\leq\llbracket\zeta_{t}\rrbracket^{(G,\mathpzc{a})}(n,k)\cdot n\cdot\big{\|}\boldsymbol{x}-\boldsymbol{x}^{\prime}\big{\|}_{\infty}.\]
_Proof._ To construct this term we use Lemma 6.12.
Note that the bound \(\llbracket\zeta_{t}\rrbracket^{(G,\mathpzc{a})}(n,k)\) is independent of the graph \(G\) and the assignment \(\mathpzc{a}\) and only depends on \(\mathfrak{L}_{t}^{(n,k)}\) and \(\lambda^{(n)}\).
The following claim is an analogue of Lemma 5.2.
_Claim 3._ Let \(t\in[d]\). Let \(\mathbf{X}\) be an r-schema of type \(\mathsf{vn}\to\mathtt{r}\), and let \(W\) be a function variable of type \(\varnothing\to\mathtt{n}\). Then there is a guarded r-expression \(\mathsf{l}\mathsf{-eval}_{t}(y,y^{\prime},x,y^{\prime\prime})\) such
that the following holds for all \(n,k\in\mathbb{N}\), all graphs \(G\), and all assignments \(\omega\) over \(G\). Let \(\boldsymbol{x}\in\mathcal{S}_{p_{t-1}^{(n)}}(G)\) be the signal defined by

\[\boldsymbol{x}(v)\coloneqq\left(\langle\!\langle\boldsymbol{X}\rangle\!\rangle^{(G,\omega)}\,(v,0),\ldots,\langle\!\langle\boldsymbol{X}\rangle\!\rangle^{(G,\omega)}\,(v,p_{t-1}^{(n)}-1)\right),\]

and let \(\boldsymbol{y}\coloneqq\widetilde{\mathfrak{L}}_{t}^{(n,k)}(G,\boldsymbol{x})\in\mathcal{S}_{p_{t}^{(n)}}(G)\). Then for all \(v\in V(G)\),

\[\left\|\boldsymbol{y}(v)-\left(\langle\!\langle\mathsf{l\text{-}eval}_{t}\rangle\!\rangle^{(G,\omega)}\,(n,k,v,0),\ldots,\langle\!\langle\mathsf{l\text{-}eval}_{t}\rangle\!\rangle^{(G,\omega)}\,(n,k,v,p_{t}^{(n)}-1)\right)\right\|_{\infty}\leq 2^{-\omega(W)}.\]
Proof.: The proof is completely analogous to the proof of Lemma 5.2, except that in the proof of Claims 1 and 2 we use Lemma 3.23 instead of Corollary 3.25 to evaluate the FNNs computing the message function and combination function. We substitute suitable instantiations of the r-expressions \(\boldsymbol{\rho}_{w},\boldsymbol{\rho}_{b}\) for the r-schemas \(\boldsymbol{Z}_{v},\boldsymbol{Z}_{e}\) representing the parameters of the FNN, and suitable instantiations of the L-expression \(\boldsymbol{\chi}\) for the L-schemas \(\boldsymbol{Y}_{v}\) representing the activation functions in Lemma 3.23.
Furthermore, in Case 3 of the proof of Lemma 5.2 (handling MEAN-aggregation) we need a term that defines a Lipschitz constant for the combination function of \(\mathfrak{L}_{t}^{(n,k)}\). (In the proof of Lemma 5.2, this is the constant \(\lambda\).) We can use the term \(\eta_{t}(y,y^{\prime})\) of Claim 1.
The next claim is the analogue of Theorem 5.1 for our setting with built-in relations.
_Claim 4._ Let \(\boldsymbol{X}\) be an r-schema of type \(\mathtt{vn\to r}\), and let \(W\) be a function variable of type \(\varnothing\to\mathtt{n}\). Then there is a guarded r-expression \(\mathsf{gnn\text{-}eval}(y,y^{\prime},x,y^{\prime\prime})\) such that the following holds for all \(n,k\in\mathbb{N}\), all graphs \(G\), and all assignments \(\omega\) over \(G\). Let \(\boldsymbol{x}\in\mathcal{S}_{p^{(n)}}(G)\) be the signal defined by
\[\boldsymbol{x}(v)\coloneqq\left(\langle\!\langle\boldsymbol{X}\rangle\!\rangle^{(G,\omega)}\,(v,0),\ldots,\langle\!\langle\boldsymbol{X}\rangle\!\rangle^{(G,\omega)}\,(v,p^{(n)}-1)\right),\]

and let \(\boldsymbol{y}\coloneqq\widetilde{\mathfrak{N}}^{(n,k)}(G,\boldsymbol{x})\in\mathcal{S}_{q^{(n)}}(G)\). Then for all \(v\in V(G)\),

\[\left\|\boldsymbol{y}(v)-\left(\langle\!\langle\mathsf{gnn\text{-}eval}\rangle\!\rangle^{(G,\omega)}\,(n,k,v,0),\ldots,\langle\!\langle\mathsf{gnn\text{-}eval}\rangle\!\rangle^{(G,\omega)}\,(n,k,v,q^{(n)}-1)\right)\right\|_{\infty}\leq 2^{-\omega(W)}.\] (6.AP)
Proof.: The proof is completely analogous to the proof of Theorem 5.1, using Claim 3 instead of Lemma 5.2. In the proof of Theorem 5.1 we need access to a term that defines a constant \(\lambda^{(t)}\) that bounds the growth of the \(t\)th layer of \(\mathfrak{N}^{(n,k)}\). We use the term \(\zeta_{t}(y,y^{\prime})\) of Claim 2.
What remains to be done is to choose the right \(k\) to achieve the desired approximation error in (6.D). We will define \(k\) using a closed term err that depends on \(W,W^{\prime}\) as well as the order of the input graph. We let
\[\mathsf{err}\coloneqq 2\mathsf{ord}\cdot\left(W+1\right)\cdot W^{\prime}.\]
In the following, let us assume that \(G\) is a graph of order \(n\) and \(\omega\) is an assignment over \(G\) satisfying the two assumptions \(\omega(W^{\prime})\neq 0\) and \(\left\|\boldsymbol{x}\right\|_{\infty}\leq\omega(W)\) for the signal \(\boldsymbol{x}\in\mathcal{S}_{p^{(n)}}(G)\) defined by \(\boldsymbol{X}\) as in (6.C). Let
\[k\coloneqq\llbracket\mathsf{err}\rrbracket^{(G,\omega)}=2n(\omega(W)+1)\omega(W^{\prime})\geq 2n\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}\omega(W^{\prime})\]
and thus
\[n\big{(}\left\|\boldsymbol{x}\right\|_{\infty}+1\big{)}\frac{1}{k}\leq\frac{1 }{2\omega(W^{\prime})}.\]
Thus by (6.AO),
\[\left\|\widetilde{\mathfrak{N}}^{(n)}(G,\boldsymbol{x})-\widetilde{\mathfrak{ N}}^{(n,k)}(G,\boldsymbol{x})\right\|_{\infty}\leq\frac{1}{2\omega(W^{ \prime})}.\] (6.AQ)
Now we let \(\mathsf{gnn\text{-}eval}(x,y)\) be the r-expression obtained from the r-expression \(\mathsf{gnn\text{-}eval}(y,y^{\prime},x,y^{\prime\prime})\) of Claim 4 by substituting \(\mathsf{ord}\) for \(y\), \(\mathsf{err}\) for \(y^{\prime}\), \(y\) for \(y^{\prime\prime}\), and \(W^{\prime}\) for \(W\). Then by (6.AP), we have
\[\left\|\widetilde{\mathfrak{N}}^{(n,k)}(G,\boldsymbol{x})(v)-\big{(}\langle\!\langle\mathsf{gnn\text{-}eval}\rangle\!\rangle^{(G,\omega)}(v,0),\ldots,\langle\!\langle\mathsf{gnn\text{-}eval}\rangle\!\rangle^{(G,\omega)}(v,q^{(n)}-1)\big{)}\right\|_{\infty}\leq 2^{-\omega(W^{\prime})}\leq\frac{1}{2\omega(W^{\prime})}.\]
Combined with (6.AQ), this yields
\[\left\|\widetilde{\mathfrak{N}}^{(n)}(G,\boldsymbol{x})(v)-\big{(}\langle\!\langle\mathsf{gnn\text{-}eval}\rangle\!\rangle^{(G,\omega)}(v,0),\ldots,\langle\!\langle\mathsf{gnn\text{-}eval}\rangle\!\rangle^{(G,\omega)}(v,q^{(n)}-1)\big{)}\right\|_{\infty}\leq\frac{1}{\omega(W^{\prime})},\]
that is, the desired inequality (6.D).
## 7 A Converse
The main result of this section is a converse of Corollary 6.3. For later reference, we prove a slightly more general lemma that not only applies to queries over labelled graphs, that is, graphs with Boolean signals, but actually to queries over graphs with integer signals within some range.
**Lemma 7.1**.: _Let \(U_{1},\ldots,U_{p}\) be function variables of type \(\mathtt{v}\to\mathtt{n}\), and let \(\varphi(x)\) be a \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}\)-formula that contains no relation or function variables except possibly the \(U_{i}\). Then there is a polynomial-size bounded-depth family \(\mathcal{N}\) of rational piecewise-linear GNNs of input dimension \(p\) such that for all graphs \(G\) of order \(n\) and all assignments \(\omega\) over \(G\) the following holds. Let \(\boldsymbol{x}\in\mathcal{S}_{p}(G)\) be the signal defined by_

\[\boldsymbol{x}(v)\coloneqq\big{(}\omega(U_{1})(v),\ldots,\omega(U_{p})(v)\big{)}. \tag{7.A}\]

_Assume that \(\omega(U_{i})(v)<n\) for all \(i\in[p]\) and \(v\in V(G)\). Then for all \(v\in V(G)\), \(\widetilde{\mathcal{N}}(G,\boldsymbol{x})(v)\in\{0,1\}\) and_

\[\widetilde{\mathcal{N}}(G,\boldsymbol{x})(v)=1\iff(G,\omega)\vDash\varphi(v).\]
_Furthermore, all GNNs in \(\mathcal{N}\) only use \(\mathrm{lsig}\)-activations and \(\mathsf{SUM}\)-aggregation._
In the following, we will use the following more suggestive notation for the setting of the lemma: for a signal \(\boldsymbol{\iota}:V(G)\to\{0,\ldots,n-1\}^{p}\), we write

\[(G,\boldsymbol{\iota})\vDash\varphi(v)\]

if \((G,\omega)\vDash\varphi(v)\) for some assignment \(\omega\) with \(\omega(U_{i})(v)=\boldsymbol{\iota}(v)_{i}\) for all \(i\in[p]\) and \(v\in V(G)\). This is reasonable because \(\varphi\) only depends on the assignment to the \(U_{i}\) and to \(x\). Thus if \((G,\omega)\vDash\varphi(v)\) for some such assignment \(\omega\), then \((G,\omega^{\prime})\vDash\varphi(v)\) for all assignments \(\omega^{\prime}\) with \(\omega^{\prime}(U_{i})(v)=\boldsymbol{\iota}(v)_{i}\) for all \(i\in[p]\) and \(v\in V(G)\).
To prove Lemma 7.1, we need the following.
**Lemma 7.2**.: _Let \(m,n\in\mathbb{N}\). Then there is an FNN \(\mathfrak{F}\) of input dimension \(m\) and output dimension \(m\cdot n\) such that for all \(\vec{x}=(x_{0},\ldots,x_{m-1})\in\{0,\ldots,n-1\}^{m}\) the following holds. Suppose that \(\mathfrak{F}(\vec{x})=\vec{y}=(y_{0},\ldots,y_{mn-1})\). Then for \(k=in+j\), \(0\leq i<m,0\leq j<n\),_
\[y_{k}=\begin{cases}1&\text{if $j=x_{i}$},\\ 0&\text{otherwise}.\end{cases}\]
\(\mathfrak{F}\) _has size \(O(mn)\), depth \(2\), and it only uses \(\operatorname{lsig}\) activations._
**Example 7.3**.: Suppose that \(m=3,n=4\), and \(\vec{x}=(1,3,0)\). Then we want \(\mathfrak{F}(\vec{x})\) to be
\[(0,1,0,0,\ 0,0,0,1,\ 1,0,0,0).\]
Proof of Lemma 7.2.: Observe that the function \(f(x)=\operatorname{lsig}(x+1)-\operatorname{lsig}(x)\) satisfies \(f(0)=1\) and \(f(k)=0\) for all integers \(k\neq 0\). We design the network such that
\[y_{in+j}=\operatorname{lsig}(x_{i}-j+1)-\operatorname{lsig}(x_{i}-j).\]
On the middle layer we compute the values \(\operatorname{lsig}(x_{i}-j+1)\) and \(\operatorname{lsig}(x_{i}-j)\) for all \(i,j\).
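The construction is easy to check numerically; the following Python sketch (our own illustration, with \(\operatorname{lsig}\) realised as clamping to \([0,1]\)) reproduces Example 7.3:

```python
import numpy as np

def lsig(x):
    # linearized sigmoid: the identity on [0, 1], clamped outside
    return np.clip(x, 0.0, 1.0)

def one_hot_fnn(x_vec, n):
    """The network of Lemma 7.2: maps x in {0,...,n-1}^m to the
    concatenation of the m one-hot encodings of its entries.  In the
    actual depth-2 network the middle layer computes lsig(x_i - j + 1)
    and lsig(x_i - j); the output layer takes their differences."""
    m = len(x_vec)
    out = np.zeros(m * n)
    for i, xi in enumerate(x_vec):
        for j in range(n):
            # f(x) = lsig(x + 1) - lsig(x) is 1 at x = 0 and 0 at every
            # other integer, so out[i*n + j] = 1 iff j = x_i
            out[i * n + j] = lsig(xi - j + 1) - lsig(xi - j)
    return out

# Example 7.3: m = 3, n = 4, x = (1, 3, 0)
assert one_hot_fnn([1, 3, 0], 4).tolist() == [0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
```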
Proof of Lemma 7.1.: Let us fix an \(n\in\mathbb{N}\). We need to define a GNN \(\mathfrak{N}\) of size polynomial in \(n\) and of depth only depending on \(\varphi\), but not on \(n\), such that for every graph \(G\) of order \(n\), every signal \(\boldsymbol{\iota}:V(G)\to\{0,\ldots,n-1\}^{p}\), and every \(v\in V(G)\) it holds that \(\widetilde{\mathfrak{N}}(G,\boldsymbol{\iota})(v)\in\{0,1\}\) and

\[(G,\boldsymbol{\iota})\vDash\varphi(v)\iff\widetilde{\mathfrak{N}}(G,\boldsymbol{\iota})(v)=1. \tag{7.B}\]
We will prove (7.B) by induction on the formula \(\varphi(x)\). Let us understand the structure of this formula. The formula uses two vertex variables \(x_{1},x_{2}\). We usually refer to them as \(x,x^{\prime}\), with the understanding that if \(x=x_{i}\) then \(x^{\prime}=x_{3-i}\). In addition, the formula uses an arbitrary finite set of number variables, say, \(\{y_{1},\ldots,y_{\ell}\}\). When we want to refer to any of these variables without specifying a particular \(y_{i}\), we use the notations like \(y,y^{\prime},y^{j}\).
By a slight extension of Lemma 3.2 to a setting where we allow function variables, but require them to take values smaller than the order of the input structure, all subterms of \(\varphi(x)\) are polynomially bounded in the order \(n\) of the input graph. Thus in every
counting subterm \(\#(x^{\prime},y^{1}<\theta_{1},\ldots,y^{k}<\theta_{k}).\big{(}E(x,x^{\prime}) \wedge\psi\big{)}\) or \(\#(y^{1}<\theta_{1},\ldots,y^{k}<\theta_{k}).\psi\), we may replace the \(\theta_{i}\) by a fixed closed term \(\theta\coloneqq(\mathsf{ord}+1)^{r}\) for some constant \(r\in\mathbb{N}\) and rewrite the counting terms as \(\#(x^{\prime},y^{1}<\theta,\ldots,y^{k}<\theta).\big{(}E(x,x^{\prime})\wedge y ^{1}<\theta_{1}\wedge\ldots\wedge y^{k}<\theta_{k}\wedge\psi\big{)}\) and \(\#(y^{1}<\theta,\ldots,y^{k}<\theta).\big{(}y^{1}<\theta_{1}\wedge\ldots\wedge y ^{k}<\theta_{k}\wedge\psi\big{)}\), respectively. We fix \(\theta\) for the rest of the proof and assume that it is used as the bound in all counting terms appearing in \(\varphi\).
Arguably the most important building blocks of the formula \(\varphi\) are subterms of the form
\[\#(x^{\prime},y^{1}<\theta,\ldots,y^{k}<\theta).\big{(}E(x,x^{\prime})\wedge \psi\big{)}. \tag{7.C}\]
Let us call these the _neighbourhood terms_ of \(\varphi\). Note that the only guards available in our vocabulary of graphs are \(E(x,x^{\prime})\) and \(E(x^{\prime},x)\), and since we are dealing with undirected graphs these two are equivalent. This is why we always assume that the guards \(\gamma\) of subterms \(\#(x^{\prime},y_{1}<\theta,\ldots,y_{k}<\theta).(\gamma\wedge\ldots)\) are of the form \(E(x,x^{\prime})\). Furthermore, we may assume that atoms \(E(x,x^{\prime})\) only appear as guards of neighbourhood terms. This assumption is justified by the observation that atoms \(E(x,x^{\prime})\) must always occur within some neighbourhood term (otherwise both \(x\) and \(x^{\prime}\) would occur freely in \(\varphi\)), and it would make no sense to have either \(E(x,x^{\prime})\) or its negation in the subformula \(\psi\) in (7.C) unless it appeared within a neighbourhood term of \(\psi\). We may also assume that \(\varphi\) has no atomic subformulas \(E(x,x)\), because they always evaluate to false (in the undirected simple graphs we consider), or \(x=x\), because they always evaluate to true.
In the following, we will use the term "expression" to refer to subformulas and subterms of \(\varphi\), and we denote expressions by \(\xi\). A _vertex-free expression_ is an expression with no free vertex variables. A _vertex expression_ is an expression with exactly one free vertex variable \(x\) (so \(\varphi=\varphi(x)\) itself is a vertex expression), and an _edge expression_ is an expression with two free vertex variables. This terminology is justified by the observation that edge expressions must be guarded, so the two free variables must be interpreted by the endpoints of an edge. The most important edge formulas are the formulas \(E(x,x^{\prime})\wedge\psi\) appearing within the neighbourhood terms.
To simplify the presentation, let us fix a graph \(G\) of order \(n\) and a signal \(\boldsymbol{\iota}:V(G)\to\{0,\ldots,n-1\}^{p}\) in the following. Of course the GNN we shall define will not depend on \(G\) or \(\boldsymbol{\iota}\); it will only depend on \(n\) and \(\varphi\).
Recall that all subterms of \(\varphi\) take values polynomially bounded in \(n\). Let \(m\in\mathbb{N}\) be polynomial in \(n\) such that all subterms of \(\varphi\) take values strictly smaller than \(m\). Let \(M\coloneqq\{0,\ldots,m-1\}\). When evaluating \(\varphi\), we only need to consider assignments that map the number variables \(y_{1},\ldots,y_{\ell}\) appearing in \(\varphi\) to values in \(M\) and that respect \(\boldsymbol{\iota}\), that is, satisfy \(\omega(U_{i})(v)=\boldsymbol{\iota}(v)_{i}\) for all \(i\in[p]\) and \(v\in V(G)\). Then every vertex-free formula \(\psi\) defines a relation \(S_{\psi}\subseteq M^{\ell+1}\) consisting of all tuples \((b,a_{1},\ldots,a_{\ell})\in M^{\ell+1}\) such that \((G,\omega)\vDash\psi\) for all such assignments \(\omega\) with \(\omega(y_{j})=a_{j}\) for all \(j\in[\ell]\). (The first coordinate \(b\) is just a dummy coordinate that will allow us to work with \((\ell+1)\)-ary relations throughout.) Every vertex-free term \(\theta\) defines a relation \(S_{\theta}\subseteq M^{\ell+1}\) consisting of all tuples \((b,a_{1},\ldots,a_{\ell})\in M^{\ell+1}\) such that \(\llbracket\theta\rrbracket^{(G,\omega)}=b\) for all such assignments \(\omega\) with \(\omega(y_{j})=a_{j}\) for all \(j\in[\ell]\). Similarly, every vertex expression \(\xi\) defines a relation \(S_{\xi}(v)\subseteq M^{\ell+1}\) for every \(v\in V(G)\), and every edge expression \(\xi\) defines a relation \(S_{\xi}(v,v^{\prime})\subseteq M^{\ell+1}\) for every pair \((v,v^{\prime})\in V(G)^{2}\).
Let \(\widetilde{m}\coloneqq m^{\ell+1}\) and \(\widetilde{M}\coloneqq\{0,\ldots,\widetilde{m}-1\}\). Let \(\langle\cdot\rangle:M^{\ell+1}\to\widetilde{M}\) be the bijection defined by \(\langle(a_{0},\ldots,a_{\ell})\rangle=\sum_{i=0}^{\ell}a_{i}m^{i}\). Note that \(\langle\cdot\rangle\) maps relations \(R\subseteq M^{\ell+1}\) to subsets \(\langle R\rangle\subseteq\widetilde{M}\), which we may also view as vectors in \(\{0,1\}^{\widetilde{m}}\): for \(i\in\widetilde{M}\) we let \(\langle R\rangle_{i}=1\) if \(i\in\langle R\rangle\) and \(\langle R\rangle_{i}=0\) otherwise.
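For concreteness, a small Python sketch of this base-\(m\) encoding and of the passage from relations to indicator vectors (the helper names are ours):

```python
def encode(tup, m):
    """<(a_0, ..., a_l)> = sum_i a_i * m**i, a bijection from M^{l+1}
    onto {0, ..., m**(l+1) - 1}."""
    return sum(a * m ** i for i, a in enumerate(tup))

def decode(j, m, length):
    """Inverse of encode: read off the base-m digits of j."""
    return tuple((j // m ** i) % m for i in range(length))

def indicator(R, m, length):
    """View a relation R, a set of tuples in M^length, as a 0/1 vector
    of dimension m**length."""
    vec = [0] * m ** length
    for tup in R:
        vec[encode(tup, m)] = 1
    return vec

assert decode(encode((2, 0, 1), 3), 3, 3) == (2, 0, 1)
```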
Thus every vertex-free expression \(\xi\) defines a vector \(\mathbf{x}_{\xi}\coloneqq\langle S_{\xi}\rangle\in\{0,1\}^{\widetilde{m}}\). Every vertex expression defines an \(\widetilde{m}\)-ary Boolean signal \(\mathbf{x}_{\xi}\) defined by
\[\mathbf{x}_{\xi}(v)\coloneqq\left\langle S_{\xi}(v)\right\rangle.\]
Every edge expression \(\xi\) defines an "edge signal" \(\mathbf{y}_{\xi}\) defined by
\[\mathbf{y}_{\xi}(v,v^{\prime})\coloneqq\left\langle S_{\xi}(v,v^{\prime})\right\rangle.\]
Note that, formally, \(\mathbf{y}_{\xi}(v,v^{\prime})\) is defined for all pairs \(v,v^{\prime}\) and not only for the pairs of endpoints of edges.
Assume that the vertex expressions are \(\xi^{(1)},\ldots,\xi^{(d)}\), ordered in such a way that if \(\xi^{(i)}\) is a subexpression of \(\xi^{(j)}\) then \(i<j\). Then \(\xi^{(d)}=\varphi\). Our GNN \(\mathfrak{N}\) will have \(d+1\) layers \(\mathfrak{L}^{(1)},\ldots,\mathfrak{L}^{(d+1)}\). For \(t\in[d]\), the layer \(\mathfrak{L}^{(t)}\) will have input dimension \(p_{t-1}\coloneqq p+(t-1)\widetilde{m}\) and output dimension \(p_{t}\coloneqq p+t\widetilde{m}\). Layer \(\mathfrak{L}^{(d+1)}\) will have input dimension \(p_{d}\) and output dimension \(p_{d+1}\coloneqq 1\). Note that \(p_{0}=p\). So our GNN \(\mathfrak{N}=(\mathfrak{L}^{(1)},\ldots,\mathfrak{L}^{(d+1)})\) will have input dimension \(p\) and output dimension \(1\), which is exactly what we need.
Let \(\mathbf{x}^{(0)}\coloneqq\mathbf{\iota}\) be the input signal, and for every \(t\in[d+1]\), let \(\mathbf{x}^{(t)}\coloneqq\widetilde{\mathfrak{L}}^{(t)}(G,\mathbf{x}^{(t-1)})\). We shall define the layers in such a way that for all \(t\in[d]\), all \(v\in V(G)\) we have
\[\mathbf{x}^{(t)}(v)=\mathbf{\iota}(v)\mathbf{x}_{\xi^{(1)}}(v)\ldots\mathbf{x}_{\xi^{(t)}}(v). \tag{7.D}\]
Furthermore,
\[\boldsymbol{x}^{(d+1)}(v)=\begin{cases}1&\text{if }(G,\boldsymbol{\iota})\vDash\varphi(v),\\ 0&\text{otherwise.}\end{cases} \tag{7.E}\]
So the GNN will take care of the vertex expressions. We also need to take care of the vertex-free expressions and the edge expressions. For vertex-free expressions, this is easy. Observe that a vertex-free expression contains no vertex variables at all, free or bound, because once we introduce a vertex variable the only way to bind it is by a neighbourhood term, and such a term always leaves one vertex variable free. Thus the value of a vertex-free expression does not depend on the input graph \(G\), but only on its order \(n\), the built-in numerical relations, and the integer arithmetic that is part of the logic. This means that for a vertex-free expression \(\xi\) the relation \(S_{\xi}\) is "constant" and can be treated like a built-in numerical relation that can be hardwired into the GNN. Dealing with edge expressions is more difficult. We will handle them when dealing with the neighbourhood terms.
Let us turn to the vertex expressions. Let \(t\in[d]\), and let \(\xi\coloneqq\xi^{(t)}\). We distinguish between several cases depending on the shape of \(\xi\).
_Case 1:_\(\xi=U_{i}(x)\) for some \(i\in[p]\).
Observe that for \(v\in V(G)\) we have
\[S_{\xi}(v)=\{\boldsymbol{\iota}(v)_{i}\}\times M^{\ell}.\]
Thus \(\boldsymbol{x}_{\xi}(v)_{k}=1\) if \(k=\boldsymbol{\iota}(v)_{i}+\sum_{j=1}^{\ell}a_{j}m^{j}\) for some \((a_{1},\ldots,a_{\ell})\in M^{\ell}\) and \(\boldsymbol{x}_{\xi}(v)_{k}=0\) otherwise. Using Lemma 7.2 in the special case \(m=1\), we can design an FNN \(\mathfrak{F}_{1}\) of input dimension \(1\) and output dimension \(m\) that maps \(\boldsymbol{\iota}(v)_{i}\) to \((0,\ldots,0,1,0,\ldots,0)\in\{0,1\}^{m}\) with a \(1\) at index \(\boldsymbol{\iota}(v)_{i}\). We put another layer of \(\widetilde{m}\) output nodes on top of this and connect the node \(k=\sum_{j=0}^{\ell}a_{j}m^{j}\) on this layer to the output node of \(\mathfrak{F}_{1}\) with index \(a_{0}\) by an edge of weight \(1\). All nodes have bias \(0\) and use lsig activations. The resulting FNN \(\mathfrak{F}_{2}\) has input dimension \(1\) and output dimension \(\widetilde{m}\), and it maps an input \(x\) to the vector \(\left\langle\{x\}\times M^{\ell}\right\rangle\in\{0,1\}^{\widetilde{m}}\). Thus in particular, it maps \(\boldsymbol{\iota}(v)_{i}\) to \(\left\langle S_{\xi}(v)\right\rangle\). Padding this FNN with additional input and output nodes, we obtain an FNN \(\mathfrak{F}_{3}\) of input dimension \(p_{t-1}+1\) and output dimension \(p_{t}=p_{t-1}+\widetilde{m}\) that maps \((x_{1},\ldots,x_{p_{t-1}+1})\) to \((x_{1},\ldots,x_{p_{t-1}},\mathfrak{F}_{2}(x_{i}))\). We use \(\mathfrak{F}_{3}\) to define the combination function \(\mathsf{comb}^{(t)}:\mathbb{R}^{p_{t-1}+1}\to\mathbb{R}^{p_{t}}\) of the layer \(\mathfrak{L}^{(t)}\). We define the message function \(\mathsf{msg}^{(t)}:\mathbb{R}^{2p_{t-1}}\to\mathbb{R}\) by \(\mathsf{msg}^{(t)}(\boldsymbol{x})\coloneqq 0\) for all \(\boldsymbol{x}\), and we use sum aggregation. Then clearly, \(\mathfrak{L}^{(t)}\) computes the transformation \((G,\boldsymbol{x}^{(t-1)})\mapsto(G,\boldsymbol{x}^{(t)})\) satisfying (7.D).
_Case 2: \(\xi=\xi^{\prime}\ast\xi^{\prime\prime}\), where \(\xi^{\prime},\xi^{\prime\prime}\) are vertex expressions and either \(\ast\in\{+,\cdot,\leq\}\) and \(\xi^{\prime},\xi^{\prime\prime}\) are terms or \(\ast=\wedge\) and \(\xi,\xi^{\prime},\xi^{\prime\prime}\) are formulas._
We could easily (though tediously) handle this case by explicitly constructing the appropriate FNNs, as we did in Case 1. However, we will give a general argument that will help us through the following cases as well. As the encoding \(R\subseteq M^{\ell+1}\mapsto\langle R\rangle\subseteq\widetilde{M}\) and the decoding \(\langle R\rangle\subseteq\widetilde{M}\mapsto R\subseteq M^{\ell+1}\) are easily definable by arithmetical \(\mathsf{FO}+\mathsf{C}\)-formulas, the transformation
\[\left(\langle S_{\xi^{\prime}}(v)\rangle,\langle S_{\xi^{\prime\prime}}(v) \rangle\right)\mapsto\langle S_{\xi^{\prime\ast}\xi^{\prime\prime}}(v)\rangle,\]
which can be decomposed as
\[\left(\langle S_{\xi^{\prime}}(v)\rangle,\langle S_{\xi^{\prime\prime}}(v)\rangle\right)\mapsto\left(S_{\xi^{\prime}}(v),S_{\xi^{\prime\prime}}(v)\right)\mapsto S_{\xi^{\prime}\ast\xi^{\prime\prime}}(v)\mapsto\langle S_{\xi^{\prime}\ast\xi^{\prime\prime}}(v)\rangle,\]
is also definable by an arithmetical \(\mathsf{FO}+\mathsf{C}\)-formula, using Lemmas 3.9 and 3.14 for the main step \(\left(S_{\xi^{\prime}}(v),S_{\xi^{\prime\prime}}(v)\right)\mapsto S_{\xi^{\prime}\ast\xi^{\prime\prime}}(v)\). Hence by Corollary 3.5, it is computable by a threshold circuit \(\mathfrak{C}^{\ast}\) of bounded depth (only depending on \(\ast\)) and polynomial size. Hence by Lemma 2.6 it is computable by an FNN \(\mathfrak{F}^{\ast}\) of bounded depth and polynomial size. Note that for every vertex \(v\), this FNN \(\mathfrak{F}^{\ast}\) maps \(\left(\boldsymbol{x}_{\xi^{\prime}}(v),\boldsymbol{x}_{\xi^{\prime\prime}}(v)\right)\) to \(\boldsymbol{x}_{\xi}(v)\). We have \(\xi^{\prime}=\xi^{(t^{\prime})}\) and \(\xi^{\prime\prime}=\xi^{(t^{\prime\prime})}\) for some \(t^{\prime},t^{\prime\prime}<t\). Based on \(\mathfrak{F}^{\ast}\) we construct an FNN \(\mathfrak{F}\) of input dimension \(p_{t-1}+1\) and output dimension \(p_{t}=p_{t-1}+\widetilde{m}\) such that for \(\boldsymbol{u}\in\{0,\ldots,n-1\}^{p}\) and \(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{t-1}\in\mathbb{R}^{\widetilde{m}}\), \(x\in\mathbb{R}\)
\[\mathfrak{F}(\boldsymbol{u}\boldsymbol{x}_{1}\ldots\boldsymbol{x}_{t-1}x)= \boldsymbol{u}\boldsymbol{x}_{1}\ldots\boldsymbol{x}_{t-1}\mathfrak{F}^{\ast} (\boldsymbol{x}_{t^{\prime}},\boldsymbol{x}_{t^{\prime\prime}}).\]
Continuing as in Case 1, we use \(\mathfrak{F}\) to define the combination function \(\mathsf{comb}^{(t)}\) of the layer \(\mathfrak{L}^{(t)}\), and again we use a trivial message function.
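For the connective \(\ast=\wedge\) the map computed by \(\mathfrak{F}^{\ast}\) is particularly simple; a minimal sketch (our own illustration — the arithmetic cases additionally decode the tuples, combine them, and re-encode):

```python
def conj_indicator(vec1, vec2):
    """S_{xi' AND xi''}(v) is the intersection of S_{xi'}(v) and
    S_{xi''}(v), so on indicator vectors the map realised by the FNN F*
    is just a coordinatewise AND."""
    return [a & b for a, b in zip(vec1, vec2)]
```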
_Case 3:_: \(\xi=\neg\xi^{\prime}\), where \(\xi^{\prime}\) is vertex formula.
This case can be handled as Case 2.
_Case 4:_: \(\xi=\xi^{\prime}\star\xi^{\prime\prime}\), where \(\xi^{\prime}\) is a vertex expression, \(\xi^{\prime\prime}\) is a vertex-free expression, and either \(\star\in\{+,\cdot,=,\leq\}\) and \(\xi^{\prime},\xi^{\prime\prime}\) are terms or \(\star=\wedge\) and \(\xi,\xi^{\prime},\xi^{\prime\prime}\) are formulas.
As in Case 2, we construct a threshold circuit \(\mathfrak{C}^{*}\) of bounded depth and polynomial size that computes the mapping \(\big{(}\langle S_{\xi^{\prime}}(v)\rangle,\langle S_{\xi^{\prime\prime}}\rangle\big{)}\mapsto\langle S_{\xi^{\prime}\ast\xi^{\prime\prime}}(v)\rangle\). As we argued above, the relation \(S_{\xi^{\prime\prime}}\) for the vertex-free expression \(\xi^{\prime\prime}\) and hence the vector \(\big{\langle}S_{\xi^{\prime\prime}}\big{\rangle}\) do not depend on the input graph. Hence we can simply hardwire \(\big{\langle}S_{\xi^{\prime\prime}}\big{\rangle}\) into \(\mathfrak{C}^{*}\), which gives us a circuit that computes the mapping \(\langle S_{\xi^{\prime}}(v)\rangle\mapsto\langle S_{\xi^{\prime}\ast\xi^{\prime\prime}}(v)\rangle\). From this circuit we obtain an FNN \(\mathfrak{F}^{*}\) that computes the same mapping, and we can continue as in Case 2.
_Case 5:_: \(\xi=\#(y^{1}<\theta,\ldots,y^{k}<\theta).\psi\), where \(\psi\) is a vertex formula.
We argue as in Cases 2-4. We construct a threshold circuit that computes the mapping
\[\langle S_{\psi}(v)\rangle\mapsto\langle S_{\xi}(v)\rangle.\]
Turning this circuit into an FNN \(\mathfrak{F}^{*}\) that computes the same mapping, we can continue as in Case 2.
_Case 6:_: \(\xi=\#(x^{\prime},y^{1}<\theta,\ldots,y^{k}<\theta).\big{(}E(x,x^{\prime}) \wedge\psi\big{)}\), where \(\psi\) is an edge formula or a vertex formula.
Without loss of generality, we assume that \(\psi\) is an edge formula. If it is not, instead of \(\psi\) we take the conjunction of \(\psi\) with \(U_{1}(x)+U_{1}(x^{\prime})\geq 0\), which is always true. Moreover, we may assume that \(\psi\) contains no equality atoms \(x=x^{\prime}\), because the guard \(E(x,x^{\prime})\) forces \(x\) and \(x^{\prime}\) to be distinct. Thus \(\psi\) is constructed from vertex expressions and vertex-free expressions using \(+,\cdot,\leq\) to combine terms, \(\neg,\wedge\) to combine formulas, and counting terms \(\#((y^{\prime})^{1}<\theta,\ldots,(y^{\prime})^{k^{\prime}}<\theta).\psi^{\prime}\). Let \(\xi^{(t_{1})},\ldots,\xi^{(t_{s})}\) be the maximal (with respect to the inclusion order on expressions) vertex expressions occurring in \(\psi\). Assume that \(x\) is the free vertex variable of \(\xi^{(t_{1})},\ldots,\xi^{(t_{r})}\) and \(x^{\prime}\) is the free vertex variable of \(\xi^{(t_{r+1})},\ldots,\xi^{(t_{s})}\). Arguing as in Case 2 and Case 5, we can construct a threshold circuit \(\mathfrak{C}\) of bounded depth and polynomial size that computes the mapping
\[\big{(}\langle S_{\xi^{(t_{1})}}(v)\rangle,\ldots,\langle S_{\xi^{(t_{r})}}(v)\rangle,\langle S_{\xi^{(t_{r+1})}}(v^{\prime})\rangle,\ldots,\langle S_{\xi^{(t_{s})}}(v^{\prime})\rangle\big{)}\mapsto\langle S_{\psi}(v,v^{\prime})\rangle.\]
To simplify the notation, let us assume that \(y^{i}=y_{i}\). Thus the free variables of \(\xi\) are among \(x,y_{k+1},\ldots,y_{\ell}\), and we may write \(\xi(x,y_{k+1},\ldots,y_{\ell})\). Let
\[\zeta(x,x^{\prime},y_{k+1},\ldots,y_{\ell})\coloneqq\#(y_{1}<\theta,\ldots,y_ {k}<\theta).\psi.\]
and observe that for all \(v\in V(G)\) and \(a_{k+1},\ldots,a_{\ell}\in M\) we have
\[\llbracket\xi\rrbracket^{G}\left(v,a_{k+1},\ldots,a_{\ell}\right)=\sum_{v^{\prime }\in N(v)}\llbracket\zeta\rrbracket^{G}\left(v,v^{\prime},a_{k+1},\ldots,a_{ \ell}\right).\]
For \(v,v^{\prime}\in V(G)\), let

\[R_{\zeta}(v,v^{\prime})\coloneqq\{(a_{0},\ldots,a_{\ell-k-1},b)\;\big{|}\;b<\llbracket\zeta\rrbracket^{G}\left(v,v^{\prime},a_{0},\ldots,a_{\ell-k-1}\right)\}\subseteq M^{\ell-k+1},\]

and slightly abusing notation, let

\[\big{\langle}R_{\zeta}(v,v^{\prime})\big{\rangle}=\left\{\sum_{i=0}^{\ell-k}a_{i}m^{i}\;\middle|\;(a_{0},\ldots,a_{\ell-k})\in R_{\zeta}(v,v^{\prime})\right\}\subseteq\big{\{}0,\ldots,m^{\ell-k+1}-1\big{\}}\]
which we may also view as a vector in \(\{0,1\}^{m^{\ell-k+1}}\). Again arguing via \(\mathsf{FO}+\mathsf{C}\), we can construct a threshold circuit \(\mathfrak{C}^{\prime}\) that computes the transformation
\[\big{(}\langle S_{\xi^{(t_{1})}}(v)\rangle,\ldots,\langle S_{\xi^{(t_{r})}}(v)\rangle,\langle S_{\xi^{(t_{r+1})}}(v^{\prime})\rangle,\ldots,\langle S_{\xi^{(t_{s})}}(v^{\prime})\rangle\big{)}\mapsto\langle R_{\zeta}(v,v^{\prime})\rangle.\]
Using Lemma 2.6, we can turn \(\mathfrak{C}^{\prime}\) into an FNN \(\mathfrak{F}^{\prime}\) that computes the same Boolean function. Let \(\boldsymbol{c}(v,v^{\prime})=(c_{0},\ldots,c_{m^{\ell-k}-1})\in M^{m^{\ell-k}}\) be the vector defined as follows: for \((a_{0},\ldots,a_{\ell-k-1})\in M^{\ell-k}\) and \(j=\sum_{i=0}^{\ell-k-1}a_{i}m^{i}\) we let
\[c_{j}\coloneqq\llbracket\zeta\rrbracket^{G}\left(v,v^{\prime},a_{0},\ldots,a_ {\ell-k-1}\right).\]
Then
\[c_{j} =\big{|}\big{\{}b\;\big{|}\;b<\llbracket\zeta\rrbracket^{G}\left(v,v^{\prime},a_{0},\ldots,a_{\ell-k-1}\right)\big{\}}\big{|}=\big{|}\big{\{}b\;\big{|}\;(a_{0},\ldots,a_{\ell-k-1},b)\in R_{\zeta}(v,v^{\prime})\big{\}}\big{|}=\sum_{b\in M}\langle R_{\zeta}(v,v^{\prime})\rangle_{j+bm^{\ell-k}}.\]
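The displayed identity is easy to check mechanically; a small Python sketch (our own helper, with `width` standing for \(\ell-k\)):

```python
def counts_from_indicator(R_vec, m, width):
    """Given the 0/1 vector <R_zeta(v, v')> of length m**(width + 1),
    recover c_j = |{b : (a_0, ..., a_{width-1}, b) in R_zeta(v, v')}|
    exactly as in the displayed sum over b."""
    return [sum(R_vec[j + b * m ** width] for b in range(m))
            for j in range(m ** width)]
```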
Thus an FNN of depth \(1\) with input dimension \(m^{\ell-k+1}\) and output dimension \(m^{\ell-k}\) can transform \(\langle R_{\zeta}(v,v^{\prime})\rangle\) into \(\frac{1}{m}\boldsymbol{c}(v,v^{\prime})\). We take the factor \(\frac{1}{m}\) because \(0\leq c_{j}<m\) and thus \(\frac{1}{m}\boldsymbol{c}(v,v^{\prime})\in[0,1]^{m^{\ell-k}}\), and we can safely use lsig-activation. We add this FNN on top of \(\mathfrak{F}^{\prime}\) and obtain an FNN \(\mathfrak{F}^{\prime\prime}\) that computes the transformation
\[\big{(}\langle S_{\xi^{(t_{1})}}(v)\rangle,\ldots,\langle S_{\xi^{(t_{r})}}(v)\rangle,\langle S_{\xi^{(t_{r+1})}}(v^{\prime})\rangle,\ldots,\langle S_{\xi^{(t_{s})}}(v^{\prime})\rangle\big{)}\mapsto\frac{1}{m}\boldsymbol{c}(v,v^{\prime}).\]
The message function \(\mathsf{msg}^{(t)}\) of the GNN layer \(\mathfrak{L}^{(t)}\) computes the function

\[\big{(}\boldsymbol{x}^{(t-1)}(v),\boldsymbol{x}^{(t-1)}(v^{\prime})\big{)}\mapsto\frac{1}{m}\boldsymbol{c}(v,v^{\prime}),\]
which we can easily implement by an FNN based on \(\mathfrak{F}^{\prime\prime}\). Aggregating, we obtain the signal \(\varkappa\) such that
\[\varkappa(v)=\sum_{v^{\prime}\in N(v)}\mathsf{msg}^{(t)}(v,v^{\prime})=\frac{1 }{m}\sum_{v^{\prime}\in N(v)}\boldsymbol{c}(v,v^{\prime})\]
Thus for \((a_{0},\ldots,a_{\ell-k-1})\in M^{\ell-k}\) and \(j=\sum_{i=0}^{\ell-k-1}a_{i}m^{i}\) we have
\[\begin{split}\varkappa(v)_{j}&=\frac{1}{m}\sum_{v^{\prime}\in N(v)}\boldsymbol{c}(v,v^{\prime})_{j}\\ &=\frac{1}{m}\sum_{v^{\prime}\in N(v)}\llbracket\zeta\rrbracket^{G}(v,v^{\prime},a_{0},\ldots,a_{\ell-k-1})\\ &=\frac{1}{m}\llbracket\xi\rrbracket^{G}(v,a_{0},\ldots,a_{\ell-k-1}).\end{split}\]
Our final task will be to transform the vector \(\varkappa(v)\in[0,1]^{m^{\ell-k}}\) into the vector \(\left\langle S_{\xi}(v)\right\rangle\in\{0,1\}^{\widetilde{m}}\). In a first step, we transform \(\varkappa(v)\) into a vector \(\varkappa^{\prime}(v)\in M^{m^{\ell}}\) with entries

\[\varkappa^{\prime}(v)_{j}=\llbracket\xi\rrbracket^{G}(v,a_{k+1},\ldots,a_{\ell})\]

for \((a_{1},\ldots,a_{\ell})\in M^{\ell}\) and \(j=\sum_{i=0}^{\ell-1}a_{i+1}m^{i}\). We need to transform \(\varkappa^{\prime}(v)\) into \(\left\langle S_{\xi}(v)\right\rangle\in\{0,1\}^{\widetilde{m}}\), which for every \((a_{1},\ldots,a_{\ell})\in M^{\ell}\) has a single \(1\)-entry in position \(j=\sum_{i=0}^{\ell}a_{i}m^{i}\) for \(a_{0}=\llbracket\xi\rrbracket^{G}(v,a_{k+1},\ldots,a_{\ell})\) and \(0\)-entries in all positions \(j^{\prime}=\sum_{i=0}^{\ell}a_{i}m^{i}\) for \(a_{0}\neq\llbracket\xi\rrbracket^{G}(v,a_{k+1},\ldots,a_{\ell})\). We can use Lemma 7.2 for this transformation.
Thus we can construct an FNN \(\mathfrak{F}^{\prime\prime\prime}\) that transforms the output \(\varkappa(v)\) of the aggregation into \(\left\langle S_{\xi}(v)\right\rangle\). We define the combination function \(\mathsf{comb}^{(t)}:\mathbb{R}^{p_{t-1}+m^{\ell-k}}\to\mathbb{R}^{p_{t}}\) by \(\mathsf{comb}^{(t)}(\boldsymbol{x},\boldsymbol{z})\coloneqq(\boldsymbol{x},\mathfrak{F}^{\prime\prime\prime}(\boldsymbol{z})).\) Then
\[\mathsf{comb}^{(t)}\big{(}\boldsymbol{x}^{(t-1)}(v),\varkappa(v)\big{)}=\big{(}\boldsymbol{x}^{(t-1)}(v),\boldsymbol{x}_{\xi}(v)\big{)}.\]
Thus the layer \(\mathfrak{L}^{(t)}\) with message function \(\mathsf{msg}^{(t)}\) and combination function \(\mathsf{comb}^{(t)}\) satisfies (7.D).
All that remains is to define the last layer \(\mathfrak{L}^{(d+1)}\) satisfying (7.E). Since \(\xi^{(d)}=\varphi\), by (7.D) with \(t=d\), the vector \(\boldsymbol{x}_{\varphi}(v)=\left\langle S_{\varphi}(v)\right\rangle\) is the projection of \(\boldsymbol{x}^{(d)}(v)\) on the last \(\widetilde{m}\) entries. As \(\varphi\) has no free number variables, we have
\[\left\langle S_{\varphi}(v)\right\rangle=\begin{cases}\boldsymbol{1}&\text{if }(G,\boldsymbol{\iota})\vDash\varphi(v),\\ \boldsymbol{0}&\text{if }(G,\boldsymbol{\iota})\nvDash\varphi(v).\end{cases}\]
In particular, the last entry of \(\left\langle S_{\varphi}(v)\right\rangle\) and hence of \(\boldsymbol{x}^{(d)}(v)\) is \(1\) if \((G,\boldsymbol{\iota})\vDash\varphi(v)\) and \(0\) otherwise. Thus all we need to do on the last layer is project the output on the last entry.
This completes the construction.
The following theorem directly implies Theorem 1.1 stated in the introduction.
**Theorem 7.4**.: _Let \(\mathcal{G}\) be a unary query on \(\mathscr{GS}_{p}^{\mathrm{bool}}\). Then the following are equivalent:_

1. \(\mathcal{G}\) _is definable in_ \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}\)_._
2. _There is a polynomial-size bounded-depth family of rational piecewise-linear GNNs using only_ \(\operatorname{lsig}\)_-activations and_ \(\mathsf{SUM}\)_-aggregation that computes_ \(\mathcal{G}\)_._
3. _There is an rpl-approximable polynomial-weight bounded-depth family of GNNs that computes_ \(\mathcal{G}\)_._
Proof.: The implication \((1)\Longrightarrow(2)\) follows from Lemma 7.1 in the special case that the \(U_{i}\) are Boolean, that is, only take values in \(\{0,1\}\). We can then replace them by the unary relations \(P_{i}\) that we usually use to represent Boolean signals.
The implication \((2)\Longrightarrow(3)\) is trivial.
The implication \((3)\Longrightarrow(1)\) is Corollary 6.3.
_Remark 7.5_.: Like the earlier results, Lemma 7.1 and Theorem 7.4 also hold for GNNs with global readout and the logic \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}^{\mathrm{gc}}\). The proofs can easily be adapted.
_Remark 7.6_.: Let us finally address a question which we already raised at the end of Section 5. Is every unary query definable in \(\mathsf{GFO}+\mathsf{C}\) computable by a single rational piecewise-linear, or at least rpl-approximable, GNN? In other words: do we really need families of GNNs in Theorem 7.4, or could we just use a single GNN?
It has been observed in [26] that the answer to this question is no. Intuitively, the reason is that GNNs cannot express "alternating" queries like nodes having an even degree. To prove this, we analyse the behaviour of GNNs on stars \(S_{n}\) with \(n\) leaves, for increasing \(n\). The signal at the root node that the GNN computes is approximately piecewise polynomial as a function of \(n\). However, a function that is \(1\) for all even natural numbers \(n\) and \(0\) for all odd numbers is very far from polynomial.
## 8 Random Initialisation
A GNN with random initialisation receives finitely many random features together with its \(p\)-dimensional input signal. We assume that the random features are chosen independently and uniformly from the interval \([0,1]\). We could consider other distributions, like the normal distribution \(N(0,1)\), but in terms of expressiveness this makes no difference, and the uniform distribution is easiest to analyse. The random features at different vertices are chosen independently. As in [2], we always assume that GNNs with random initialisation have global readout.7
Footnote 7: There is no deeper reason for this choice, it is just that the results get cleaner this way. This is the same reason as in [2].
We denote the uniform distribution on \([0,1]\) by \(\mathcal{U}_{[0,1]}\), and for a graph \(G\) we write \(\boldsymbol{r}\sim\mathcal{U}_{[0,1]}^{r\times V(G)}\) to denote that \(\boldsymbol{r}\in\mathcal{S}_{r}(G)\) is obtained by picking the features \(\boldsymbol{r}(v)_{i}\) independently from \(\mathcal{U}_{[0,1]}\). Moreover, for a signal \(\boldsymbol{x}\in\mathcal{S}_{p}(G)\), by \(\boldsymbol{x}\boldsymbol{r}\) we denote the \((p+r)\)-dimensional signal with \(\boldsymbol{x}\boldsymbol{r}(v)=\boldsymbol{x}(v)\boldsymbol{r}(v)\). Formally, a \((p,q,r)\)-dimensional _GNN with ri_ is a GNN \(\mathfrak{R}\) with global readout of input dimension \(p+r\) and output dimension \(q\). It computes a random variable mapping pairs \((G,\boldsymbol{x})\in\mathcal{GS}_{p}\) to the space \(\mathcal{S}_{q}(G)\), which we view as
a product measurable space \(\mathbb{R}^{q\times V(G)}\) equipped with a Borel \(\sigma\)-algebra (or Lebesgue \(\sigma\)-algebra, this does not matter here). Abusing (but hopefully also simplifying) notation, we use \(\mathfrak{R}\) to denote a GNN that we interpret as a GNN with random initialisation, and we use \(\widetilde{\mathfrak{R}}\) to denote the random variable. Sometimes it is also convenient to write \(\widetilde{\mathfrak{R}}(G,\mathpzc{x})\coloneqq\big{(}G,\widetilde{\mathfrak{R}}(\mathpzc{x})\big{)}\). It is not hard to show that the mapping \(\widetilde{\mathfrak{R}}\) is measurable with respect to the Borel \(\sigma\)-algebras on the product spaces \(\mathcal{S}_{p+r}(G)=\mathbb{R}^{(p+r)\times V(G)}\) and \(\mathcal{S}_{q}(G)=\mathbb{R}^{q\times V(G)}\). Here we use that the activation functions of \(\mathfrak{R}\) are continuous. To define the probability distribution of the random variable \(\widetilde{\mathfrak{R}}\), for all \((G,\mathpzc{x})\in\mathcal{GS}_{p}\) and all events (that is, measurable sets) \(\mathcal{Y}\subseteq\mathcal{S}_{q}(G)\) we let
\[\Pr\big{(}\widetilde{\mathfrak{R}}(G,\mathpzc{x})\in\mathcal{Y}\big{)} \coloneqq\Pr_{\boldsymbol{r}\sim\mathcal{U}_{[0,1]}^{r\times V(G)}}\big{(}\widetilde{ \mathfrak{R}}(G,\mathpzc{x}\boldsymbol{r})\in\mathcal{Y}\big{)}, \tag{8.A}\]
where on the left-hand side we interpret \(\mathfrak{R}\) as a \((p,q,r)\)-dimensional GNN with ri and on the right-hand side just as an ordinary GNN of input dimension \((p+r)\) and output dimension \(q\).
For a \((p,q,r)\)-dimensional GNN with ri we call \(p\) the _input dimension_, \(q\) the _output dimension_, and \(r\) the _randomness dimension_.
Let \(\mathscr{G}\) be a unary query on \(\mathcal{GS}_{p}^{\mathrm{bool}}\). We say that a GNN with ri \(\mathfrak{R}\) of input dimension \(p\) and output dimension \(1\) _computes_ \(\mathscr{G}\) if for all \((G,\mathpzc{b})\in\mathcal{GS}_{p}^{\mathrm{bool}}\) and all \(v\in V(G)\) it holds that
\[\begin{cases}\Pr\big{(}\widetilde{\mathfrak{R}}(G,\mathpzc{b})(v)\geq\frac{2}{3}\big{)}\geq\frac{2}{3}&\text{if }\mathscr{G}(G,\mathpzc{b})(v)=1,\\ \Pr\big{(}\widetilde{\mathfrak{R}}(G,\mathpzc{b})(v)\leq\frac{1}{3}\big{)}\geq\frac{2}{3}&\text{if }\mathscr{G}(G,\mathpzc{b})(v)=0.\end{cases}\]
We can easily extend this definition to families \(\mathscr{R}=(\mathfrak{R}^{(n)})_{n\in\mathbb{N}}\) of GNNs with ri.
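To make the sampling semantics concrete, here is a minimal sketch (ours, not from the paper) of how a GNN with ri is evaluated and how the \(2/3\)-acceptance criterion above can be estimated empirically; `gnn` is a hypothetical stand-in for any GNN of input dimension \(p+r\) and output dimension \(1\) that closes over the graph \(G\):

```
import numpy as np

def run_with_ri(gnn, X, r, rng):
    """X: (n, p) input signal; returns the (n,) output signal for one draw."""
    R = rng.uniform(0.0, 1.0, size=(X.shape[0], r))   # fresh U[0,1] features per vertex
    return gnn(np.concatenate([X, R], axis=1))        # ordinary GNN on the extended signal

def accepts(gnn, X, r, v, trials=1000, seed=0):
    """Monte-Carlo estimate of Pr(output at vertex v >= 2/3)."""
    rng = np.random.default_rng(seed)
    hits = sum(run_with_ri(gnn, X, r, rng)[v] >= 2 / 3 for _ in range(trials))
    return hits / trials   # compare against the 2/3 threshold in the definition
```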
By a fairly standard probability amplification result, we can make the error probabilities exponentially small.
**Lemma 8.1**.: _Let \(\mathscr{G}\) be a unary query over \(\mathcal{GS}_{p}^{\mathrm{bool}}\) that is computable by a family \(\mathscr{R}=(\mathfrak{R}^{(n)})_{n\in\mathbb{N}}\) of GNNs with ri, and let \(\pi(X)\) be a polynomial. Then there is a family \(\mathscr{R}^{\prime}=((\mathfrak{R}^{\prime})^{(n)})_{n\in\mathbb{N}}\) of GNNs with ri such that the following holds._

1. \(\mathscr{R}^{\prime}\) _computes_ \(\mathscr{G}\)_, and for every_ \(n\)_, every_ \((G,\mathpzc{b})\in\mathcal{GS}_{p}^{\mathrm{bool}}\) _of order_ \(n\)_, and every_ \(v\in V(G)\)_,_ \[\begin{cases}\Pr\big{(}\widetilde{\mathscr{R}}^{\prime}(G,\mathpzc{b})(v)=1 \big{)}\geq 1-2^{-\pi(n)}&\text{if }\mathscr{G}(G,\mathpzc{b})(v)=1,\\ \Pr\big{(}\widetilde{\mathscr{R}}^{\prime}(G,\mathpzc{b})(v)=0\big{)}\geq 1-2^{-\pi(n)}& \text{if }\mathscr{G}(G,\mathpzc{b})(v)=0.\end{cases}\] (8.B)
2. _The weight of_ \((\mathfrak{R}^{\prime})^{(n)}\) _is polynomially bounded in the weight of_ \(\mathfrak{R}^{(n)}\) _and_ \(n\)_._
3. _The depth of_ \((\mathfrak{R}^{\prime})^{(n)}\) _is at most the depth of_ \(\mathfrak{R}^{(n)}\) _plus_ \(2\)_._
4. _The randomness dimension of_ \((\mathfrak{R}^{\prime})^{(n)}\) _is polynomially bounded in_ \(n\) _and the randomness dimension of_ \(\mathfrak{R}^{(n)}\)_._
Proof.: We just run sufficiently many copies of \(\mathscr{R}\) in parallel (polynomially many suffice) and then take a majority vote in the end, which is responsible for one of the additional layers. This way we obtain a family \(\mathscr{R}^{\prime\prime}\) that achieves
\[\begin{cases}\Pr\left(\widetilde{\mathscr{R}}^{\prime\prime}(G,\mathpzc{b})(v)\geq\frac{2}{3} \right)\geq 1-2^{-\pi(n)}&\text{if }\mathscr{G}(G,\mathpzc{b})(v)=1,\\ \Pr\left(\widetilde{\mathscr{R}}^{\prime\prime}(G,\mathpzc{b})(v)\leq\frac{1}{3}\right)\geq 1-2^{-\pi(n)}& \text{if }\mathscr{G}(G,\mathpzc{b})(v)=0.\end{cases}\]
To get the desired \(0,1\)-outputs, we apply the transformation \(\operatorname{lsig}(3x-1)\) to the output on an additional layer.
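For concreteness, the routine calculation behind "polynomially many suffice" can be made explicit via Hoeffding's inequality (our addition; the constants are one of several choices that work). If \(Z_{1},\ldots,Z_{m}\in\{0,1\}\) indicate errors of the \(m\) independent copies, each with \(\Pr(Z_{i}=1)\leq\frac{1}{3}\), then
\[\Pr\big{(}\text{majority errs}\big{)}\;\leq\;\Pr\Big{(}\tfrac{1}{m}\sum_{i=1}^{m}Z_{i}\geq\tfrac{1}{2}\Big{)}\;\leq\;e^{-2m(1/6)^{2}}\;=\;e^{-m/18},\]
so taking \(m\geq 18\ln(2)\,\pi(n)\), which is polynomial in \(n\), pushes the error probability below \(2^{-\pi(n)}\).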
**Lemma 8.2**.: _Let \(\mathscr{G}\) be a unary query over \(\mathcal{GS}_{p}^{\mathrm{bool}}\) that is computable by an rpl-approximable polynomial-weight, bounded-depth family \(\mathscr{R}\) of GNNs with ri. Then there is an order-invariant \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}^{\mathrm{gc}}\)-formula that defines \(\mathscr{G}\)._
Proof.: Suppose that \(\mathscr{R}=(\mathfrak{R}^{(n)})_{n\in\mathbb{N}}\), and for every \(n\), let \(r^{(n)}\) be the randomness dimension of \(\mathfrak{R}^{(n)}\). Viewed as a standard GNN, \(\mathfrak{R}^{(n)}\) has input dimension \(p+r^{(n)}\) and output dimension \(1\). By the previous lemma, we may assume that our family satisfies (8.B) for \(\pi(X)=3pX^{3}\).
Our first step is to observe that we can safely truncate the random numbers, which we assume to be randomly chosen from \([0,1]\), to \(O(n)\) bits. This follows easily from Corollary 6.13. Truncating the numbers to \(cn\) bits means that we replace the random signal \(\boldsymbol{\nu}\) by a \(\boldsymbol{\nu}^{\prime}\) such that \(\left\|\boldsymbol{\nu}-\boldsymbol{\nu}^{\prime}\right\|_{\infty}\leq 2^{-cn}\), and the corollary implies that if we choose \(c\) sufficiently large, we will approximate the original GNN up to an additive error of \(\frac{1}{10}\). Thus for some constant \(c\) we may assume that the random strings are not drawn uniformly from \([0,1]\), but from the set
\[U_{n}\coloneqq\left\{\left.\sum_{i=0}^{cn-1}a_{i}2^{-i-1}\ \right|\ a_{0},\ldots,a_{cn-1}\in\{0,1\}\right\}.\]
Let us denote the uniform distribution on this set by \(\mathscr{U}_{n}\). Hence for every \(n\), every \((G,\mathscr{O})\in\mathscr{SG}_{p}^{\operatorname{bool}}\) of order \(n\), and every \(v\in V(G)\),
\[\begin{cases}\Pr_{\mathpzc{r}\sim\mathscr{U}_{n}^{r^{(n)}\times V(G)}}\left( \widetilde{\mathscr{R}}(G,\mathscr{O}\mathpzc{r})(v)\geq\frac{9}{10}\right)\geq 1-2^{-\pi(n)}&\text{if } \mathscr{G}(G,\mathscr{O})(v)=1,\\ \Pr_{\mathpzc{r}\sim\mathscr{U}_{n}^{r^{(n)}\times V(G)}}\left(\widetilde{\mathscr{R}} (G,\mathscr{O}\mathpzc{r})(v)\leq\frac{1}{10}\right)\geq 1-2^{-\pi(n)}&\text{if } \mathscr{G}(G,\mathscr{O})(v)=0.\end{cases} \tag{8.C}\]
Next, we want to apply Theorem 6.2 in the version for GNNs with global readout and the logic \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}^{\mathrm{gc}}\). Let \(\boldsymbol{X}_{r}\) be an r-schema of type \(\mathtt{vn}\to\mathtt{r}\). Suppose that \((G,\mathscr{O})\in\mathscr{SG}_{p}^{\operatorname{bool}}\), and let \(\mathpzc{a}\) be an assignment over \(G\). We view \((G,\mathscr{O})\) as a \(p\)-labelled graph here, that is, as an \(\{E,P_{1},\ldots,P_{p}\}\)-structure. So \(((G,\mathscr{O}),\mathpzc{a})\) is the pair consisting of this structure together with the assignment \(\mathpzc{a}\). Let
\[\mathpzc{r}(v)\coloneqq\Big{(}\langle\!\langle\boldsymbol{X}_{r}\rangle\! \rangle^{((G,\mathscr{O}),\mathpzc{a})}\,(v,0),\ldots,\langle\!\langle \boldsymbol{X}_{r}\rangle\!\rangle^{((G,\mathscr{O}),\mathpzc{a})}\,(v,r^{(n) }-1)\Big{)}. \tag{8.D}\]
In the following we will write \((G,\mathscr{O},\mathpzc{r})\) instead of \(((G,\mathscr{O}),\mathpzc{a})\) if \(\mathpzc{r}\) is obtained from some assignment \(\mathpzc{a}\) via (8.D), always assuming that \(\langle\!\langle\boldsymbol{X}_{r}\rangle\!\rangle^{((G,\mathscr{O}),\mathpzc{a})}\,(v,k)=0\) for \(k\geq r^{(n)}\).
Then if \(\varphi(x)\) is a formula whose only free variables are among \(x\) and the relation and function variables appearing in \(\mathbf{X}_{r}\), the value \(\llbracket\varphi\rrbracket^{((G,\mathscr{O}),\mathpzc{a})}\) only depends on \((G,\mathscr{O})\), \(\mathpzc{r}\), and \(v\coloneqq\mathpzc{a}(x)\), and we may ignore the rest of \(\mathpzc{a}\). In particular, we may write \((G,\mathscr{O},\mathpzc{r})\vDash\varphi(v)\) instead of \(\llbracket\varphi\rrbracket^{((G,\mathscr{O}),\mathpzc{a})}=1\).
With this notation at hand, let us apply Theorem 6.2. Replacing \(W\) by the constant \(1\) and \(W^{\prime}\) by \(10\), we obtain an r-expression \(\mathbf{\rho}\) in \(\mathsf{GFO}+\mathsf{C}^{\mathrm{gc}}\) such that for all \((G,\mathscr{O})\in\mathscr{SG}_{p}\) of order \(n\) and \(\mathpzc{r}\in\mathscr{S}_{r^{(n)}}(G)\) defined as in (8.D), if \(\left\lVert\mathpzc{r}\right\rVert_{\infty}\leq 1\), then for all \(v\in V(G)\) we have
\[\left\lvert\widetilde{\mathscr{R}}(G,\mathscr{O}\mathpzc{r})(v)-\left\langle \!\langle\mathbf{\rho}\rangle\!\right\rangle^{(G,\mathscr{O},\mathpzc{r})}(v) \right\rvert\leq\frac{1}{10}.\]
Note that if \(\mathpzc{r}(v)_{i}\in U_{n}\) for \(0\leq i<r^{(n)}\), then we have \(\left\lVert\mathpzc{r}\right\rVert_{\infty}\leq 1\). Hence,
\[\begin{cases}\Pr_{\mathpzc{r}\sim\mathscr{U}_{n}^{r^{(n)}\times V(G)}}\big{(} \left\langle\!\langle\mathbf{\rho}\rangle\!\right\rangle^{(G,\mathscr{O},\mathpzc{ r})}(v)\geq\frac{8}{10}\big{)}\geq 1-2^{-\pi(n)}&\text{if }\mathscr{G}(G,\mathscr{O})(v)=1,\\ \Pr_{\mathpzc{r}\sim\mathscr{U}_{n}^{r^{(n)}\times V(G)}}\big{(}\left\langle \!\langle\mathbf{\rho}\rangle\!\right\rangle^{(G,\mathscr{O},\mathpzc{r})}(v)\leq \frac{2}{10}\big{)}\geq 1-2^{-\pi(n)}&\text{if }\mathscr{G}(G,\mathscr{O})(v)=0.\end{cases}\]
From \(\mathbf{\rho}\) we easily obtain a formula \(\varphi(x)\), just saying \(\mathbf{\rho}\geq\frac{1}{2}\), such that
\[\Pr_{\mathpzc{r}\sim\mathscr{U}_{n}^{r^{(n)}\times V(G)}}\Big{(}\llbracket \varphi\rrbracket^{(G,\mathscr{O},\mathpzc{r})}(v)=\mathscr{G}(G,\mathscr{O} )(v)\Big{)}\geq 1-2^{-\pi(n)}. \tag{8.E}\]
Our next step will be to simplify the representation of the random features in the signal \(\mathpzc{r}\in U_{n}^{r^{(n)}\times V(G)}\). Each number in \(U_{n}\) can be described by a subset of \(\{0,\ldots,cn-1\}\): the set \(S\) represents the number \(\sum_{s\in S}2^{-s-1}\). Thus we can represent \(\mathpzc{r}\in U_{n}^{r^{(n)}\times V(G)}\) by a relation \(R\subseteq V(G)\times\{0,\ldots,r^{(n)}-1\}\times\{0,\ldots,cn-1\}\), and we can transform \(\varphi(x)\) into a formula \(\varphi^{\prime}(x)\) that uses a relation variable \(X_{r}\) of type \(\{\texttt{vnn}\}\) instead of the r-schema \(\mathbf{X}_{r}\) such that
\[\Pr_{R}\Big{(}\left\llbracket\varphi^{\prime}\right\rrbracket^{(G,\mathscr{E},R)}(v)=\mathscr{G}(G,\mathscr{E})(v)\Big{)}\geq 1-2^{-\pi(n)}, \tag{8.F}\]
where the probability is over all \(R\) chosen uniformly at random from \(V(G)\times\{0,\ldots,r^{(n)}-1\}\times\{0,\ldots,cn-1\}\) and \((G,\mathscr{E},R)\) is \(\big{(}(G,\mathscr{E}),\mathpzc{a}\big{)}\) for some assignment \(\mathpzc{a}\) with \(\mathpzc{a}(X_{r})=R\).
The next step is to introduce a linear order and move towards an order invariant formula. We replace the relation variable \(X_{r}\) of type \(\{\texttt{vnn}\}\) by a relation variable \(Y_{r}\) of type \(\{\texttt{nnn}\}\), and we introduce a linear order \(\leqslant\) on \(V(G)\). Then we replace atomic subformulas \(X_{r}(x^{\prime},y,y^{\prime})\) of \(\varphi^{\prime}(x)\) by
\[\exists y^{\prime\prime}\leq\mathsf{ord}\big{(}\#x.x\leqslant x^{\prime}=\# y^{\prime\prime\prime}\leq\mathsf{ord}.y^{\prime\prime\prime}\leq y^{\prime \prime}\wedge Y_{r}(y^{\prime\prime},y,y^{\prime})\big{)}\]
The equation \(\#x.x\leqslant x^{\prime}=\#y^{\prime\prime\prime}\leq\mathsf{ord}.y^{\prime \prime\prime}\leq y^{\prime\prime}\) just says that \(x^{\prime}\) has the same position in the linear order \(\leqslant\) on \(V(G)\) as \(y^{\prime\prime}\) has in the natural linear order \(\leq\) on \(\{0,\ldots,n-1\}\). So basically, we store the random features for the \(i\)th vertex in the linear order \(\leqslant\) in the \(i\)th entry of \(Y_{r}\). We obtain a new formula \(\varphi^{\prime\prime}(x)\) satisfying
\[\Pr_{R}\Big{(}\left\llbracket\varphi^{\prime\prime}\right\rrbracket^{(G, \mathscr{E},\leqslant,R)}(v)=\mathscr{G}(G,\mathscr{E})(v)\Big{)}\geq 1-2^{-\pi(n)}, \tag{8.G}\]
where the probability is over all \(R\) chosen uniformly at random from \(\{0,\ldots,n-1\}\times\{0,\ldots,r^{(n)}-1\}\times\{0,\ldots,cn-1\}\). Importantly, this holds for all linear orders \(\leqslant\) on \(V(G)\). Thus in some sense, the formula is order invariant, because \(\mathscr{O}(G,\mathscr{O})(v)\) does not depend on the order. However, the set of \(R\subseteq\{0,\ldots,n-1\}\times\{0,\ldots,r^{(n)}-1\}\times\{0,\ldots,cn-1\}\) for which we have \(\llbracket\varphi^{\prime\prime}\rrbracket^{(G,\mathscr{O},\leqslant,R)}(v)= \mathscr{O}(G,\mathscr{O})(v)\) may depend on \(\leqslant\); (8.G) just says that this set contains all \(R\) except for an exponentially small fraction.
In the final step, we apply a standard construction to turn randomness into non-uniformity, which is known as the "Adleman trick". It will be convenient to let
\[\Omega\coloneqq 2^{\{0,\ldots,n-1\}\times\{0,\ldots,r^{(n)}-1\}\times\{0, \ldots,cn-1\}}.\]
This is the sample space from which we draw the relations \(R\) uniformly at random. By (8.G), for each triple \((G,\mathscr{O},\leqslant)\) consisting of a graph \(G\) of order \(n\), a signal \(\mathscr{O}\in\mathcal{S}_{p}^{\mathrm{bool}}(G)\), and a linear order \(\leqslant\) on \(V(G)\) there is a "bad" set \(B(G,\mathscr{O},\leqslant)\subseteq\Omega\) such that
\[\forall R\not\in B(G,\mathscr{O},\leqslant):\,\llbracket\varphi^{\prime\prime }\rrbracket^{(G,\mathscr{O},\leqslant,R)}(v)=\mathscr{O}(G,\mathscr{O})(v) \tag{8.H}\]
and
\[\frac{\big{|}B(G,\mathscr{O},\leqslant)\big{|}}{|\Omega|}\leq 2^{-\pi(n)}. \tag{8.I}\]
Observe that the number of triples \((G,\mathscr{O},\leqslant)\) is bounded from above by \(2^{n^{2}+pn+n\log n}\) and that \(n^{2}+pn+n\log n<\pi(n)\). Thus we have
\[\frac{\big{|}\bigcup_{(G,\mathscr{O},\leqslant)}B(G,\mathscr{O},\leqslant) \big{|}}{|\Omega|}\leq\sum_{(G,\mathscr{O},\leqslant)}\frac{\big{|}B(G, \mathscr{O},\leqslant)\big{|}}{|\Omega|}\leq 2^{n^{2}+pn+n\log n}\cdot 2^{-\pi(n)}<1.\]
This means that there is a \(R^{(n)}\in\Omega\setminus\bigcup_{(G,\mathscr{O},\leqslant)}B(G,\mathscr{O}, \leqslant)\), and by (8.H) we have
\[\llbracket\varphi^{\prime\prime}\rrbracket^{(G,\mathscr{O},\leqslant,R^{(n)}) }(v)=\mathscr{O}(G,\mathscr{O})(v) \tag{8.J}\]
for all graphs \(G\) of order \(n\), signals \(\mathscr{O}\in\mathcal{S}_{p}^{\mathrm{bool}}(G)\), and linear orders \(\leqslant\) on \(V(G)\). Let \(R^{*}\coloneqq\bigcup_{n\in\mathbb{N}}\{n\}\times R^{(n)}\subseteq\mathbb{N}^ {4}\). We use \(R^{*}\) as built-in numerical relation (in addition to the numerical relations already in \(\varphi^{\prime\prime}\)). In \(\varphi^{\prime\prime}(x)\) we replace atomic subformulas \(Y_{r}(y,y^{\prime},y^{\prime\prime})\) by \(R^{*}(\mathsf{ord},y,y^{\prime},y^{\prime\prime})\). Then it follows from (8.J) that the resulting formula is order-invariant and defines \(\mathscr{O}\).
Before we prove the converse of the previous lemma, we need another small technical lemma about FNNs.
**Lemma 8.3**.: _Let \(k,n\in\mathbb{N}_{>0}\). There is a rational piecewise-linear FNN \(\mathfrak{F}\) of input and output dimension \(1\) and size polynomial in \(n\) and \(k\) such that_
\[\Pr_{r\sim\mathcal{U}}\big{(}\mathfrak{F}(r)\in\{0,\ldots,n-1\}\big{)}\geq 1-2 ^{-k},\]
_and for all \(i\in\{0,\ldots,n-1\}\),_
\[\frac{1-2^{-k}}{n}\leq\Pr_{r\sim\mathcal{U}}\left(\mathfrak{F}(r)=i\right)\leq \frac{1}{n}.\]
_Furthermore, \(\mathfrak{F}\) only uses \(\operatorname{relu}\)-activations._
Proof.: For all \(a\in\mathbb{R}\) and \(\varepsilon>0\), let \(f_{\varepsilon,a}:\mathbb{R}\to\mathbb{R}\) be defined by \(f_{\varepsilon,a}(x)\coloneqq\operatorname{lsig}\left(\frac{1}{\varepsilon}x -\frac{a}{\varepsilon}\right)\). Then
\[\begin{cases}f_{\varepsilon,a}(x)=0&\text{if }x\leq a,\\ 0\leq f_{\varepsilon,a}(x)\leq 1&\text{if }a\leq x\leq a+\varepsilon,\\ f_{\varepsilon,a}(x)=1&\text{if }a+\varepsilon\leq x.\end{cases}\]
Furthermore, for \(a,b\in\mathbb{R}\) with \(a+2\varepsilon\leq b\) and \(g_{\varepsilon,a,b}\coloneqq f_{\varepsilon,a}-f_{\varepsilon,b-\varepsilon}\) we have
\[\begin{cases}g_{\varepsilon,a,b}(x)=0&\text{if }x\leq a,\\ 0\leq g_{\varepsilon,a,b}(x)\leq 1&\text{if }a\leq x\leq a+\varepsilon,\\ g_{\varepsilon,a,b}(x)=1&\text{if }a+\varepsilon\leq x\leq b-\varepsilon,\\ 0\leq g_{\varepsilon,a,b}(x)\leq 1&\text{if }b-\varepsilon\leq x\leq b,\\ g_{\varepsilon,a,b}(x)=0&\text{if }b\leq x.\end{cases}\]
Let \(\ell\coloneqq\lceil\log n\rceil\) and \(\varepsilon\coloneqq 2^{-k-\ell}\). For \(0\leq i\leq n\), let \(a_{i}\coloneqq\frac{i}{n}\), and let \(a_{i}^{-}\coloneqq\left\lfloor 2^{k+\ell+2}\frac{i}{n}\right\rfloor 2^{-k-\ell-2}\) and \(a_{i}^{+}\coloneqq\left\lceil 2^{k+\ell+2}\frac{i}{n}\right\rceil 2^{-k-\ell-2}\). Then \(a_{i}^{-}\leq a_{i}\leq a_{i}^{+}\) and \(a_{i}-a_{i}^{-}\leq\frac{\varepsilon}{4}\), \(a_{i}^{+}-a_{i}\leq\frac{\varepsilon}{4}\). Moreover, \(a_{i}^{-}-a_{i-1}^{+}\geq\frac{1}{n}-\frac{\varepsilon}{2}\geq\frac{\varepsilon}{2}\). Let \(a_{i}^{++}\coloneqq a_{i}^{+}+\frac{\varepsilon}{4}\) and \(a_{i}^{--}\coloneqq a_{i}^{-}-\frac{\varepsilon}{4}\). Then \(a_{i}^{--}\leq a_{i}^{-}\leq a_{i}\leq a_{i}^{+}\leq a_{i}^{++}\) and \(a_{i}-a_{i}^{--}\leq\frac{\varepsilon}{2}\), \(a_{i}^{++}-a_{i}\leq\frac{\varepsilon}{2}\).
For \(1\leq i\leq n\), let \(I_{i}\coloneqq[a_{i-1},a_{i}]\) and \(J_{i}\coloneqq[a_{i-1}^{++},a_{i}^{--}]\). Then \(J_{i}\subseteq I_{i}\). The length of \(I_{i}\) is \(\frac{1}{n}\), and the length of \(J_{i}\) is at least \(\frac{1}{n}-\varepsilon\geq\frac{1}{n}(1-2^{-k})\). Thus the probability that a randomly chosen \(r\in[0,1]\) ends up in one of the intervals \(J_{i}\) is at least \(1-2^{-k}\). Moreover, for every \(i\) we have
\[\frac{1-2^{-k}}{n}\leq\frac{1}{n}-\varepsilon\leq\Pr_{r\sim\mathcal{U}}(r\in J _{i})\leq\frac{1}{n}.\]
Let \(g_{i}\coloneqq g_{\frac{\varepsilon}{4},a_{i-1}^{+},a_{i}^{-}}\). Then \(g_{i}(r)=1\) for \(r\in J_{i}\), \(g_{i}(r)=0\) for \(r\not\in I_{i}\), and \(0\leq g_{i}(r)\leq 1\) for \(r\in I_{i}\setminus J_{i}\).
We use the first two layers of our FNN \(\mathfrak{F}\) to compute \(g_{i}\) of the input for all \(i\). That is, on the second level, \(\mathfrak{F}\) has \(n\) nodes \(v_{1},\ldots,v_{n}\), and \(f_{\mathfrak{F},v_{i}}(r)=g_{i}(r)\). As the intervals \(I_{i}\) are disjoint, at most one \(v_{i}\) computes a nonzero value, and with probability at least \(1-2^{-k}\), at least one of the nodes takes value \(1\). On the last level, for each \(i\) there is an edge of weight \(i-1\) from \(v_{i}\) to the output node. Then if \(r\in J_{i}\) the output is \(i-1\), and the assertion follows.
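As a quick numerical sanity check of this construction (our addition, not part of the proof), the ramps \(f_{\varepsilon,a}\) and plateaus \(g_{\varepsilon,a,b}\) can be implemented with a clipped linear function and verified to take the claimed values:

```
import numpy as np

def lsig(x):
    return np.clip(x, 0.0, 1.0)          # lsig(x) = relu(x) - relu(x - 1)

def f(eps, a, x):                        # ramp f_{eps,a}
    return lsig((x - a) / eps)

def g(eps, a, b, x):                     # plateau g_{eps,a,b} = f_{eps,a} - f_{eps,b-eps}
    return f(eps, a, x) - f(eps, b - eps, x)

eps, a, b = 0.01, 0.2, 0.5
xs = np.linspace(0.0, 1.0, 10001)
ys = g(eps, a, b, xs)
assert np.all(ys[(xs <= a) | (xs >= b)] == 0.0)        # 0 outside [a, b]
plateau = (xs >= a + 2 * eps) & (xs <= b - 2 * eps)    # margin avoids FP boundary noise
assert np.allclose(ys[plateau], 1.0)                   # 1 on the plateau
```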
**Lemma 8.4**.: _Let \(\mathbb{G}\) be a unary query over \(\mathcal{G}\mathcal{S}_{p}^{\mathrm{bool}}\) that is definable by an order-invariant \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}^{\mathrm{gc}}\)-formula. Then there is a polynomial-size bounded-depth family \(\mathcal{R}\) of rational piecewise-linear GNNs with \(\operatorname{ri}\) that computes \(\mathbb{G}\)._
_Furthermore, the GNNs in \(\mathcal{R}\) only use \(\operatorname{relu}\)-activations and \(\mathsf{SUM}\)-aggregation._
Proof.: Let \(\varphi(x)\) be an order-invariant \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}^{\mathrm{gc}}\)-formula that defines \(\mathscr{O}\). This means that for all \(p\)-labelled graphs \((G,\mathscr{O})\in\mathcal{GS}_{p}^{\mathrm{bool}}\), all linear orders \(\leqslant\) on \(V(G)\), and all \(v\in V(G)\), we have
\[(G,\mathscr{O},\leqslant)\vDash\varphi(v)\iff\mathscr{O}(G,\mathscr{O})(v)=1.\]
We want to exploit that if we choose the random features independently for all vertices, then with high probability they are all distinct and thus give us a linear order on the vertices.
However, in order to be able to apply Lemma 7.1, we need to carefully limit the randomness. Let \(U_{1},U_{2},U_{3}\) be function variables of type \(\mathtt{v}\to\mathtt{n}\). We let \(\varphi^{\prime}(x)\) be the formula obtained from \(\varphi\) by replacing each atomic subformula \(x\leqslant x^{\prime}\) by the formula
\[U_{1}(x)<U_{1}(x^{\prime})\vee\big{(}U_{1}(x)=U_{1}(x^{\prime}) \wedge U_{2}(x)<U_{2}(x^{\prime})\big{)}\] \[\vee\big{(}U_{1}(x)=U_{1}(x^{\prime})\wedge U_{2}(x)=U_{2}(x^{ \prime})\wedge U_{3}(x)\leq U_{3}(x^{\prime})\big{)}.\]
That is, we order the vertices lexicographically by their \(U_{i}\)-values. If no two vertices have identical \(U_{i}\)-values for \(i=1,2,3\), then this yields a linear order.
Let \((G,\mathscr{O})\in\mathcal{GS}_{p}^{\mathrm{bool}}\) of order \(n\coloneqq|G|\). For functions \(F_{1},F_{2},F_{3}:V(G)\to\{0,\ldots,n-1\}\) and \(v\in V(G)\), we write \((G,\mathscr{O},F_{1},F_{2},F_{3})\vDash\varphi^{\prime}(v)\) instead of \(((G,\mathscr{O}),\mathpzc{a})\vDash\varphi^{\prime}\) for some and hence every assignment \(\mathpzc{a}\) with \(\mathpzc{a}(U_{i})=F_{i}\) and \(\mathpzc{a}(x)=v\). Let us call \(F_{1},F_{2},F_{3}:V(G)\to\{0,\ldots,n-1\}\) _bad_ if there are distinct \(v,w\in V(G)\) such that \(F_{i}(v)=F_{i}(w)\) for \(i=1,2,3\), and call them _good_ otherwise. Observe that for randomly chosen \(F_{1},F_{2},F_{3}\), the probability that they are bad is at most \(\frac{1}{n}\).
By the construction of \(\varphi^{\prime}\) from \(\varphi\), if \(F_{1},F_{2},F_{3}\) are good then for all \(v\in V(G)\) we have
\[(G,\mathscr{O},F_{1},F_{2},F_{3})\vDash\varphi^{\prime}(v)\iff\mathscr{O}(G, \mathscr{O})(v)=1.\]
By Lemma 7.1 in its version for GNNs with global readout and the logic \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}^{\mathrm{gc}}\), there is a polynomial-size bounded-depth family \(\mathscr{N}=(\mathfrak{N}^{(n)})_{n\in\mathbb{N}}\) of rational piecewise-linear GNNs of input dimension \(p+3\) such that for all \((G,\mathscr{O})\in\mathcal{GS}_{p}\) of order \(n\) and all functions \(F_{1},F_{2},F_{3}:V(G)\to\{0,\ldots,n-1\}\) the following holds. Let \(\mathscr{U}\in\mathcal{S}_{3}(G)\) be the signal defined by \(\mathscr{U}(v)=\big{(}F_{1}(v),F_{2}(v),F_{3}(v)\big{)}\). Then for all \(v\in V(G)\) we have \(\widetilde{\mathscr{N}}(G,\mathscr{O}\mathscr{U})(v)\in\{0,1\}\) and
\[\widetilde{\mathscr{N}}(G,\mathscr{O}\mathscr{U})(v)=1\iff(G,\mathscr{O},F_{1}, F_{2},F_{3})\vDash\varphi^{\prime}(v).\]
Thus if \(F_{1},F_{2},F_{3}\) are good,
\[\widetilde{\mathscr{N}}(G,\mathscr{O}\mathscr{U})=\mathscr{O}(G,\mathscr{O}).\]
Thus all we need to do is use the random features to generate three random functions \(V(G)\to\{0,\ldots,n-1\}\). At first sight this seems easy, because the random features from \([0,1]\) contain "more randomness" than the functions. However, in fact it is not possible, essentially because we cannot map the interval \([0,1]\) to a discrete subset of the reals of more than one element by a continuous function. But it is good enough to do this approximately, and for this we can use Lemma 8.3. We use this lemma to create a GNN layer \(\mathfrak{L}^{(n)}\) that takes a random signal \(\mathpzc{r}\in\mathcal{S}_{3}(G)\) and computes a signal \(\mathpzc{u}\in\mathcal{S}_{3}(G)\) such that with high probability, \(\mathpzc{u}(v)_{i}\in\{0,\ldots,n-1\}\), and \(\mathpzc{u}(v)_{i}\) is almost uniformly distributed in \(\{0,\ldots,n-1\}\), for all \(i,v\). This is good enough to guarantee that with high probability the functions \(F_{1},F_{2},F_{3}\) defined by \(\mathpzc{u}\) are good. Thus if we combine \(\mathfrak{L}^{(n)}\) with \(\mathfrak{N}^{(n)}\) for all \(n\), we obtain a family of GNNs with ri that computes \(\mathcal{G}\).
_Remark 8.5_.: With a little additional technical work, we could also prove a version of the lemma where the GNNs only use lsig-activations. But it is not clear that this is worth the effort, because actually relu activations are more important (in practice) anyway. Recall that lsig can be simulated with relu, but not the other way around.
Finally, we can prove the following theorem, which implies Theorem 1.3 stated in the introduction.
**Theorem 8.6**.: _Let \(\mathcal{G}\) be a unary query on \(\mathcal{GS}_{p}^{\mathrm{bool}}\). Then the following are equivalent:_
1. \(\mathcal{G}\) _is definable in order-invariant_ \(\mathsf{GFO}+\mathsf{C}_{\mathrm{nu}}^{\mathrm{gc}}\)_._
2. _There is a polynomial-size bounded-depth family of rational piecewise-linear GNNs with ri, using only_ \(\mathrm{relu}\)_-activations and_ \(\mathsf{SUM}\)_-aggregation, that computes_ \(\mathcal{G}\)_._
3. _There is a rpl-approximable polynomial-weight bounded-depth family of GNNs with ri that computes_ \(\mathcal{G}\)_._
4. \(\mathcal{G}\) _is in_ \(\mathsf{TC}^{0}\)_._
Proof.: The implication \((1)\Longrightarrow(2)\) is Lemma 8.4. The implication \((2)\Longrightarrow(3)\) is trivial. The implication \((3)\Longrightarrow(1)\) is Lemma 8.2. And finally, the equivalence \((1)\Longleftrightarrow(4)\) is Corollary 3.31.
## 9 Conclusions
We characterise the expressiveness of graph neural networks in terms of logic and Boolean circuit complexity. While this forces us to develop substantial technical machinery with many tedious details, the final results, as stated in the introduction, are surprisingly simple and clean: GNNs correspond to the guarded fragment of first-order logic with counting, and with random initialisation they exactly characterise \(\mathsf{TC}^{0}\). One reason I find this surprising is that GNNs carry out real number computations, whereas the logics and circuits are discrete Boolean models of computation.
We make some advances on the logical side that may be of independent interest. This includes our treatment of rational arithmetic, non-uniformity and built-in relations on unordered structures, and unbounded aggregation (both sum and max). The latter may also shed new light on the relation between first-order logic with counting and the recently introduced weight aggregation logics [6] that deserves further study.
Our results are also interesting from a (theoretical) machine-learning-on-graphs perspective. Most importantly, we are the first to show limitations of GNNs with random initialisation. Previously, it was only known that exponentially large GNNs with ri can
approximate all functions on graphs of order \(n\)[2], but no upper bounds were known for the more realistic model with a polynomial-size restriction. Another interesting consequence of our results (Theorems 7.4 and 8.6) is that arbitrary GNNs can be simulated by GNNs using only \(\mathsf{SUM}\)-aggregation and relu-activations with only a polynomial overhead in size. This was partially known [32] for GNNs distinguishing two graphs, but even for this weaker result the proof of [32] requires exponentially large GNNs because it is based on the universal approximation theorem for feedforward neural networks [32]. It has recently been shown that such a simulation of arbitrary GNNs by GNNs with \(\mathsf{SUM}\)-aggregation is not possible in a uniform setting [26].
We leave it open to extend our results to higher-order GNNs [24] and the bounded-variable fragments of \(\mathsf{FO}+\mathsf{C}\). It might also be possible to extend the results to larger classes of activation functions, for example, those approximable by piecewise-polynomial functions (which are no longer Lipschitz continuous). Another question that remains open is whether the "Uniform Theorem", Theorem 5.1 or Corollary 5.3, has a converse in terms of some form of uniform families of GNNs. It is not even clear, however, what a suitable uniformity notion for families of GNNs would be.
The most interesting extensions of our results, however, would be to GNNs of unbounded depth. Does the correspondence between circuits and GNNs still hold if we drop or relax the depth restriction? And, thinking about uniform models, is there a descriptive complexity theoretic characterisation of recurrent GNNs?
|
2304.01914 | Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI
Feedback | The recent advances in machine learning and deep neural networks have made
them attractive candidates for wireless communications functions such as
channel estimation, decoding, and downlink channel state information (CSI)
compression. However, most of these neural networks are large and inefficient
making it a barrier for deployment in practical wireless systems that require
low-latency and low memory footprints for individual network functions. To
mitigate these limitations, we propose accelerated and compressed efficient
neural networks for massive MIMO CSI feedback. Specifically, we have thoroughly
investigated the adoption of network pruning, post-training dynamic range
quantization, and weight clustering to optimize CSI feedback compression for
massive MIMO systems. Furthermore, we have deployed the proposed model
compression techniques on commodity hardware and demonstrated that in order to
achieve inference gains, specialized libraries that accelerate computations for
sparse neural networks are required. Our findings indicate that there is
remarkable value in applying these model compression techniques and the
proposed joint pruning and quantization approach reduced model size by 86.5%
and inference time by 76.2% with minimal impact to model accuracy. These
compression methods are crucial to pave the way for practical adoption and
deployments of deep learning-based techniques in commercial wireless systems. | Omar Erak, Hatem Abou-Zeid | 2023-01-20T12:44:04Z | http://arxiv.org/abs/2304.01914v1 | # Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI Feedback
###### Abstract
The recent advances in machine learning and deep neural networks have made them attractive candidates for wireless communications functions such as channel estimation, decoding, and downlink channel state information (CSI) compression. However, most of these neural networks are large and inefficient making it a barrier for deployment in practical wireless systems that require low-latency and low memory footprints for individual network functions. To mitigate these limitations, we propose accelerated and compressed efficient neural networks for massive MIMO CSI feedback. Specifically, we have thoroughly investigated the adoption of network pruning, post-training dynamic range quantization, and weight clustering to optimize CSI feedback compression for massive MIMO systems. Furthermore, we have deployed the proposed model compression techniques on commodity hardware and demonstrated that in order to achieve inference gains, specialized libraries that accelerate computations for sparse neural networks are required. Our findings indicate that there is remarkable value in applying these model compression techniques and the proposed joint pruning and quantization approach reduced model size by 86.5% and inference time by 76.2% with minimal impact to model accuracy. These compression methods are crucial to pave the way for practical adoption and deployments of deep learning-based techniques in commercial wireless systems.
Deep learning, CSI feedback, accelerated neural networks, model compression, massive MIMO.
## I Introduction
In recent years, machine learning (ML) has proven to be a fast, reliable, and robust alternative for solving complex problems in different fields, such as computer vision [1], autonomous driving [2] and medical diagnosis [3]. As a result, there is a growing interest to explore the potential applications of machine learning in wireless communications to improve performance and reliability. For example, [4] have demonstrated promising performance for different coding schemes such as low-density parity-check (LDPC) and BCH codes using a graph neural network (GNN)-based architecture for channel decoding. The work in [5] has shown that it is possible to use a neural network (NN) for the synchronization of narrowband physical random-access channel (NPRACH) for uplink narrowband internet of things (NB-IoT). Machine learning has also been used for downlink applications. CsiNet was developed using deep learning and has proven to be a successful channel state information (CSI) sensing and recovery mechanism that outperformed existing compressive sensing (CS)-based methods [6].
The performance and reliability improvements that have been achieved as a result of applying machine learning techniques in wireless communications continues to drive further research forward. However, these improvements come with a hefty cost that is often neglected. These machine learning solutions are usually computationally demanding and they require a large amount of resources, including memory, CPU, and energy to train and deploy the models, and this has significantly hindered their practical deployment.
Recently, there has been a growing focus on developing tiny and efficient machine learning models that are small enough to run on embedded systems with limited resources. By applying pruning, trained quantization and Huffman coding, the authors of [7] were able to compress AlexNet, LeNet and VGG-16 by 35\(\times\), 39\(\times\) and 49\(\times\) respectively with no loss in accuracy. More recently, a convolutional neural network (CNN) architecture called SqueezeNet was developed and it was able to achieve the same accuracy AlexNet achieved on ImageNet whilst being 510\(\times\) smaller in size [8]. These examples showcase the tremendous success of developing tiny, efficient neural networks in computer vision. However, there has been limited work accelerating and compressing deep neural networks in wireless communications.
Our work aims to demonstrate the potential and importance of accelerating and compressing deep learning based wireless communications solutions. Developing efficient models will pave the way for practical adoption and deployments in commercial systems. The main contributions of this paper are:
* An evaluation of the performance of network pruning, post-training dynamic range quantization, and weight clustering applied individually to CSI feedback compression - as well as the results of combining multiple techniques together.
* A thorough investigation of the degree of neural network pruning and different quantization levels has been conducted, and new insights on the resulting performances and trade-offs are drawn.
* We have deployed the proposed model compression techniques on a Raspberry Pi 4 and demonstrated that to achieve inference gains, specialized libraries that accelerate computations for sparse neural networks are required. To the best of our knowledge, this is the first study to deploy CsiNet compressed models on commodity hardware, use acceleration libraries, and measure inference times.
* The proposed compression method that combines network pruning and quantization achieved an average of 86.5% reduction in model size and reduced inference time by 76.2% compared to the original CsiNet models with minimal impact to model accuracy.
The results of this paper highlight the significant gains model compression techniques can have for wireless communications deep learning models. All the code to reproduce our results and conduct further research to other CSI compression networks, and other wireless communications models will be made available upon acceptance.
The rest of this paper is organized as follows. In Section II we provide a summary of related research. Following that, Section III presents the system model of CsiNet [6] and our proposed model compression techniques for CsiNet. In Section IV we describe our experimental setup and evaluation metrics, and Section V discusses the results of our findings. Finally, we discuss conclusions and future directions in Section VI.
## II Related Work
A limited number of studies have investigated and researched the development of the low-complexity deep learning models in the context of ML for wireless communications. Recently, [9] developed a tiny ML approach for channel estimation and signal detection. Their approach splits each large layer in a deep neural network into three smaller sub-layers, and they were able to achieve 4.5\(\times\) reduction in model storage. In [10], the authors have explored the applications of some model compression techniques and demonstrated their potential for signal detection in MIMO communications and CSI feedback compression. The authors of [11] and [12] developed lightweight neural networks for CSI feedback in massive MIMO. In [11], a new lightweight network called ShuffleCsiNet that achieved around 5\(\times\) reduction in parameter number compared to ConvCsiNet is proposed, and in [12] the authors reduce the complexity of existing networks by 25.50%-43.46% using knowledge distillation, with minimal impacts to accuracy.
Despite the promising results achieved, a comprehensive understanding of neural network pruning, quantization, and weight clustering on deep learning compression models for massive MIMO CSI feedback has not been conducted. To this end, in this paper we have applied these standard model compression techniques and analyzed both their individual and combined performances to a deep learning network for CSI feedback compression. We have measured the key quantitative metrics of model storage, prediction accuracy, and inference time. Different from prior work, we measure real-world _inference time_ on a Raspberry Pi 4 after deploying the compressed deep learning models on the hardware. We then show that specialized libraries that can accelerate the computations for sparse neural networks are required to realize the inference gains that are cited in literature as theoretical reductions in computational complexity. Our results indicate that without such accelerators there may be limited real-world gains in inference time.
## III Accelerated and Compressed Deep Learning Models for CSI Feedback
### _System Model of CsiNet_
CsiNet is a CNN that is used as a channel state information (CSI) sensing and recovery mechanism. This model uses an encoder (sensing) and decoder (recovery) to compress CSI at the user equipment (UE) and reconstruct it at the base station (BS). CsiNet outperformed traditional compressive sensing algorithms at all compression ratios [6].
When developing CsiNet, the authors of [6] considered a massive single-cell downlink MIMO system with a single receiver antenna at the UE. The number of transmitting antennas at the BS is \(N_{t}\gg 1\). This system operates in OFDM over \(\hat{N_{c}}\) subcarriers.
Let \(\hat{\textbf{H}}\) be the CSI matrix in the spatial frequency domain acquired at the UE. To reduce the significant feedback overhead, [6] proposed converting \(\hat{\textbf{H}}\) into an angular delay domain matrix **H** using a 2D discrete Fourier transform (DFT) as follows:
\[\textbf{H}=\textbf{F}_{d}\hat{\textbf{H}}\textbf{F}_{a}^{H} \tag{1}\]
where \(\textbf{F}_{d}\) is a \(\hat{N_{c}}\)\(\times\)\(\hat{N_{c}}\) DFT matrix and \(\textbf{F}_{a}\) is a \(N_{t}\)\(\times\)\(N_{t}\) DFT matrix. The total number of feedback parameters is given by \(N\) = \(N_{t}\)\(\times\)\(\hat{N_{c}}\).
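To illustrate Eq. (1), here is a minimal NumPy sketch (our addition; the dimensions and the unitary normalisation of the DFT matrices are illustrative assumptions, not taken from [6]):

```
import numpy as np

def dft_matrix(n):
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)  # unitary DFT matrix

Nc, Nt = 256, 32                       # subcarriers, BS antennas (example values)
F_d, F_a = dft_matrix(Nc), dft_matrix(Nt)
H_hat = (np.random.randn(Nc, Nt) + 1j * np.random.randn(Nc, Nt)) / np.sqrt(2)
H = F_d @ H_hat @ F_a.conj().T         # angular-delay domain matrix, Eq. (1)
assert H.shape == (Nc, Nt)             # N = Nt * Nc feedback parameters
```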
CsiNet implements an encoder that takes the angular delay domain matrix **H** as input, applies the encoding function **f**, and returns an \(M\) dimensional encoded vector **s**, where \(M\)\(<\)\(N\), as shown below:
\[\textbf{s}=\textbf{f(H)} \tag{2}\]
The compression ratio \(\gamma\) for the encoded vector is given by \(M/N\). In this paper, we will evaluate the performance of CsiNet, and the proposed efficient and compressed CsiNet networks for various compression ratios. CsiNet also implements a decoder at the BS that takes **s** as the input and ideally returns the original angular delay domain matrix **H** as follows:
\[\textbf{H}=\textbf{f}^{-1}\textbf{(s)} \tag{3}\]
Let \(\tilde{\textbf{H}}\) be the recovered channel from CsiNet. To evaluate the performance of the CSI reconstruction the normalized mean square error (NMSE) is used as follows:
\[\textbf{NMSE}=\mathbb{E}\left\{\frac{||\tilde{\textbf{H}}-\textbf{H}||_{2}^{ 2}}{||\textbf{H}||_{2}^{2}}\right\} \tag{4}\]
Another metric used by [6] to evaluate the performance of CsiNet is the cosine similarity \(\rho\) to measure the quality of the beamforming vector \(\textbf{v}_{n}\). \(\rho\) and \(\textbf{v}_{n}\) are given as follows:
\[\textbf{v}_{n}=\frac{\textbf{h}_{n}}{||\textbf{h}_{n}||_{2}} \tag{5}\]
\[\rho=\mathbb{E}\left\{\frac{1}{\hat{N_{c}}}\sum_{n=1}^{\hat{N_{c}}}\frac{| \textbf{h}_{n}^{H}\tilde{\textbf{h}}_{n}|}{||\textbf{h}_{n}||_{2}||\tilde{\textbf{h}}_{ n}||_{2}}\right\} \tag{6}\]
where \(\tilde{\textbf{h}}_{n}\) denotes the reconstructed channel vector and \(\textbf{h}_{n}\) the original channel vector of the \(n\)th subcarrier.
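The two metrics can be computed directly from Eqs. (4) and (6); the following sketch (ours, with an assumed batch-first array layout of shape `(batch, Nc, Nt)`, not the authors' released code) shows one way to do so:

```
import numpy as np

def nmse(H, H_rec):
    """Eq. (4): normalised MSE, averaged over a batch of channel matrices."""
    err = np.sum(np.abs(H_rec - H) ** 2, axis=(1, 2))
    return np.mean(err / np.sum(np.abs(H) ** 2, axis=(1, 2)))

def cosine_similarity(H, H_rec):
    """Eq. (6): mean cosine similarity over subcarriers (rows) and batch."""
    num = np.abs(np.sum(np.conj(H) * H_rec, axis=2))           # |h_n^H h~_n|
    den = np.linalg.norm(H, axis=2) * np.linalg.norm(H_rec, axis=2)
    return np.mean(num / den)
```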
The training and testing samples used in this paper are equivalent to the ones used by [6] to ensure a fair comparison is made. The channel matrices for two environments are created using the COST 2100 model [13]. The first environment is the indoor picocellular at the 5.3GHz band and the second environment is the outdoor rural environment at the 300 MHz band. All parameters follow their default setting in [13].
### _Compression Techniques_
In this work, we apply three common model compression techniques individually and then we combine post-training quantization with pruning and with weight clustering. Figure 1 summarizes the workflow used to apply the different techniques and evaluate their overall performance.
#### III-B1 **Magnitude-Based Weight Pruning**
Pruning is a model compression technique used to remove parameters from a large, dense network, yielding a sparse subnetwork that is smaller in size and more computationally efficient compared to the original network. There are different pruning methods and workflows, in this paper we apply magnitude-based weight pruning which works by ranking connections in a network according to the absolute values of their weights, i.e how much they contribute to the overall output. A target sparsity ratio is selected and low-weight connections that have minimal impact are set to 0 and then removed to achieve the targeted sparsity ratio. The resultant sparse model is then retrained so new weights can be determined in order to maintain similar accuracy [14][15].
Firstly, we restructure the original model to comply with the constraints of XNNPack sparse inference for CNN models [16][17]. XNNPack is a highly optimized library of neural network inference operators that allows us to improve the inference performance for the sparse pruned model. Common neural network operators that are used in the original CsiNet model and are supported by XNNPack include: 2D convolution, leaky ReLU and sigmoid [6][17]. Then, we apply magnitude-based weight pruning to the restructured CsiNet model; we apply pruning to the dense layers and we avoid pruning the first and final layers so that the model's accuracy is not significantly impacted. We also use the PruneForLatencyOnXNNPack pruning policy which ensures that only parts of the model that are compliant with XNNPack sparse inference are pruned [14][16][17]. Finally, we fine-tune the pruned model by retraining it. Algorithm 1 presents the pseudo-code for magnitude-based weight pruning of dense layers.
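Alongside Algorithm 1, the following is a hedged sketch of this step using the TensorFlow Model Optimization Toolkit; `model` and `x_train` are assumed to stand for the trained CsiNet and its training data, the 50% target matches Section V, and the paper additionally passes the PruneForLatencyOnXNNPack pruning policy when pruning the whole model:

```
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def apply_pruning_to_dense(layer):
    # Prune only Dense layers; skipping first/final layers would be an
    # additional name-based check here.
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.sparsity.keras.prune_low_magnitude(
            layer,
            pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(
                target_sparsity=0.5, begin_step=0))
    return layer

pruned_model = tf.keras.models.clone_model(
    model, clone_function=apply_pruning_to_dense)   # `model` = trained CsiNet
pruned_model.compile(optimizer="adam", loss="mse")
pruned_model.fit(x_train, x_train, epochs=10,       # fine-tuning pass
                 callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```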
#### III-B2 **Post-Training Quantization**
This is a model compression technique that works by storing weights and activations with a lower precision, for example, weights can be stored as 16 bit floating point values or as 8 bit integers instead of the full precision of 32 bit floating point values. This would allow for around \(2\times\) to \(4\times\) reduction in model size. Furthermore, hardware support for 8 bit integer computations is typically faster compared to 32 bit floating point values computations, which helps improve inference time.
We apply quantization to a trained model and we evaluate the quantized model without retraining. We apply dynamic range quantization, which statically quantizes the weights from floating point to 8-bit integers, quantizes activations dynamically at inference time, and performs computations with 8-bit weights and activations where possible [14]. The above method is unchanged when we combine post-training quantization with other techniques.
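This step maps onto the standard TFLite converter API; a minimal sketch (ours), assuming `final_model` is the trained (or pruned and stripped) Keras model:

```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # dynamic range quantization
tflite_quant_model = converter.convert()
with open("csinet_dr_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
```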
#### III-B3 **Weight Clustering**
This is a model compression technique that works by grouping \(n\) weights \(W=\{w_{1},w_{2},\ldots,w_{n}\}\) of each layer in a network into \(k\) clusters \(C=\{c_{1},c_{2},\ldots,c_{k}\}\) where \(n\gg k\). The clusters' centroids are then initialized using a centroid initialization method such as linear, density, random and kmeans++ initialization. The clusters' centroid value is then shared for all the weights belonging to the cluster. The model is then fine-tuned by retraining for fewer epochs in order to maintain a high accuracy [7][14].
We apply weight clustering to a trained model. We only apply weight clustering to the dense layers, and we avoid clustering the first and final layers so that the model's accuracy is not significantly impacted. Finally, we fine-tune the weight clustered model by retraining it.

Fig. 1: Process of Applying Different Model Compression Techniques
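A hedged TFMOT sketch of this step (ours); the 32 clusters and kmeans++ initialization follow the setup described in Section V, while `model` and `x_train` are assumptions carried over from the pruning sketch:

```
import tensorflow as tf
import tensorflow_model_optimization as tfmot

clustering_params = {
    "number_of_clusters": 32,
    "cluster_centroids_init":
        tfmot.clustering.keras.CentroidInitialization.KMEANS_PLUS_PLUS,
}

def apply_clustering_to_dense(layer):
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.clustering.keras.cluster_weights(layer, **clustering_params)
    return layer

clustered_model = tf.keras.models.clone_model(
    model, clone_function=apply_clustering_to_dense)
clustered_model.compile(optimizer="adam", loss="mse")
clustered_model.fit(x_train, x_train, epochs=5)        # short fine-tuning pass
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
```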
#### III-B4 **Combining Techniques**
In this paper we combine dynamic range quantization with pruning, and dynamic range quantization with weight clustering. Figure 1 summarizes the workflow used to combine these compression techniques. For pruning and weight clustering, the models need to be retrained so that they are fine-tuned, which helps maintain a high accuracy. Therefore we apply quantization in the final step once the models have been trained, so that the new retrained weights are quantized. Algorithm 1 presents the pseudo-code for combining pruning with quantization.
```
Input:  model — CsiNet model
        ratio — sparsity ratio
        level — quantization level
Output: final_model — pruned, quantized model

Function PruneQuantize(model, ratio, level):
    model ⇐ SparseModelOptimization(model)
    foreach layer ∈ model.layers do
        if layer == Dense then
            T ⇐ TotalNumberOfWeights(layer.weights)
            S ⇐ Sort(|layer.weights|)           // rank weights by magnitude
            min ⇐ S[integer(T * ratio)]
            foreach weight ∈ layer.weights do
                if |weight| < min then
                    weight ⇐ 0                  // remove low-magnitude connection
    model ⇐ FineTuneTraining(model)
    final_model ⇐ Quantization(model, level)
    return final_model
```
**Algorithm 1** CsiNet Magnitude-Based Weight Pruning and Quantization
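For completeness, a sketch of Algorithm 1 expressed with the same assumed helpers as the pruning and quantization sketches above (`apply_pruning_to_dense`, `model`, `x_train` are ours, not released code):

```
import tensorflow as tf
import tensorflow_model_optimization as tfmot

pruned = tf.keras.models.clone_model(model, clone_function=apply_pruning_to_dense)
pruned.compile(optimizer="adam", loss="mse")
pruned.fit(x_train, x_train, epochs=10,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
stripped = tfmot.sparsity.keras.strip_pruning(pruned)   # remove pruning wrappers

converter = tf.lite.TFLiteConverter.from_keras_model(stripped)
converter.optimizations = [tf.lite.Optimize.DEFAULT]    # dynamic range quantization
with open("csinet_pruned_quant.tflite", "wb") as f:
    f.write(converter.convert())
```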
## IV Experimental Setup
### _Hardware and Software Platforms_
The results in this paper were obtained using the following hardware and software platforms:
**Hardware** The system used to conduct the experiments is a Raspberry Pi 4. It has a 64-bit quad-core ARM Cortex-A72 (BCM2711) CPU running at 1.5GHz. This system has 4GB of LPDDR4-3200 SDRAM.
**Software** The system used to conduct the experiments runs the Raspberry Pi OS, Kernel version: 5.15 Debian version: 11 (bullseye). We use Python (v3.9.2) and TensorFlow (2.10.0). We use the TensorFlow Lite interpreter to perform inference, and benchmark using the TensorFlow Lite benchmark tool.
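As a lightweight stand-in for the official benchmark tool, inference time can also be estimated with the TFLite Python interpreter; a hedged sketch (ours; the model path, thread count, and repetition count are illustrative):

```
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="csinet_pruned_quant.tflite",
                                  num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
x = np.random.rand(*inp["shape"]).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()                                   # warm-up run
start = time.perf_counter()
for _ in range(100):
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
elapsed_us = (time.perf_counter() - start) / 100 * 1e6
print(f"mean inference time: {elapsed_us:.1f} us")
```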
### _Evaluation Metrics_
To evaluate the effectiveness of pruning, post-training quantization and weight clustering on the CsiNet models we use the following metrics:
**Model Size (lower is better)** This is the model binary size once it is saved on disk. This metric is measured in megabytes.
**Inference Time (lower is better)** This is the time it takes the model to process the input and produce a valid output. This metric is measured in microseconds.
**Normalized Mean Square Error (lower is better)** This is the metric described by Equation 4 to evaluate the performance of the model. It is the normalized difference between the recovered channel and the original channel.
**Cosine Similarity (higher is better)** This is the metric described by Equation 6 and is also used to evaluate the performance of the model. This metric measures the quality of the beamforming vector and will be referred to as \(\rho\).
## V Numerical Results and Analysis
In this section we first evaluate and explain the results observed from applying pruning, post-training quantization and weight clustering independently to CsiNet. These results are summarized in Table I. We then present the results obtained from combining different model compression techniques together. These results are summarized in Table II. Next, we investigate the effects of different model sparsity percentages and quantization levels - both independently and jointly. The results presented in Figure 2 provide insights on the trade-offs between accuracy, inference time, and model size - and indicate the sparsity and quantization levels that provide an excellent operating point for CsiNet. Finally, we demonstrate the need to leverage and implement sparse inference acceleration for hardware implementations of pruned neural networks. These results are presented in Figure 3 and show the gains in inference time with the XNNPack sparse inference accelerator [16][17].
### _Pruning_
The pruned models used to collect the results in Table I and II have 50% sparsity in the dense layers. This value was experimentally chosen to give a good trade-off between model size, inference time and NMSE (results shown in Sec V-F).
#### V-A1 **Model Size**
Pruning successfully reduced the size of CsiNet for the four different compression ratios \(\gamma\) used in [6]. An average size reduction of 43.9% was achieved for the four different \(\gamma\) values.
#### V-A2 **Inference**
Pruning achieved the lowest latency compared to the original, quantized and weight clustered models for all values of \(\gamma\). On average, pruning provided a 66.8% decrease in inference time when compared to the original CsiNet model. This is because zero-valued weights are skipped during inference when XNNPack acceleration is enabled. These results demonstrate the effectiveness of pruning in enabling low latency in wireless communications.
#### V-A3 **NMSE and \(\rho\)**
The performance of the pruned CsiNet models was comparable to the original models in most cases. In an indoor environment, \(\rho\) for the original and pruned models was identical for all values of \(\gamma\). For \(\gamma\) = 1/4, 1/16 and 1/32 in an indoor environment the pruned model outperformed the original model in terms of NMSE by a small margin. In an outdoor environment the original model outperformed the pruned model in \(\rho\) and NMSE. This was most significant for \(\gamma\) = 1/16, where the NMSE of the pruned model was on average 15.3% larger than the original model; this suggests that for compressed CsiNets in an outdoor environment a lower sparsity level is needed to maintain on-par accuracy.
### _Post-Training Quantization_
Dynamic range quantization was used to collect the results in Table I and Table II. This level of quantization was experimentally chosen to give the best trade-off between model size, inference time and accuracy (results shown in Sec V-F).
#### V-B1 **Model Size**
Post-training dynamic range quantization was very effective in reducing the size of CsiNet for all compression ratios \(\gamma\) - reducing the size of CsiNet by 73.1% on average. This suggests that post-training quantization is an effective and quick method to apply to wireless deep neural networks as the network size is dominated by its weights and activations which can be significantly reduced by quantization with minimal impact to performance.
#### V-B2 **Inference**
Applying post-training quantization reduced inference time compared to the original CsiNet model for all values of \(\gamma\). The inference time for the quantized model was on average 10.4% less than the original model. This acceleration is because computations performed with 8-bit weights and activations are less computationally intensive compared to 32 bit floating point computations. Post-training dynamic range quantization was most effective for \(\gamma\) = 1/4, resulting in a 22.6% decrease in inference time. This is because for larger models there are more weights to quantize, which leads to more gains.
#### V-B3 **NMSE and \(\rho\)**
Applying post-training dynamic range quantization to the original CsiNet models had very little impact on the values of NMSE and \(\rho\). For all values of \(\gamma\) in an indoor and outdoor environment \(\rho\) was equivalent between the original CsiNet models and the quantized models. In an indoor environment for \(\gamma\) = 1/32, 1/64 the quantized model outperformed the original model in terms of NMSE by a very small margin and for \(\gamma\) = 1/4 the original model outperformed the quantized model by a negligible amount. These results suggest that post-training dynamic range quantization has a very minimal effect on model performance, and in essence the model size and latency gains can be "free".
### _Weight Clustering_
The weight clustered models used to collect the results in Table I and Table II use kmeans++ centroid initialization and 32 clusters per network layer. This value was experimentally chosen to give a good trade-off between model size, inference time and accuracy.
#### V-C1 **Model Size**
Weight clustering had the best performance in terms of model size reduction compared to pruning and quantization. Weight clustering reduced the size of CsiNet by 79.2% on average for the four different values of \(\gamma\). This suggests that weight clustering is an effective method to apply to wireless models to achieve significant reduction in model size. Weight clustering works by grouping weights into N clusters. Therefore by choosing a suitable value of N for any wireless model, significant model size reductions are expected.
#### V-C2 **Inference**
Weight clustering did not provide inference time gains. As shown in Table I, the weight clustered models had a comparable inference time to that of the base models. This is expected since weight clustering reduces the number
of unique weights but does not affect the total number of inference operations.
#### V-C3 **NMSE and \(\rho\)**
Weight clustering performed very well in the outdoor environment but had a worse performance in the indoor environment compared to pruning and quantization. For the indoor environment, \(\rho\) was on average 2.71% smaller compared to the original models for all values of \(\gamma\). The NMSE for the weight clustered models was on average 17.5% larger compared to the original CsiNet models. The weight clustered model performed much better in an outdoor environment and provided the best NMSE out of all methods for \(\gamma\) = 1/4. This suggests that extra care is needed when applying weight clustering to different models and environments and a generalized approach is not as reliable as with pruning and quantization.
### _Pruning and Post-Training Quantization_
#### V-D1 **Model Size**
Combining pruning and post-training quantization achieved the smallest model size compared to all other results presented in this work. Pruning and post-training quantization reduced the size of CsiNet by 86.5% on average for the four different values of \(\gamma\). This result is higher than results achieved by pruning and quantization individually, which demonstrates that combining pruning and quantization is an effective way to significantly reduce the model size.
#### V-D2 **Inference**
Combining pruning and post-training dynamic range quantization resulted in the lowest latency out of all the models presented in Table I and II. The inference time for the pruned dynamic range quantized models were on average 76.2% shorter than the original CsiNet models. These results demonstrate the effectiveness of pruning and post training dynamic range quantization in enabling low-latency in wireless communications.
#### V-D3 **NMSE and \(\rho\)**
The results for NMSE and the cosine similarity \(\rho\) when combining pruning and post-training dynamic range quantization were comparable to the results obtained by the original model for an indoor environment. The model underperformed slightly in an outdoor environment, which is expected because the original pruned model before quantization also underperformed in an outdoor environment.
### _Weight Clustering and Post-Training Quantization_
The weight clustered models used to collect the results in Table I and Table II use 32 clusters and kmeans++ centroid initialization. These clustering parameters were experimentally chosen to give high accuracy and low model size.
#### V-E1 **Model Size**
Combining weight clustering and post-training dynamic range quantization resulted in a large reduction in model size compared to the original CsiNet models. Weight clustering and quantization reduced the size of CsiNet by 85.0% on average for the four different values of \(\gamma\). This reduction is higher than the results achieved by weight clustering and quantization individually, because once the weights are grouped into N clusters they are represented as 8-bit integers instead of 32-bit floating point values. However, because there are fewer unique weight values, there are fewer weights to quantize, and we only see a small improvement when combining the two techniques.
#### V-E2 **Inference**
The combination of weight clustering and post-training dynamic range quantization resulted in lower inference times compared to the inference times of the original CsiNet models. This is mainly due to inference improvements that result from post-training dynamic range quantization. The inference time for the weight clustered and quantized models were on average 10.5% less than the original CsiNet models.
#### V-E3 **NMSE and \(\rho\)**
The results for NMSE and \(\rho\) when combining weight clustering and post-training dynamic range quantization were equivalent to the results achieved by weight clustering individually. As we saw earlier, post-training dynamic quantization had minimal effects on the NMSE and \(\rho\) and this is confirmed again here as the results obtained from combining both compression techniques were identical to the results obtained by the weight clustered models.
### _Varying Pruning and Quantization Parameters_
#### V-F1 **Model Size**
Figure 2(a) highlights the impact of model sparsity % and quantization level on model size. It is evident that increasing model sparsity leads to a large decrease in model size regardless of the quantization level. Dynamic range quantization provides the smallest model size, followed by floating point 16 quantization and then the original model in floating point 32. At a high model sparsity %, the effects of quantization become less noticeable. This is expected because there are fewer weights to quantize at a higher model sparsity.
Fig. 2: Results of varying sparsity and quantization levels for \(\gamma=1/4\) in an indoor environment: (a) model size, (b) inference time, (c) NMSE.
#### V-F2 **Inference**
Figure 2(b) highlights the impact of model sparsity % and quantization level on inference time. The higher the model sparsity %, the lower the inference time. Inference time for floating point 16 quantization was very similar to the original floating point 32 model. Dynamic range quantization provided the best inference improvements compared to all other levels of quantization.
#### V-F3 **NMSE**
Figure 2(c) highlights the impact of model sparsity % and quantization level on the NMSE. The quantization level has minimal effect on the NMSE, and the model sparsity % is the dominant factor. Therefore, quantization to 8 bits should always be conducted to reap its model size and acceleration gains. At 70% sparsity we see a degradation in NMSE, and at 90% sparsity we see the largest degradation. These results indicate that the configuration of 50% sparsity and 8-bit quantization provides the largest complexity reduction gains without impacting accuracy.
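A sweep like the one in Fig. 2 can be scripted by toggling the TFLite converter options; the helper below covers the three quantization levels discussed here, with the model itself left as a placeholder.

```python
import tensorflow as tf

def convert(model, level="fp32"):
    """Convert a Keras model to TFLite at one of the quantization levels in Fig. 2."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    if level == "dynamic":            # 8-bit dynamic range quantization
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
    elif level == "fp16":             # float16 weight quantization
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.target_spec.supported_types = [tf.float16]
    return converter.convert()        # "fp32" keeps the original precision
```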
### _Accelerating Sparse Models_
**Inference:** Figure 3 demonstrates the importance of XNNPack sparse inference to improve the inference time of the resultant sparse models after pruning. Pruning provides remarkable improvements in model size, however without XNNPack sparse inference, the inference time of the pruned model is comparable to the original model. This highlights the importance of optimized neural network inference operators, and opens the door for future research that can focus on accelerating sparse models further [16, 17].
## VI Conclusion
This paper presented a comprehensive study on applying three common model compression techniques to CsiNet, a wireless communications deep learning model for massive MIMO CSI feedback. We demonstrated the effectiveness of these techniques in terms of model size, inference time, and accuracy, and analyzed the interactions when combining multiple compression techniques to obtain even more efficient networks that do not sacrifice performance. Furthermore, we show the importance of accelerating sparse models to achieve low inference times on commodity hardware. The results of this paper demonstrate the importance of adopting these techniques when developing deep learning wireless models to obtain tiny, efficient, and accurate models that can provide low-latency processing and consume limited memory and power. These methods are crucial to pave the way for practical adoption and deployments of deep learning-based techniques in commercial wireless systems.
|
2306.08191 | Solving Large-scale Spatial Problems with Convolutional Neural Networks | Over the past decade, deep learning research has been accelerated by
increasingly powerful hardware, which facilitated rapid growth in the model
complexity and the amount of data ingested. This is becoming unsustainable and
therefore refocusing on efficiency is necessary. In this paper, we employ
transfer learning to improve training efficiency for large-scale spatial
problems. We propose that a convolutional neural network (CNN) can be trained
on small windows of signals, but evaluated on arbitrarily large signals with
little to no performance degradation, and provide a theoretical bound on the
resulting generalization error. Our proof leverages shift-equivariance of CNNs,
a property that is underexploited in transfer learning. The theoretical results
are experimentally supported in the context of mobile infrastructure on demand
(MID). The proposed approach is able to tackle MID at large scales with
hundreds of agents, which was computationally intractable prior to this work. | Damian Owerko, Charilaos I. Kanatsoulis, Alejandro Ribeiro | 2023-06-14T01:24:42Z | http://arxiv.org/abs/2306.08191v2 | # Solving Large-scale Spatial Problems with Convolutional Neural Networks
###### Abstract
Over the past decade, deep learning research has been accelerated by increasingly powerful hardware, which facilitated rapid growth in the model complexity and the amount of data ingested. This is becoming unsustainable and therefore refocusing on efficiency is necessary. In this paper, we employ transfer learning to improve training efficiency for large-scale spatial problems. We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation, and provide a theoretical bound on the resulting generalization error. Our proof leverages shift-equivariance of CNNs, a property that is underexploited in transfer learning. The theoretical results are experimentally supported in the context of mobile infrastructure on demand (MID). The proposed approach is able to tackle MID at large scales with hundreds of agents, which was computationally intractable prior to this work.
convolutional neural networks, transfer learning, deep learning, stationary process
## I Introduction
Over the past decade, there has been a rapid advancement in machine learning (ML), particularly in deep learning, which has produced state-of-the-art results in a wide range of applications [1, 2, 3]. This progress has been fueled by increasingly powerful hardware [1, 2] that has enabled the processing of larger datasets [4] and the training of deep learning models with more parameters. Theoretical evidence [5, 6] and empirical evidence [7, 8] suggest that using overparametrized models and larger datasets benefits neural network training. Large language models, such as GPT-3, with 175 billion parameters trained on a dataset of approximately 374 billion words, represent a new extreme in this trend [9, 10, 11, 12]. However, the trend of increasing model complexity and dataset size is not sustainable in the long term due to diminishing returns on costs of computation and data acquisition [13, 14]. Moreover, some applications lack data availability, making this strategy impossible. Therefore, it is necessary to refocus on efficiency and explore more sustainable ML approaches.
Transfer learning [15, 16, 17, 18] is a powerful tool for efficient and sustainable ML. It refers to a set of methodologies to apply knowledge learned from a source domain to a different target domain. For example, in [19] the authors demonstrate that it is consistently beneficial to pre-train a convolutional neural network (CNN) on ImageNet before fine-tuning on medical images. In this case, transfer learning is especially beneficial because of the unavailability of large medical image datasets.
CNNs are one of the most popular deep learning architectures [2], especially for image classification [20]. Although initially used for image processing, they have proven useful for a wide variety of other signals such as text, audio, weather, ECG data, traffic data and many others [2, 21, 22]. Shift-equivariance is an interesting property of CNNs. When there are no dilations, any translation of the input to the CNN will also translate the output by the same amount. Previous works focus on leveraging this property to achieve translation invariant image classification [23, 24]. However, it is difficult to exploit shift-equivariance for small images with deep architectures [25, 26]. Nevertheless, our work shows that shift-equivariance is fundamental for efficient large-scale image-to-image regression tasks, as we explain below.
In this paper, we use CNNs and transfer learning to tackle large-scale spatial problems. In particular, we leverage the shift-equivariance property of CNNs to efficiently train when the input-output signals are jointly stationary. Our analysis uses stochastic process theory to provide a bound on the generalization error of CNNs. The derived bound implies that a CNN can be trained on small signal windows, yet evaluated on arbitrarily large windows with minimal performance loss. Following, our theoretical result, we propose to recast spatial problems as image-to-image prediction tasks and use CNNs to solve them on a large scale. The proposed framework is applied to mobile infrastructure on demand (MID) tasks [27]. Our experimental results showcase that transfer learning with CNNs can tackle MID at scales that were previously considered intractable. Our main contributions are summarized as follows.
1. Provide a bound on CNN generalization error after training on a small window and executing on arbitrarily large signals.
2. Propose how to reinterpret large-scale spatial problems as image-to-image tasks.
3. Demonstrate the proposed method by solving the MID problem at scale.
**Notation:** We denote a stochastic process as \(\{X(t)\}_{t}\) where
each \(X(t)\) is a random variable. Correspondingly, \(X\) denotes a random signal - a random element in a function space. Sets \(\mathcal{X}\) are denoted by calligraphic letters. Lowercase boldface symbols \(\mathbf{x}\) are vectors while uppercase ones \(\mathbf{X}\) are matrices with \(\mathbf{X}_{ij}\) being their elements. \(\mathbf{\Phi}\) is an exception that symbolizes an ML model by convention.
## II Learning to process stationary signals
Commonly in ML the task is to find a model \(\mathbf{\Phi}(\cdot)\) which minimizes the expected mean squared error (MSE) between an input \(X\) and a desired output \(Y\). Let \(\mathbf{\Phi}(X)(t)\) be the output of the model at time \(t\) given the signal \(X\).
\[\min_{\mathbf{\Phi}}\mathbb{E}\left[\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T /2}|\mathbf{\Phi}(X)(t)-Y(t)|^{2}dt\right] \tag{1}\]
(1) expresses this mean squared error minimization problem in general, where the input-output signals can be infinitely wide. In practice, solving this directly when the signals are very wide is typically computationally intractable.
Instead, we propose an approximate solution to (1) by considering small windows over the signals. To evaluate this analytically, we model \(X,\ Y\) as stationary random signals and assume the model \(\mathbf{\Phi}\) is a CNN. Note that if \(X\) is a random signal, then the model's output \(\mathbf{\Phi}(X)\) is another random signal. In Section III we provide a bound on the MSE in (1) when \(\mathbf{\Phi}\) is a CNN trained on small windows of the input-output signals. To facilitate this analysis we first define stochastic processes, stationarity, and CNNs.
### _Stationary Stochastic Processes_
A stochastic process [28, 29, 30] is a family of random variables \(\{X(t):\Omega\to S\}_{t\in\mathcal{I}}\) parametrized by an index set \(\mathcal{I}\). Each element in this family is a map from a sample space \(\Omega\) onto a measurable space \(S\). Equivalently, we can describe the process as a map \(X(t,\omega):\mathcal{I}\times\Omega\to S\) from the cartesian product of the index set and the sample space onto the measurable space. In this context, a random signal is the corresponding random function, \(X:=X(\cdot,\omega)\). For our purposes, these three representations are equivalent and we use them interchangeably.
We limit our discussion to real continuous stationary stochastic processes. In particular, consider two stochastic processes \(X:\mathbb{R}\times\Omega\to\mathbb{R}\) and \(Y:\mathbb{R}\times\Omega\to\mathbb{R}\). We assume that \(X\) and \(Y\) are jointly stationary, following definition 1. Simply put, two continuous random signals \(X\) and \(Y\) are jointly stationary if and only if all their finite joint distributions are shift-invariant.
**Definition 1**.: _Consider two real continuous stochastic processes \(\{X(t)\}_{t\in\mathbb{R}}\) and \(\{Y(t)\}_{t\in\mathbb{R}}\). The two processes are jointly stationary if and only if they satisfy (2) for any shift \(\tau\in\mathbb{R}\), non-negative \(n,m\in\mathbb{N}_{0}\), indices \(t_{i},s_{i}\in\mathbb{R}\), and Borel sets of the real line \(A_{i},B_{i}\in\mathcal{B}\)._
\[\begin{split}& P(X(t_{1})\in A_{1},...,X(t_{n})\in A_{n},...,Y(s_{m}) \in B_{m})\\ &=P(...,X(t_{n}+\tau)\in A_{n},...,Y(s_{m}+\tau)\in B_{m})\end{split} \tag{2}\]
Following (1), we interpret \(X\) as a quantity that we can observe and \(Y\) as something that we want to estimate. Numerous signals of interest, such as financial [31], weather [32], and multi-agent systems [33] data, can be represented by stationary or quasi-stationary processes. Therefore, it is clear that characterizing the performance of ML models for stationary signals is of paramount interest. In particular, we consider CNNs, which themselves exhibit the same translational symmetries as stationary signals. Such analysis will allow us to further demystify their performance and provide guidelines on how to efficiently train CNNs.
### _Convolutional Neural Networks_
CNNs are powerful architectures and oftentimes the tool of choice to approach the problem in (1). The convolution operation is the cornerstone of the CNN architecture. Convolutions are shift-equivariant, which allows them to exploit stationarity in the signals. In its simplest form, a CNN is a cascade of layers, where the output of the \(l^{\text{th}}\) layer can be described by (3).
\[x_{l}(t)=\sigma\left(\int h_{l}(s)x_{l-1}(t-s)ds\right) \tag{3}\]
The output \(x_{l}\) at the \(l^{\text{th}}\) layer is obtained by convolving the input \(x_{l-1}\) with a filter \(h_{l}\) and applying a pointwise nonlinearity \(\sigma_{l}\). Each filter is a continuous real function, and we assume that it has finite support.
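For concreteness, a discrete analogue of the cascade in (3) can be written in a few lines of PyTorch; the channel count and filter width below are arbitrary illustrative choices, and ReLU satisfies the normalized-Lipschitz condition assumed later.

```python
import torch.nn as nn

# Discrete analogue of Eq. (3): a cascade of 1-D convolutions, each a width-K
# filter followed by a pointwise nonlinearity. Channel counts are illustrative.
def make_cnn(num_layers=3, kernel_size=5, channels=16):
    layers, in_ch = [], 1
    for _ in range(num_layers):
        layers += [nn.Conv1d(in_ch, channels, kernel_size, padding="same"), nn.ReLU()]
        in_ch = channels
    layers += [nn.Conv1d(in_ch, 1, 1)]  # project back to a single output signal
    return nn.Sequential(*layers)
```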
Let \(\mathbf{\Phi}(X;\mathcal{H})(t):=x_{L}(t)\) be the output of a CNN with \(L\) layers, input \(x_{0}(t):=X(t)\), and a set of filters \(\mathcal{H}=\{h_{1},...,h_{L}\}\) as described by (3). Thus, the problem in (1) can be reformulated as finding a set of filters \(\mathcal{H}\) that minimizes \(\mathcal{L}_{\infty}(\mathcal{H})\), the expected average squared error.
\[\mathcal{L}_{\infty}(\mathcal{H})=\mathbb{E}\left[\lim_{T\to\infty}\frac{1}{T} \int_{-T/2}^{T/2}|\mathbf{\Phi}(X;\mathcal{H})(t)-Y(t)|^{2}dt\right] \tag{4}\]
Evaluating (4) is challenging since it requires data processing over all real numbers. Although real-world signals typically have finite support, in many applications the signals are too wide to realistically collect data or evaluate (4). For example, in Section V, we consider an optimization problem involving a large multi-agent system.
## III Training on a window
To overcome the aforementioned challenges we propose a transfer learning approach, where we train over small windows and execute the trained network in larger settings. Instead of minimizing (4) directly, we can solve a simpler problem where we window the input-output signals \(X,\ Y\). In this section, we summarize our theoretical result. It provides an upper bound for \(\mathcal{L}_{\infty}\) as described by (4) in terms of the cost \(\mathcal{L}_{\sqcap}(\mathcal{H})\) associated with (5), which is easier to evaluate.
To characterize the behavior of CNNs in this context, we consider windows over the input and output signals of width \(A\) and \(B\), respectively; we restrict our attention to the case when \(A\geq B\). Denote a square pulse of width \(s\in\mathbb{R}_{+}\) by \(\sqcap_{s}\)
so that \(\sqcap_{\mathrm{s}}(t)=1\) whenever \(t\in[-s/2,s/2]\). (5) defines a new cost function.
\[\mathcal{L}_{\sqcap}(\mathcal{H})=\frac{1}{B}\mathbb{E}\left[\int_{-B/2}^{B/2}| \mathbf{\Phi}(\sqcap_{A}X;\mathcal{H})(t)-Y(t)|^{2}dt\right] \tag{5}\]
Unlike (4), this can be readily computed since the signals involved are time-limited, especially when \(A,B\) are small. Therefore, optimizing the set of parameters \(\mathcal{H}\) with respect to \(\mathcal{L}_{\sqcap}\) is possible in practice. Our theoretical result quantifies the performance degradation when training on a small window but evaluating on the whole signal. In particular, let \(\tilde{\mathcal{H}}\) be a candidate obtained by minimizing (5). Given the following assumptions, we show that \(\tilde{\mathcal{H}}\) is an almost equally good solution to (4).
**Assumption 1**.: _The random signals \(X\) and \(Y\) are jointly stationary following definition 1._
This is the main assumption of our analysis. As mentioned earlier, we analyze the performance of CNNs since various data in high-impact domains are exactly or approximately stationary, and such an analysis will also allow us to demystify the 'black-box' architecture of CNNs. The remaining assumptions are very mild and also typical for model analysis.
**Assumption 2**.: _The stochastic processes \(X\) and \(Y\) are bounded so that \(|X(t)|<\infty\) and \(|Y(t)|<\infty\) for all \(t\)._
Boundedness of the signals is a sufficient condition for the existence and finiteness of some limits needed for the proof. However, the magnitude of the bound does not matter for the final result.
**Assumption 3**.: _The filters \(h_{l}\in\mathcal{H}\) are continuous with a finite width \(K\). That is, \(h_{l}(t)=0\) for all \(t\notin[-K/2,K/2]\)._
**Assumption 4**.: _The filters have finite L1 norms, \(||h_{l}||_{1}<\infty\) for all \(h_{l}\in\mathcal{H}\)._
Assumptions 3 and 4 are going to be satisfied in a typical implementation of a CNN which represents the filters as vectors - or tensors when the signals are multi-dimensional.
**Assumption 5**.: _The nonlinearities \(\sigma_{l}(\cdot)\) are normalized Lipschitz continuous (the Lipschitz constant is equal to 1)._
Note that the majority of pointwise nonlinear functions used in deep learning, e.g., ReLU, Leaky ReLU, and the hyperbolic tangent, are normalized Lipschitz for numerical stability.
Theorem 1 quantifies the difference between the mean squared error of a CNN trained on the windowed problem, according to (5), and executed on larger settings, according to (4).
**Theorem 1**.: _Let \(\tilde{\mathcal{H}}\) be a set of filters which achieves a cost of \(\mathcal{L}_{\sqcap}(\tilde{\mathcal{H}})\) on the windowed problem as defined by (5) with an input window of width \(A\) and an output window width \(B\). Then the associated cost \(\mathcal{L}_{\infty}(\tilde{\mathcal{H}})\) on the original problem as defined by (4) is bounded by the following._
\[\mathcal{L}_{\infty}(\tilde{\mathcal{H}})\leq\mathcal{L}_{\sqcap}(\tilde{ \mathcal{H}})+H\frac{B+LK-A}{B}\text{var}(X) \tag{6}\]
_In (6), \(H=\prod_{l}^{L}||h_{l}||_{1}\) is the product of the L1 norms of all the CNN's filters, \(L\) is the number of layers, and \(K\) is the width of the filters._
The proof of Theorem 1 is deferred to the journal version due to space limitations. Theorem 1 leverages the shift-equivariance property of CNNs to justify that CNNs are the suitable architecture for transfer learning. Since shift-equivariance is equivalent to joint stationarity of random signals, CNNs are able to exploit hidden regularities and translational symmetries in signals. In a nutshell, Theorem 1 shows that the difference between the squared error of a CNN trained on a small setting and executed on a larger setting is bounded by a quantity that is affected by the variance of the input signal, number of layers, filter widths, and the size of the input and output windows. When \(A=B+LK\), (6) reduces to \(\mathcal{L}_{\infty}(\tilde{\mathcal{H}})\leq\mathcal{L}_{\sqcap}(\tilde{ \mathcal{H}})\). This special case occurs because for \(A=B+LK\) there is no zero-padding at the intermediate CNN layers, eliminating border effects.
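To build intuition for the bound, the small helper below evaluates the right-hand side of (6) for user-supplied filter norms, window sizes, and input variance; it is a plain numerical check, not part of the proof.

```python
import math

def mse_bound(window_loss, filter_l1_norms, K, A, B, var_x):
    """Right-hand side of Eq. (6).

    window_loss: cost achieved on the windowed problem, Eq. (5)
    filter_l1_norms: list of ||h_l||_1, one entry per layer (so L = len(list))
    """
    H = math.prod(filter_l1_norms)
    L = len(filter_l1_norms)
    return window_loss + H * (B + L * K - A) / B * var_x

# The penalty term vanishes in the special case A = B + L*K (no border effects):
print(mse_bound(0.01, [1.0, 1.0, 1.0], K=5, A=115, B=100, var_x=2.0))  # -> 0.01
```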
## IV Representing spatial problems as images
In this section, we explain how we represent spatial problems as images. Consider a machine learning task where the input is a set of positions \(\mathcal{X}=\{\mathbf{x}_{n}\mid\mathbf{x}_{n}\in\mathbb{R}^{2}\}\) and the output is another set of positions \(\mathcal{Y}=\{\mathbf{y}_{n}\mid\mathbf{y}_{n}\in\mathbb{R}^{2}\}\). Our goal here is to represent these sets by images \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{N\times N}\). After doing so, we can learn a mapping from \(\mathbf{X}\) to \(\mathbf{Y}\) using a CNN and take advantage of Theorem 1.
We represent \(\mathcal{X}\) by a superposition of Gaussian pulses with variance \(\sigma_{x}^{2}\). There is one pulse for each position \(\mathbf{x}_{i}\in\mathcal{X}\). (7) defines a real function that maps a position \(\mathbf{x}\in\mathbb{R}^{2}\) to a real value, given the set of positions \(\mathcal{X}\).
\[X(\mathbf{x};\mathcal{X}):=\sum_{i=1}^{|\mathcal{X}|}(2\pi\sigma_{x}^{2})^{-1} \exp(-\frac{1}{2\sigma_{x}^{2}}||\mathbf{x}-\mathbf{x}_{i}||_{2}^{2}) \tag{7}\]
Next, we apply a square window of width \(A\) to \(X(\cdot;\mathcal{X})\) and sample with a spatial resolution \(\rho\) to obtain the matrix \(\mathbf{X}\).
\[\mathbf{X}_{ij}=X(\left[\rho i,\rho j\right];\mathcal{X}) \tag{8}\]
(8) describes how the \(i,j\)th pixel of \(\mathbf{X}\) is sampled for \(i,j\in\{1,...,N\}\). The image width is rounded down to \(N=\lfloor\rho A\rfloor\). The images in the top row of Figure 1 are example input images \(\mathbf{X}\) at different scales. Similarly, we can construct an image \(\mathbf{Y}\) with a width of \(B\) meters from the set \(\mathcal{Y}\).
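A minimal rasterization routine following (7)-(8) is sketched below; we interpret \(\rho\) as meters per pixel when constructing the sampling grid, so the image width is roughly \(A/\rho\), and the grid-construction details are our own assumptions.

```python
import numpy as np

def positions_to_image(points, A=320.0, rho=1.25, sigma=6.4):
    """Render 2-D positions as a superposition of Gaussian pulses, Eq. (7)-(8).

    points: array of shape (M, 2) in meters; rho: meters per pixel; A: window width.
    """
    N = int(A / rho)                              # pixels per side
    coords = (np.arange(N) + 0.5) * rho - A / 2   # pixel centers in meters
    xx, yy = np.meshgrid(coords, coords, indexing="ij")
    img = np.zeros((N, N))
    for px, py in points:                         # one Gaussian pulse per position
        img += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma**2))
    return img / (2 * np.pi * sigma**2)           # normalization from Eq. (7)
```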
## V Experimental results
In this section, we test the performance of the proposed framework for mobile infrastructure on demand (MID). The primary goal in MID is to find the positions of a team of _communication agents_ that maximize wireless connectivity between a team of _task agents_. The work in [27] formulated a convex optimization problem to find the optimal positions of the communication agents. However, this approach becomes computationally intractable as the number of agents grows. Instead, [34] proposed a data-driven approach, where a CNN is trained to imitate a convex optimization solution.
The authors of [34] represented the task and communication agent positions as images, following a procedure similar to Section IV. Hence, they were able to train a CNN with a fully convolutional encoder-decoder architecture on examples with two to six task agents uniformly distributed within a 320m window. The final model provided close-to-optimal configurations of communication agents. However, larger windows and task teams were never considered, which is what we focus on in this section.
Motivated by Theorem 1, we utilize a CNN to perform MID in large-scale settings. Specifically, we present our experimental results on zero-shot performance of the model proposed in [34] on MID tasks with up to 125 task agents and 562 communication agents, covering windows of up to 1600m.
We consider varying window widths of \(A=\) 320, 640, 960, 1280, and 1600 meters with \(|\mathcal{X}|=\) 5, 20, 45, 80, and 125 task agents respectively. The number of task agents is proportional to the window area. We use the trained model provided by [34] from the associated GitHub repository, which was trained using a window size of \(A=\)320m. We perform no additional training or fine-tuning since we are only interested in evaluating the zero-shot performance.
At each window width \(A\) we evaluate the performance for 100 different random task agent configurations. The positions of the task agents are sampled uniformly within the square window to obtain a set \(\mathcal{X}\). Using the procedure outlined in Section IV we represent the set of positions as an image \(\mathbf{X}\in\mathbb{R}^{N\times N}\) where \(N=\lfloor\rho A\rfloor\). We use a spatial resolution of \(\rho=1.25\) meters per pixel and Gaussian kernel standard deviation \(\sigma_{x}=6.4\). The corresponding output of the CNN \(\mathbf{\Phi(X;\mathcal{H})}\in\mathbb{R}^{N\times N}\) is another image of the same size that represents the estimated optimal positions of the communication agents. Examples of the resulting CNN output images are shown in Figure 1. Qualitatively, the proposed transfer learning approach shows excellent performance in positioning the communication agents.
To quantify the performance of the proposed approach, we extract the set of CNN-estimated positions of the communication agents \(\hat{\mathcal{Y}}\) using Lloyd's algorithm as described by [34]. Then, we use Equation (9) to calculate the power required to maintain a minimum communication rate between any two agents, assuming a path loss channel model [35]. If two agents are separated by \(d\) meters, (9) relates the power \(P(d)\) in milliwatts required for them to directly communicate at an expected communication rate \(R\). The rate is normalized to be a fraction of the total bitrate across the channel.
\[P(d)=[\text{erf}^{-1}(R)]^{2}\frac{P_{N_{0}}d^{n}}{K} \tag{9}\]
In (9), \(P_{N_{0}}\) is the noise power level in milliwatts, \(K\) is a constant based on the physics of the receivers, and \(n\) is known as the path loss exponent. In our simulations we use \(R=0.5\), \(P_{N_{0}}=1\times 10^{-7}\), \(K=5\times 10^{-6}\) and \(n=2.52\). Notice that \(P(d)\) is defined for every pair of agents and therefore defines a fully connected weighted graph between all agents. Consider the minimum spanning tree of this graph. The average edge weight of the tree is the per-edge power needed for communication.
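The per-edge power metric can be reproduced with a short SciPy sketch: evaluate (9) on all agent pairs, extract the minimum spanning tree, and average its edge weights. The `positions` array layout is an assumption.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from scipy.special import erfinv

def per_edge_power(positions, R=0.5, P_N0=1e-7, K=5e-6, n=2.52):
    """Average per-edge power (Eq. (9)) over the minimum spanning tree of all agents."""
    d = squareform(pdist(positions))               # pairwise distances in meters
    power = erfinv(R) ** 2 * P_N0 * d ** n / K     # fully connected weighted graph
    mst = minimum_spanning_tree(power)             # SciPy returns a sparse matrix
    return mst.sum() / (len(positions) - 1)        # mean weight over the M-1 tree edges
```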
At each window width \(A\) we compute the power needed for each sampled task agent configuration and the corresponding CNN prediction. As Table I shows, there is a modest increase of 10.24% in the power needed from \(A=320\) to \(A=640\) meters, but the power needed does not increase further after that. In fact, there is only an 8.61% increase in power from \(A=320\) to \(A=1280\) meters. This suggests that further increases in the scale of the task will not increase the needed power further.
Fig. 1: Example inputs and outputs of the CNN for the MID task at different window widths \(A=320,640,960,1280,1600\) but with a constant spatial resolution of \(\rho=1.25\) meters per pixel. The top row images represent the positions of the task agents. The bottom row images represent the CNN-estimated optimal positions of the communication agents. Additionally, the task agent positions are marked in red on the bottom images.
Looking at Figure 2, the variance in the average power per edge decreases with the width. Partially, this is a consequence of the Central Limit Theorem, because in Figure 2 we plot the distribution of an average over the edges in the minimum spanning tree. However, this is only part of the story, because the variance decreases faster than the square root of the number of edges. We think this is because border effects due to padding in the CNN have less impact at higher scales.
To summarize, the CNN gives us a good solution to the MID problem with time complexity linear in the covered area. In contrast, the convex optimization approach has polynomial complexity [34] in the number of agents, which is proportional to the area; this makes it computationally intractable for a large number of agents. Finally, the high performance of the CNN on the task is aligned with our derived theoretical bound.
## VI Conclusions
This paper explores the use of CNNs to solve large-scale spatial problems, which are too large to solve directly or to obtain training data for. We present a novel theoretical result that expands our understanding of the link between shift-equivariance and CNN performance. Motivated by these results, we propose an approach to efficiently train a CNN to solve such tasks. Specifically, a CNN can be trained on a small window of the signal and deployed on arbitrarily large windows without a large loss in performance. We provide conditions that theoretically guarantee no loss of performance when using this method. To demonstrate this approach experimentally, we recast the MID problem as an image-to-image prediction task. In fact, the proposed approach is able to solve the MID problem for hundreds of agents, which was previously computationally intractable.
|
2305.18822 | Magnetic field regression using artificial neural networks for cold atom
experiments | Accurately measuring magnetic fields is essential for magnetic-field
sensitive experiments in fields like atomic, molecular, and optical physics,
condensed matter experiments, and other areas. However, since many experiments
are conducted in an isolated vacuum environment that is inaccessible to
experimentalists, it can be challenging to accurately determine the magnetic
field. Here, we propose an efficient method for detecting magnetic fields with
the assistance of an artificial neural network (NN). Instead of measuring the
magnetic field directly at the desired location, we detect magnetic fields at
several surrounding positions, and a trained NN can accurately predict the
magnetic field at the target location. After training, we achieve a relative
error of magnetic field magnitude (magnitude of error over the magnitude of
magnetic field) below 0.3$\%$, and we successfully apply this method to our
erbium quantum gas apparatus. This approach significantly simplifies the
process of determining magnetic fields in isolated vacuum environments and can
be applied to various research fields across a wide range of magnetic field
magnitudes. | Ziting Chen, Kin To Wong, Bojeong Seo, Mingchen Huang, Mithilesh K. Parit, Haoting Zhen, Jensen Li, Gyu-Boong Jo | 2023-05-30T08:10:56Z | http://arxiv.org/abs/2305.18822v1 | # Magnetic field regression using artificial neural networks for cold atom experiments
###### Abstract
Accurately measuring magnetic fields is essential for magnetic-field sensitive experiments in fields like atomic, molecular, and optical physics, condensed matter experiments, and other areas. However, since many experiments are conducted in an isolated vacuum environment that is inaccessible to experimentalists, it can be challenging to accurately determine the magnetic field. Here, we propose an efficient method for detecting magnetic fields with the assistance of an artificial neural network (NN). Instead of measuring the magnetic field directly at the desired location, we detect magnetic fields at several surrounding positions, and a trained NN can accurately predict the magnetic field at the target location. After training, we achieve a relative error of magnetic field magnitude (magnitude of error over the magnitude of magnetic field) below 0.3%, and we successfully apply this method to our erbium quantum gas apparatus. This approach significantly simplifies the process of determining magnetic fields in isolated vacuum environments and can be applied to various research fields across a wide range of magnetic field magnitudes.
## Introduction
Precisely calibrating the magnetic field inside a vacuum chamber is of experimental significance to various fields. For instance, a magnetic field is one of the common control knobs in ultracold atom experiments, enabling various studies on many-body physics [1] including the BEC-BCS crossover [2], the formation of matter-wave solitons [3; 4; 5], and Efimov states [6] through Feshbach resonances that tune inter-atomic interactions [7]. However, the demanding magnetic field precision imposed by these experimental controls and the inaccessibility of the vacuum chamber make calibration of the magnetic field a difficult and time-consuming task. Moreover, spectroscopic measurement, a typical approach to magnetic field calibration, is mainly sensitive only to the magnitude of the magnetic field. The magnetic field direction and its precise calibration are of critical importance for magnetic atoms (e.g., erbium or dysprosium), where the orientation of the magnetic dipole moment plays a critical role [8; 9].
Recent years have witnessed the great success of neural networks (NN) applied to assist experiments, including multi-parameter optimization of magneto-optical traps [10; 11], optimization of the production of Bose-Einstein condensates [12; 13; 14; 15], and recognition of hidden phases from experimental data [16; 17; 18; 19]. Here, we introduce a novel method to precisely determine the magnetic field vector \(B=(B_{x},B_{y},B_{z})\) inside a vacuum chamber with the assistance of a NN. Since the target position inside the vacuum chamber is typically inaccessible, we detect magnetic fields at several surrounding positions, which are sent to the trained NN that is able to accurately deduce the magnetic field inside the vacuum chamber. We apply this method to our erbium quantum gas apparatus [20; 21; 22]; erbium has a large magnetic dipole moment, making it particularly sensitive to the magnetic field vector [23; 24]. We present the details of the NN-based method, including setting up the simulation model, the training process, and the final performance. For simplicity, magnetic field data for training and validation of the NN are generated by a standard finite-element simulation package, the COMSOL Multiphysics electromagnetic module [25], instead of experimental measurement. Moreover, we systematically investigate the impact of the number of sensors at surrounding positions and the magnitude of the magnetic field on the performance of the method, providing a practical guide for implementation. In contrast to previous works [26; 27; 28; 29; 30], which predict the magnetic field vector across a wide experimental region, our goal in this work is to extrapolate the magnetic field vector at a specific position within an inaccessible region. Our approach provides a simple method for monitoring magnetic fields without requiring any prior knowledge of the solution of Maxwell's equations.
## Methodology
The implemented machine learning algorithm is an artificial NN that can be coded with common Python packages such as TensorFlow and PyTorch [31; 32]. The magnetic field measured by sensors outside a vacuum chamber is fed into the NN, which predicts the magnetic field at the center of the chamber. Hence, the function of the NN is to act as a hyperplane that relates the magnetic field at different spatial points. Regressing such a hyperplane would require a substantial amount of data. Obtaining training data from a real experimental setup, though it can take into account every factor, is difficult and time-consuming. Instead, we generate training data from finite-element simulation using COMSOL Multiphysics. As long as the simulation model takes into account the important factors, simulation data is a reliable substitute for real data.
The simulation model is shown in Fig. 1, which contains a science chamber commonly used in cold atom experiments.
Originally, the chamber is made of materials including 314L, 304 stainless steel, aluminum, glass, and plastic. However, to reduce computational time, non-magnetic parts of the chamber are removed as they do not affect the result, and only parts made of 314L and 304 stainless steel are kept. 314L and 304 stainless steel, under low strain and irradiation, can be considered linear materials with no hysteresis loop [33]. In addition, only a static magnetic field is of concern for most experiments. Therefore, we only need to simulate the magnetic field under time-independent conditions. In order to generate a stable magnetic field, three pairs of copper coils with constant current are placed around the chamber along three orthogonal directions, mimicking the coils used in experiments. To account for the Earth's magnetic field \(B_{Earth}\) in the laboratory, a constant 400 mG magnetic field is added in the x-direction. Note that, when adopting this method, it is important to check the direction and magnitude of \(B_{Earth}\) before training the NN, since they can vary at different locations.
Even though simulation data is much easier to obtain than real data, the simulation of each datum still requires a significant amount of time. To reduce data acquisition time, we exploit the linearity of Maxwell's equations: for linear materials, any superposition of solutions is also a valid solution. By virtue of this, the whole output space can be mapped using only three linearly independent results, which can be obtained by simulating each pair of coils in the model. Using these results, we obtain new data that is not redundant and form a large data set.
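Concretely, if the three independent COMSOL solutions are stored as arrays \(B_1,B_2,B_3\) (one row per evaluation point), new samples can be drawn by random superposition plus the constant Earth field; the coefficient range below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
B_EARTH = np.array([0.4, 0.0, 0.0])  # 400 mG along x, as in the model (units of G)

def sample_field(B1, B2, B3):
    """Superpose the three independent coil-pair solutions with random currents.

    B1, B2, B3: arrays of shape (num_points, 3), one row per sensor/target point.
    """
    c = rng.uniform(-1.0, 1.0, size=3)  # random coil currents, illustrative range
    return c[0] * B1 + c[1] * B2 + c[2] * B3 + B_EARTH
```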
As shown in Fig. 2, after setting up the simulation model and obtaining the simulation data, we input the data into the artificial NN for training and prediction testing. The implemented NN contains two fully-connected hidden layers with ReLU as activation functions. During training, the parameters of the NN's neurons are adjusted to minimize the root-mean-square error (RMSE) loss function using the Adam optimizer [18]. This is a common loss function for regression and can be defined as \(\mathbf{RMSE}=\sqrt{\frac{1}{3n}\sum_{i}^{n}\sum_{\alpha}^{x,y,z}(y_{i}^{\alpha}-\hat{y}_{i}^{\alpha})^{2}}\), where \(n\) is the number of data, \(\hat{y}\) is the actual output, \(y\) is the predicted output, and \(\alpha\) denotes the component of the magnetic field data. The total number of simulation data is 1.5\(\times\)10\({}^{5}\), 80% of which is used for training while 20% is reserved for validation. When the validation loss does not improve for several epochs, training is terminated to prevent overfitting. It should be noted, however, that the RMSE does not reflect the error statistics of individual measurements.
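A minimal PyTorch rendition of this setup might look as follows; the hidden-layer widths, learning rate, and early-stopping patience are our own assumptions since the text does not specify them, and the tensors `x_train`, `y_train`, `x_val`, `y_val` are assumed to be prepared from the simulation data.

```python
import torch
import torch.nn as nn

# 6 sensors x 3 components in; 3 field components at the chamber center out.
model = nn.Sequential(nn.Linear(18, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def rmse(pred, target):
    return torch.sqrt(((pred - target) ** 2).mean())

best, stall = float("inf"), 0
for epoch in range(10_000):
    optimizer.zero_grad()
    loss = rmse(model(x_train), y_train)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        val = rmse(model(x_val), y_val).item()
    best, stall = (val, 0) if val < best else (best, stall + 1)
    if stall > 20:  # validation loss stalled for several epochs
        break
```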
Figure 1: **Schematic of the simulation model.** (a) Overview of the model. The main body of the model is an experimental chamber in an ultra-high vacuum environment. The control magnetic field \(B_{control}\) is generated by three orthogonal pairs of copper coils. Several magnetic field sensors surrounding the vacuum chamber are indicated in light yellow. (b) Top view of the model. Exemplary magnetic fields (blue arrows) generated by the coils are shown. A red arrow is also added to show the Earth's magnetic field in the model, which is set along the x direction.
To properly evaluate the performance of the NN-based magnetic field regression, we define the relative prediction error as the norm of the vector difference between the actual output and the prediction divided by the magnitude of the actual output:
\[\text{Relative prediction error}=\frac{||y-\hat{y}||}{||\hat{y}||} \tag{1}\]
This value is evaluated on another data set of size \(10^{5}\). We calculate the error for each datum to form a set of errors and extract the 90th-percentile upper bound of the relative error, below which 90% of the data points fall.
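This evaluation reduces to a few NumPy lines; `predictions` and `targets` below are placeholder arrays of shape \((10^{5},3)\).

```python
import numpy as np

def relative_error(pred, actual):
    """Eq. (1): vector error norm over the magnitude of the actual field."""
    return np.linalg.norm(pred - actual, axis=1) / np.linalg.norm(actual, axis=1)

errors = relative_error(predictions, targets)  # one error per test sample
bound_90 = np.percentile(errors, 90)           # 90% of samples fall below this value
```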
### Interpretation of the training process
To fully evaluate the NN's performance, it is essential to first grasp the purpose of training the NN or specifically, the target equation of the regression. In theory, the hyperplane relating magnetic field at different spatial points can be calculated using Maxwell's equations directly. By substituting the spatial coordinates of the target point into the general solution, we remove its spatial degree of freedom, causing the output to depend only on the integration constants that are determined by the boundary conditions. However, the boundary conditions can just be the magnetic field at other spatial points. Hence, in principle, we can extract a hyperplane from the solution that serves the same function as the NN. Since this hyperplane is entirely based on Maxwell's equations, its output must always be correct. Therefore, the goal of training the NN is to reduce the gap between the NN and this ideal hyperplane.
To gain a better understanding of the relation between Maxwell's equations and the NN, we look at how the error depends on the number of sensors. As shown in Fig. 3, the error drastically reduces when the number of sensors increases to 6 and has a similar value onward. This dramatic reduction can be explained by the number of boundary conditions required to identify the hyperplane from directly calculating Maxwell's equations. \(\nabla^{2}\mathbf{B}-\frac{1}{c^{2}}\frac{\partial^{2}\mathbf{B}}{\partial t^{2}}=-\mu\nabla\times\mathbf{J}\) shows the form of the time-dependent Maxwell's equations that depends only on the magnetic field. Through this, we can count the number of integration constants, and thus the boundary conditions, required to get a unique solution by multiplying together the number of independent variables, the order of the differential equation, and the number of components. For Maxwell's equations, the obtained value is 24, but since we are only concerned with time-independent cases, the required number reduces to 18. Since the boundary conditions are provided by the magnetic field measured at different spatial points by sensors, the minimum number of sensors, each measuring three components, needed to obtain a unique and accurate result is 6. This, together with the claim that the NN approximates the ideal hyperplane calculated from Maxwell's equations, explains the rapid reduction of error in Fig. 3. When fewer than 6 sensors are used, the NN does not have enough information for the ideal hyperplane to calculate a unique result. Therefore, the trend in Fig. 3 indicates a strong relation between the NN and Maxwell's equations, strengthening the claim that training the NN is equivalent to bringing the NN closer to the ideal hyperplane calculated from Maxwell's equations. However, it is crucial to note that the NN can only approximate the ideal hyperplane but never reach it, since their mathematical forms are fundamentally different. Hence, errors always exist in the network's predictions. Besides, the quality of the approximation greatly depends on and is limited by the training conditions and data. This aspect of the network will be clearly shown in the following results.
Figure 2: **Pipeline figure for the overall procedure.** First, a simulation model, which contains a science chamber and coils as magnetic field sources, is set up. Then, the model is simulated to obtain data that is passed into an artificial NN for training. The NN has four layers: the input layer, which is the magnetic field measured by sensors outside the chamber; two fully connected hidden layers with ReLU as activation functions; and the output layer, which is the target magnetic field at the chamber center. Finally, the prediction of the NN is evaluated.
### NN's performance over a range of magnetic field
In this study, we evaluate the performance of the neural network under various magnetic fields created by the coils \(B_{control}\), ranging from 0.6 G to 100 G. These values cover the typical range of magnetic fields used in experiments involving ultracold dipolar atoms [21]. This can be achieved by generating multiple prediction data sets with different average magnitudes of output. Before commencing the test, it is crucial to consider the average magnitude of the output of the training data since it could significantly affect the training process that determines the network's response to different magnetic fields.
Fig. 4(a) demonstrates the NN's accuracy in predicting the magnetic field at the center of the chamber under different magnetic field conditions. When the NN is trained with a weak magnetic field strength (\(B_{control}\)) of 1-5 G, the error is minimized within that range and can be reduced to as low as approximately 0.2%. However, when the NN is trained with a strong magnetic field strength of 10-50 G, the error is minimized for \(B_{control}\) on the order of 10 G. In addition, when \(B_{control}\) is outside the specified range, the error quickly surges to above 10%. This issue could be avoided if the magnetic field were completely rescalable. Unfortunately, this is not possible due to the presence of the constant Earth magnetic field (\(B_{Earth}\)). As \(B_{control}\) changes, so does \(B_{Earth}/B_{control}\), and these variations can be drastic, particularly when \(B_{control}\) changes by an entire order of magnitude. If the NN is trained with a weak magnetic field strength, the network may calculate the bias based on the \(B_{Earth}/B_{control}\) of a weaker field even when \(B_{control}\) is much stronger, leading to a higher error. This also explains why the applicable range of \(B_{control}\) for a NN trained at weaker \(B_{control}\) is narrower, as \(B_{Earth}/B_{control}\) varies much faster at weaker magnetic field strengths.
To expand the applicable range of the neural network, one option is to train it on a larger range of \(B_{control}\). However, this method has been proven to be ineffective, as shown in Fig. 4(a). The figure demonstrates that the errors between the neural network trained with a large range of \(B_{control}\) and the one trained with strong \(B_{control}\) are very similar. This indicates that the neural network simply ignores weaker \(B_{control}\). The reason for this lies in the use of the RMSE loss function in absolute value. Larger \(B_{control}\) usually produces larger absolute errors, causing the network's parameters to be tuned in a way that provides more accurate results at larger \(B_{control}\) over weaker ones.
Here we suggest two viable ways to improve the working range of the NN. First, the NN can simply be trained without \(B_{Earth}\) in the data, by adjusting the sensor offsets until the reading is zero or by compensating \(B_{Earth}\) in the laboratory. Both methods involve removing the effect of \(B_{Earth}\) during the training process. In Fig. 4(b), the error of the network's prediction can then be kept below 0.4% over the whole range when there is no \(B_{Earth}\). Therefore, this method is very effective in principle but requires good control over the sensors.
The second method is to compensate \(B_{Earth}\) using coils in the laboratory to attenuate its effect. Practically, it is difficult to completely eliminate \(B_{Earth}\), but it is possible to compensate 90% of it. Therefore, we evaluate the effectiveness of this method by training the NN with a small residual bias of 40 mG, as displayed in Fig. 4(b). The error is below 0.7% over the whole range and below 0.3% from 20 G to 100 G, indicating that good precision and a wide working range can be achieved simultaneously with this method.
Nonetheless, even when both of the above methods are not applicable, the NN is still useful and powerful. As long as the approximate value of the magnetic field is known, the properly trained NN can always be picked to calculate the magnetic field with an extremely low error. When the order of magnitude of the magnetic field is unknown, however, we can still use an NN trained on the wrong range to recover the order of magnitude: even though the error in this case is over an order of magnitude larger, the prediction still reveals the correct order. A more accurate result can then be calculated using the properly trained NN.
Figure 3: **Relative error of magnetic field.** Relative prediction error of the magnetic field versus different numbers of sensors, with the vertical axis in logarithmic scale. The error plummets when the number of sensors increases from 1 to 6. Afterward, the error stays in the same order of magnitude and varies non-systematically.
### Application to cold atoms experiments
The performance of the magnetic field regression demonstrated in this work opens the new possibility of applying this method to experiments with cold atoms. One such system is an apparatus of dipolar erbium atoms, which have a large magnetic moment of \(7\mu_{B}\), where \(\mu_{B}\) is the Bohr magneton, and contain a dense spectrum of Feshbach resonances [24]. Even in the low magnetic field regime, erbium atoms have multiple Feshbach resonances for magnetic fields smaller than 3 G, where the widths of these resonances are around 10-200 mG [23]. The regression method could allow us to monitor the magnetic field vector with a resolution of \(\sim\)10 mG in the range of 3 G, which is accurate enough to properly calibrate the experimental system. This is still favorable for other atomic species, such as alkali atoms, since the maximal magnetic field error is about 1.25 G over a 500 G range, which is sensitive enough compared to the widths of most commonly used Feshbach resonances [7]. However, even though the regression method can provide useful results, we suggest that its accuracy should be verified with another method, such as radio-frequency spectroscopy.
Apart from precisely determining the magnetic field inside a vacuum chamber, the proposed method also serves as a quick indicator of magnetic field changes. For instance, when sensors around the vacuum chamber give unexpected values, it indicates a change in the magnetic field. If all 6 sensors change along the same direction, then an external magnetic field has suddenly been introduced to the system. On the other hand, if these 6 sensors change in different directions, then it is likely because the position of a device within the area has changed. The former can be solved by compensating the external magnetic field with coils, while the latter requires re-training the NN. A change in sensor values signals a change in the environmental magnetic field, which provides important information and is easy to monitor.
Figure 4: **Relative prediction error for various conditions.** Relative prediction error versus the magnetic field generated by the coils, \(B_{control}\), covering the 0.6 to 100 G range. (a) The Earth's magnetic field \(B_{Earth}\) is present and uncompensated (assumed to be 400 mG). The legend shows the range of field in which the artificial NN is trained. Weak refers to a field from 1 to 5 G and strong refers to the 10 to 50 G range. The error is minimized and reduced to around 0.2% over a range where the training field has a similar magnitude to the testing field. However, the error increases rapidly to over 10% outside the working range. The working range of the NN trained with a stronger field is wider (in log scale) than that of the one trained with a weak field. When the NN is trained with both weak and strong fields, the result is similar to the one produced by a NN trained with a strong field only. (b) \(B_{Earth}\) (EB in the figure) is either compensated or removed entirely, as indicated by the legend. For the one with compensated \(B_{Earth}\), the NN is trained with a weak field. The minimized region still remains, but the increase in error is far less drastic outside the range compared to having the full \(B_{Earth}\). The error can be suppressed under 0.7% over the whole range. For the one without \(B_{Earth}\), the NN is trained at 2.5 G. The performance of the network is very consistent over the whole range, with errors around or below 0.3%.
## Conclusion
In conclusion, a novel method based on a NN to precisely determine magnetic fields in an isolated vacuum environment is demonstrated. An artificial NN is trained such that it outputs the magnetic field at the center of the vacuum chamber based on the magnetic fields surrounding the chamber. The effects of the number of sensors and of the magnitude of the magnetic field on the performance of the trained NN are evaluated.
After training, a relative error of the magnetic field magnitude below 0.3% is achieved over a wide range of magnetic fields, which is sufficient to calibrate the magnetic field in our erbium quantum gas apparatus, where many narrow Feshbach resonances exist even in the low-field regime. Besides, experiments with other atomic species can benefit from this method. Furthermore, as no special setup is required, the established method can be extended to other magnetic field-sensitive experiments conducted in an isolated environment.
Acknowledgments. We acknowledge support from the RGC through 16306119, 16302420, 16302821, 16306321, 16306922, C6009-20G, N-HKUST636-22, and RFS2122-6S04.
|
2310.10776 | Correcting model misspecification in physics-informed neural networks
(PINNs) | Data-driven discovery of governing equations in computational science has
emerged as a new paradigm for obtaining accurate physical models and as a
possible alternative to theoretical derivations. The recently developed
physics-informed neural networks (PINNs) have also been employed to learn
governing equations given data across diverse scientific disciplines. Despite
the effectiveness of PINNs for discovering governing equations, the physical
models encoded in PINNs may be misspecified in complex systems as some of the
physical processes may not be fully understood, leading to the poor accuracy of
PINN predictions. In this work, we present a general approach to correct the
misspecified physical models in PINNs for discovering governing equations,
given some sparse and/or noisy data. Specifically, we first encode the assumed
physical models, which may be misspecified, then employ other deep neural
networks (DNNs) to model the discrepancy between the imperfect models and the
observational data. Due to the expressivity of DNNs, the proposed method is
capable of reducing the computational errors caused by the model
misspecification and thus enables the applications of PINNs in complex systems
where the physical processes are not exactly known. Furthermore, we utilize the
Bayesian PINNs (B-PINNs) and/or ensemble PINNs to quantify uncertainties
arising from noisy and/or gappy data in the discovered governing equations. A
series of numerical examples including non-Newtonian channel and cavity flows
demonstrate that the added DNNs are capable of correcting the model
misspecification in PINNs and thus reduce the discrepancy between the physical
models and the observational data. We envision that the proposed approach will
extend the applications of PINNs for discovering governing equations in
problems where the physico-chemical or biological processes are not well
understood. | Zongren Zou, Xuhui Meng, George Em Karniadakis | 2023-10-16T19:25:52Z | http://arxiv.org/abs/2310.10776v1 | # Correcting model misspecification in physics-informed neural networks (PINNs)
###### Abstract
Data-driven discovery of governing equations in computational science has emerged as a new paradigm for obtaining accurate physical models and as a possible alternative to theoretical derivations. The recently developed physics-informed neural networks (PINNs) have also been employed to learn governing equations given data across diverse scientific disciplines, e.g., in biology and fluid dynamics. Despite the effectiveness of PINNs for discovering governing equations, the physical models encoded in PINNs may be misspecified in complex systems as some of the physical processes may not be fully understood, leading to the poor accuracy of PINN predictions. In this work, we present a general approach to correct the misspecified physical models in PINNs for discovering governing equations, given some sparse and/or noisy data. Specifically, we first encode the assumed physical models, which may be misspecified in PINNs, and then employ other deep neural networks (DNNs) to model the discrepancy between the imperfect models and the observational data. Due to the expressivity of DNNs, the proposed method is capable of reducing the computational errors caused by the model misspecification and thus enables the applications of PINNs in complex systems where the physical processes are not exactly known. Furthermore, we utilize the Bayesian physics-informed neural networks (B-PINNs) and/or ensemble PINNs to quantify uncertainties arising from noisy and/or gappy data in the discovered governing equations. A series of numerical examples including reaction-diffusion systems and non-Newtonian channel and cavity flows demonstrate that the added DNNs are capable of correcting the model misspecification in PINNs and thus reduce the discrepancy between the physical models encoded in PINNs and the observational data. In addition, the B-PINNs and ensemble PINNs can provide reasonable uncertainty bounds in the discovered physical models, which makes the predictions more reliable. We also demonstrate that we can seamlessly combine the present approach with the symbolic regression to obtain the explicit governing equations upon the training of PINNs. We envision that the proposed approach will extend the applications of PINNs for discovering governing equations in problems where the physico-chemical or biological processes are not well understood.
keywords: model uncertainty, physics-informed neural networks, model misspecification, uncertainty quantification, non-Newtonian flows, symbolic regression +
## 1 Introduction
Accurate governing equations or physical models are essential for understanding and quantifying physical processes across diverse scientific disciplines, such as the Navier-Stokes equations in fluid dynamics and the transport equations in geosciences. Generally, the governing equations are derived based on established theories such as the conservation of mass and momentum, thermodynamic laws, and so on. However, there are also complex systems in which the governing equations are difficult to obtain theoretically due to the lack of understanding of certain physical processes, e.g., the constitutive relation in highly nonequilibrium flows [1], the forcing caused by solar and volcanic variability in climate problems [2], non-equilibrium reactions, etc. How to obtain analytical expressions that represent physical phenomena in nature and engineering remains an open question.
With the rapid growth of available data and computing power, data-driven discovery of governing equations based on machine learning algorithms has emerged as a new paradigm for obtaining accurate physical models as an alternative to theoretical derivations [3; 4; 5; 6; 7; 8; 9; 10; 11]. To name a few examples, Schmidt et al. presented a symbolic regression approach to discover equations from experimental data [3]; Brunton et al. proposed the sparse identification of nonlinear dynamical systems (SINDy) to learn both ordinary and partial differential equations from data [4; 5]. In addition, Chen et al. employed deep neural networks (DNNs), more specifically the physics-informed neural networks (PINNs), which encode the physical laws into DNNs via automatic differentiation [12], to discover governing equations from scarce data [6]. In most of the aforementioned approaches, a library of candidate terms that describe all the possible physical processes is required as prior knowledge [3; 4; 5; 6]. However, mathematical expressions or models for certain physical processes are quite challenging to obtain in some complex real-world applications, e.g., chemical reactions in combustion, the constitutive relation in highly partially ionized plasmas, etc. In such scenarios, it is probable that we miss some details of the physical process and hence misspecify the physical models [13]. An incomplete or non-comprehensive library leads to a discrepancy between the observational data and the assumed governing equations. In other words, the discovered governing equations may not be accurate enough to describe the physical processes represented by the observational data.
Several recent approaches have been developed to reduce the discrepancy between the observational data and the assumed physical models. For instance, Ebers et al. [14] proposed to learn the _missing physics_ or the discrepancy between the imperfect models and the measurements using a plurality of methods, e.g. Gaussian process regression (GPR), DNNs, etc.; Chen et al. also employed DNNs to recover the _missing physics_ in prior models [15]; in [16; 17], combinations of PINNs and symbolic regression were employed to identify the missing physics in biological problems, e.g. Alzheimer's disease; Saurabh et al. [18] and Zhu et al. [19] discovered source terms of dynamic systems in a "gray-box" fashion with NN-based numerical integrators. However, most of the existing work focuses on resolving the missing physics. Investigations of scenarios in which we _misspecify the physical models_ are, to the best of our knowledge, rare. Further, the observational data from sensors in real-world applications are generally noisy, and may also be incomplete due to the limitations of data-collection
techniques. Such noisy and incomplete or gappy data lead to uncertainties in the predictions of machine learning algorithms or, more specifically, scientific machine learning algorithms [20; 21]. In addition, as widely discussed in climate-change modeling [13; 22; 23], the incomplete understanding of physical processes will also result in uncertainties in simulations. We point out that quantifying uncertainties arising from noisy/gappy data as well as the incomplete understanding of the physical processes is an important issue but has been largely ignored in the existing literature on data-driven discovery of governing equations. In the current study, we refer to the uncertainties in the discovered physical models as _model uncertainty_.
In the present work, we aim to address the issue of misspecifying physical models as well as quantifying model uncertainty in the data-driven discovery of governing equations. Specifically, PINNs are employed as the backbone to discover the governing equations or physical models from data due to their effectiveness and ease of implementation. In particular, we first encode the assumed physical models that may be misspecified in PINNs, and add another DNN as the correction to the misspecified model to alleviate the discrepancy between the imperfect models and the observational data. We note that the library of candidate terms used in [3; 4; 5; 6] is not required here since the added DNN can serve as a universal approximator to model the discrepancy. The Bayesian physics-informed neural networks (B-PINNs) [24] and/or ensemble PINNs [20; 21] are used to quantify the uncertainties in the discovered governing equations or physical models arising from the noisy and/or gappy data. Further, symbolic regression is utilized to obtain the analytical expression for the unknown physics based on the trained DNNs that are used to correct the model misspecifications in PINNs.
The rest of this paper is organized as follows. In Sec. 2, we introduce the model misspecification in PINNs as well as our approach to correct it, and also the methods for quantifying uncertainties in PINN predictions. In Sec. 3, we conduct four numerical experiments to demonstrate the effectiveness of the proposed approach in addressing the model misspecification issue in PINNs. We summarize the present work in Sec. 4.
## 2 Model misspecification in PINNs for learning governing equations
### Learning governing equations from data using PINNs
Consider the following nonlinear ODE/PDE describing a physical system:
\[\mathcal{F}_{\lambda}[u](x) =f(x),x\in\Omega, \tag{1a}\] \[\mathcal{B}_{\lambda}[u](x) =b(x),x\in\partial\Omega, \tag{1b}\]
where \(\Omega\) is the domain, \(\lambda\) is the model parameter, \(u\) is the sought solution, \(f\) is the source term, \(b\) is the boundary term, \(\mathcal{F}_{\lambda}\) and \(\mathcal{B}_{\lambda}\) are operators parameterized by \(\lambda\), defining the equation and the boundary condition. The PINN method is capable of solving ODEs/PDEs given initial/boundary conditions as well as learning governing equations and identifying model parameters from data [24; 21; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. A schematic view of the PINN method is presented in Fig. 1, in which a DNN with \(\sigma\) as the activation function and \(\theta\) as the parameter is built to approximate \(u\) with \(u_{\theta}\) and \(f,b\) with \(f_{\theta},b_{\theta}\), respectively, via automatic differentiation [12].
In the present study, we focus on using PINNs to learn governing equations from data. The dataset for learning governing equations with PINNs is denoted by \(\mathcal{D}\), which can be expressed as \(\mathcal{D}=\mathcal{D}_{u}\cup\mathcal{D}_{f}\cup\mathcal{D}_{b}\), where \(\mathcal{D}_{u}=\{x_{i}^{u},u_{i}\}_{i=1}^{N_{u}}\) is the set of data for \(u\), and \(\mathcal{D}_{f}=\{x_{i}^{f},f_{i}\}_{i=1}^{N_{f}}\) and \(\mathcal{D}_{b}=\{x_{i}^{b},b_{i}\}_{i=1}^{N_{b}}\) are sets of data for the physics. The model parameter \(\lambda\), which defines the equation, is then obtained by optimizing the following PINN loss function with the gradient-descent method and its variants [43]:
\[\mathcal{L}(\theta)=w_{u}\mathcal{L}_{\mathcal{D}_{u}}(\theta)+\mathcal{L}_{PDE }(\theta), \tag{2}\]
where
\[\mathcal{L}_{\mathcal{D}_{u}}(\theta)=\frac{1}{N_{u}}\sum_{i=1}^{N_{u}}||u_{\theta}(x_{i}^{u})-u_{i}||_{2}^{2},\]
\[\mathcal{L}_{PDE}(\theta)=\frac{w_{f}}{N_{f}}\sum_{i=1}^{N_{f}}||\mathcal{F}_{\lambda}[u_{\theta}](x_{i}^{f})-f_{i}||_{2}^{2}+\frac{w_{b}}{N_{b}}\sum_{i=1}^{N_{b}}||\mathcal{B}_{\lambda}[u_{\theta}](x_{i}^{b})-b_{i}||_{2}^{2},\]
\(w_{u},w_{f},w_{b}\) are belief weights for balancing different terms, and \(||\cdot||_{2}\) is the \(\ell^{2}\)-norm for finite-dimensional vectors. We note that in certain cases we may not have data corresponding to the boundary/initial conditions, i.e., the last term in \(\mathcal{L}_{PDE}(\theta)\), and we can directly drop this term in the computations.
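To make the training loop concrete, below is a minimal PyTorch sketch of the loss in Eq. (2); it is an illustration under our own assumptions, with \(\mathcal{F}_{\lambda}[u]=du/dt-\lambda u(1-u)\) (the ODE of Sec. 3.1 rearranged) standing in for the differential operator, and `u_net`, `x_u`, `x_f`, etc. as hypothetical names rather than the authors' implementation.

```python
import torch

def pde_residual(u_net, x, lam):
    # F_lambda[u](x): here du/dt - lam*u*(1-u), an illustrative operator only.
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                             create_graph=True)[0]
    return du - lam * u * (1.0 - u)

def pinn_loss(u_net, lam, x_u, u_data, x_f, f_data, w_u=1.0, w_f=1.0):
    # Data-misfit term plus the PDE term of Eq. (2); the boundary term is
    # omitted, as the text notes it can be dropped when no boundary data exist.
    loss_u = ((u_net(x_u) - u_data) ** 2).mean()
    loss_f = ((pde_residual(u_net, x_f, lam) - f_data) ** 2).mean()
    return w_u * loss_u + w_f * loss_f
```

In practice, \(\lambda\) would be registered as a trainable `torch.nn.Parameter` and optimized jointly with the network weights, e.g., by Adam.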
### Correcting the model misspecification in PINNs
The computational accuracy of PINNs in learning governing equations strongly depends on the encoded physical models, i.e. \(\mathcal{F}_{\lambda}\) and/or \(\mathcal{B}_{\lambda}\) in Eq. (1). However, the correct specification of the physics is not always guaranteed, leading to significant discrepancy between
Figure 1: A schematic view of PINNs for learning governing equations from data. On the left, a general framework of PINNs is displayed where \(\sigma\) denotes the activation function, \(\theta\) denotes the DNN parameters, \(u_{\theta}\) denotes the output of the DNN that is used to approximate \(u\), and \(f_{\theta}/b_{\theta}\) are computed via automatic differentiation [12]. An example on learning governing equations with PINNs is illustrated on the right panel. Here a 1D Burgers equation is learned (\(\lambda_{1}=0,\lambda_{2}=1,\lambda_{3}=-\nu\) where \(\nu\) denotes the viscosity) given data on \(u\).
the observational data and the assumed governing equations. An example is demonstrated in Fig. 2 where the flow is non-Newtonian but the physical model is misspecified as the one for Newtonian flows (see Sec. 3.3.1 for details). As a result, the physics and the data cannot be fitted well at the same time: the training does not land on the region "IV", where both the losses for the data as well as the residual of the equations are close to zero, regardless of the choice of belief weights in the loss function Eq. (2), i.e. \(w_{f}\) for the physics and \(w_{u}\) for the data. Three results from PINNs are sampled and displayed in Fig. 2: when \(u_{\theta}\) fits the data (which come from a non-Newtonian flow), it does not satisfy the governing equations for Newtonian flows; when \(u_{\theta}\) is forced to follow the Newtonian flow by setting large \(w_{PDE}\)
Figure 2: In (a), a breakdown of total uncertainty in PINNs is presented. In (b), results from PINNs encountering model misspecifications are displayed, where a physical model of Newtonian fluid is assumed but the real physics, which generates the data, is non-Newtonian. The PINN method is employed to identify the homogeneous viscosity with different belief weights in the loss function, i.e. pairs of \(w_{PDE}\) and \(w_{\mathcal{D}_{u}}\) in (2), resulting in different pairs of PDE loss \(\mathcal{L}_{PDE}\) and data loss \(\mathcal{L}_{\mathcal{D}_{u}}\), shown in the left of (b). Due to the model misspecification, the PDE loss \(\mathcal{L}_{PDE}\) and the data loss \(\mathcal{L}_{\mathcal{D}_{u}}\) cannot be minimized simultaneously. Three results are presented in the right of (b), corresponding to three cases in the left, + (data are fitted well but the physics is not satisfied), * (the physics is satisfied but data are not fitted), and x (neither data nor physics are fitted). We note that the boundary condition is hard-encoded in the modeling. Details can be found in Sec. 3.3.1.
it presents Newtonian behavior and thus cannot fit the data from the non-Newtonian flow well.
The discrepancy between observational data and the assumed governing equations is caused by the physics/model misspecification. In this regard, we propose a general approach developed upon the original PINN framework to quantify the discrepancy as well as to correct the model misspecification. In this work, we focus on the misspecification of the differential operator \(\mathcal{F}_{\lambda}\), but the proposed approach can be easily generalized to the boundary/initial operator. Here, we denote the misspecified differential operator by \(\tilde{\mathcal{F}}_{\lambda}\) and then the discrepancy can be quantified as follows:
\[s(x)=f(x)-\tilde{\mathcal{F}}_{\lambda}[u](x). \tag{3}\]
Inspired by the PINN method, in which the sought solution \(u\) is modeled with a DNN parameterized by \(\theta\), we employ another DNN parameterized by \(\psi\) to model \(s(x)\), denoted as \(s_{\psi}(x)\). The source term \(f(x)\) is approximated by \(\tilde{\mathcal{F}}_{\lambda}[u_{\theta}](x)+s_{\psi}(x)\), and hence the term penalizing the violation of the physics in the loss function (2) is reformulated as:
\[\mathcal{L}_{PDE}(\theta,\psi)=\frac{w_{f}}{N_{f}}\sum_{i=1}^{N_{f}}||\tilde{\mathcal{F}}_{\lambda}[u_{\theta}](x_{i}^{f})+s_{\psi}(x_{i}^{f})-f_{i}||_{2}^{2}+\frac{w_{b}}{N_{b}}\sum_{i=1}^{N_{b}}||\mathcal{B}_{\lambda}[u_{\theta}](x_{i}^{b})-b_{i}||_{2}^{2}. \tag{4}\]
We note that \(s_{\psi}\) plays two important roles in our approach handling model misspecification: (1) it approximates the function quantifying the model discrepancy, i.e. \(s\), and (2) it allows both the data term \(\mathcal{L}_{\mathcal{D}_{u}}(\theta)\) and the PDE term \(\mathcal{L}_{PDE}(\theta,\psi)\) to be minimized to very small
Figure 3: An overview of the proposed approach for correcting the misspecified physics. Here we assume that the nonlinear differential operator \(\mathcal{F}\) is misspecified as \(\tilde{\mathcal{F}}_{\lambda}\) such that \(\tilde{\mathcal{F}}_{\lambda}[u]\neq f\), which leads to poor performance of PINNs in learning governing equations. Compared to the conventional PINN method shown in Fig. 1, an additional DNN, parameterized by \(\psi\), is used to model the discrepancy. We note that \(s_{\psi}\) serves two roles: (1) correcting the equation such that data of \(u\) can be explained, and (2) modeling the discrepancy of the physics. With posterior samples of \(\psi\), we obtain the prediction as well as the model uncertainty of the discrepancy. We also note that DNNs in this figure, namely \(u_{\theta}\) and \(s_{\psi}\), are deterministic NNs in ensemble PINNs and Bayesian NNs (BNNs) in B-PINNs.
values simultaneously in the training of PINNs such that data are fitted by \(u_{\theta}\) and the (corrected) physics are satisfied.
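As a sketch of how the correction enters the loss, the PDE term of Eq. (4) can be assembled as follows; the misspecified operator \(\tilde{\mathcal{F}}_{\lambda}[u]=du/dt-\lambda\cos(u)\) is borrowed from the ODE example of Sec. 3.1 purely for illustration, and `u_net`/`s_net` are hypothetical names.

```python
import torch

def corrected_pde_loss(u_net, s_net, lam, x_f, f_data, w_f=1.0):
    # PDE term of Eq. (4): the discrepancy network s_psi is added to the
    # (possibly misspecified) operator so that F_tilde[u] + s can match f.
    x = x_f.clone().requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                             create_graph=True)[0]
    residual = du - lam * torch.cos(u) + s_net(x) - f_data
    return w_f * (residual ** 2).mean()
```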
### Uncertainty quantification
As mentioned in Sec. 2.2, a key step in the present work is to learn the correction term, i.e., \(s_{\psi}\), that is used to fix the issue of model misspecifications in PINNs. Generally, the measurements in real-world applications are noisy and can also be incomplete, leading to the uncertainties in predictions from PINNs [20; 21]. We refer to the uncertainties in \(s_{\psi}\) as _model uncertainty_. Here, two typical approaches are employed for quantifying the uncertainties in \(s_{\psi}\), i.e., ensemble PINNs [20; 21] and B-PINNs [24]. The former is computationally efficient but has difficulties preventing overfitting when the data are noisy, while the latter is able to handle the noisy data well but is computationally more expensive. We therefore utilize the ensemble PINNs and B-PINNs for cases with clean and noisy data, respectively.
We now briefly review the ensemble PINNs and B-PINNs. The ensemble PINN method follows the deep ensemble method, proposed in [44], and performs multiple independent standard trainings of PINNs. That is, PINNs with the PDE loss term in Eq. (4) are trained multiple times with random initialization of the DNN parameters. Each random initialization corresponds to one initial guess for minimizing the PINN loss function (2) and each training finds one minimum. As a result, ensemble PINNs can be considered as identifying different maximum a posteriori (MAP) estimates of the PINN loss function [20]. On the other hand, the B-PINN method focuses on estimating the posteriors for \(\theta\) and \(\psi\) based on Bayes' rule as follows:
\[p(\theta,\psi|\mathcal{D})\propto p(\mathcal{D}|\theta,\psi)\,p(\theta,\psi). \tag{5}\]
Here \(p(\theta,\psi|\mathcal{D})\) is the density function for the posterior distribution, \(p(\mathcal{D}|\theta,\psi)\) for the likelihood, and \(p(\theta,\psi)\) for the prior; below we write \(\theta\) for the collection of all network parameters, including \(\psi\), to keep the notation light. Given independent and identically distributed (i.i.d.) data, the density function for the likelihood distribution can be written as follows:
\[p(\mathcal{D}|\theta)=p(\mathcal{D}_{u}|\theta)p(\mathcal{D}_{f}|\theta)p( \mathcal{D}_{b}|\theta)=\prod_{i=1}^{N_{u}}p(u_{i}|x_{i}^{u},\theta)\prod_{i= 1}^{N_{f}}p(f_{i}|x_{i}^{f},\theta)\prod_{i=1}^{N_{b}}p(b_{i}|x_{i}^{b},\theta). \tag{6}\]
If we further assume that the measurement noise for \(u\), \(f\) and \(b\) is Gaussian with mean zero and standard deviations \(\sigma_{u}\), \(\sigma_{f}\) and \(\sigma_{b}\), respectively, then it follows that:
\[p(u_{i}|x_{i}^{u},\theta)=\frac{1}{\sqrt{2\pi}\sigma_{u}}\exp(-\frac{||u_{\theta}(x_{i}^{u})-u_{i}||_{2}^{2}}{2\sigma_{u}^{2}}), \tag{7a}\]
\[p(f_{i}|x_{i}^{f},\theta)=\frac{1}{\sqrt{2\pi}\sigma_{f}}\exp(-\frac{||\tilde{\mathcal{F}}_{\lambda}[u_{\theta}](x_{i}^{f})+s_{\psi}(x_{i}^{f})-f_{i}||_{2}^{2}}{2\sigma_{f}^{2}}), \tag{7b}\]
\[p(b_{i}|x_{i}^{b},\theta)=\frac{1}{\sqrt{2\pi}\sigma_{b}}\exp(-\frac{||\mathcal{B}_{\lambda}[u_{\theta}](x_{i}^{b})-b_{i}||_{2}^{2}}{2\sigma_{b}^{2}}). \tag{7c}\]
We note that \(\sigma_{u}\), \(\sigma_{f}\) and \(\sigma_{b}\) are often treated as scales of the additive Gaussian noise in measuring data of \(u\), \(f\), and \(b\), respectively. Various methods, e.g. Markov Chain Monte Carlo (MCMC) and variational inference, have been proposed to tackle the posterior distribution (5). In this work, we choose the Hamiltonian Monte Carlo (HMC) method to estimate the
posterior distribution by sampling \(\theta\) and \(\psi\) from it. We also note that here we only consider the homogeneous noise, while more complicated noise models, e.g. the heteroscedastic noise [20], are also compatible with the framework.
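For intuition, the unnormalized log-likelihood implied by Eqs. (6)-(7) — the quantity whose gradients an HMC sampler needs — can be sketched as below; the boundary term and normalizing constants are dropped, the operator is the illustrative one used above, and in practice the sampling itself is best delegated to a library such as NeuralUQ [21].

```python
import torch

def log_likelihood(u_net, s_net, lam, data, sigmas):
    # Log of Eqs. (6)-(7) for the u- and f-measurements only; additive
    # constants -log(sqrt(2*pi)*sigma) are omitted.
    (x_u, u_obs), (x_f, f_obs) = data
    sig_u, sig_f = sigmas
    res_u = u_net(x_u) - u_obs
    x = x_f.clone().requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                             create_graph=True)[0]
    res_f = du - lam * torch.cos(u) + s_net(x) - f_obs
    return (-0.5 * (res_u ** 2).sum() / sig_u ** 2
            - 0.5 * (res_f ** 2).sum() / sig_f ** 2)
```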
In both the ensemble PINN and B-PINN methods, we obtain samples of the NN parameters, from which statistics of the prediction are estimated. In general, the mean and the standard deviation are often chosen to represent the prediction and the uncertainty [20], respectively. Specifically in this work, we obtain samples of \(\theta\) and \(\psi\) from ensemble PINNs and/or B-PINNs, denoted by \(\{\theta_{i}\}_{i=1}^{M}\) and \(\{\psi_{i}\}_{i=1}^{M}\), respectively, where \(M\) denotes the number of samples. Then, the predicted means of the sought solution \(u\) and the discrepancy \(s\) are estimated as follows:
\[\hat{u}(x)\approx\frac{1}{M}\sum_{i=1}^{M}u_{\theta_{i}}(x),\ \hat{s}(x)\approx \frac{1}{M}\sum_{i=1}^{M}s_{\psi_{i}}(x), \tag{8}\]
where \(\hat{u}\) and \(\hat{s}\) denote the mean of \(u\) and \(s\), respectively. Similarly, the predicted uncertainties are:
\[Var[u](x)\approx\frac{1}{M}\sum_{i=1}^{M}(u_{\theta_{i}}(x)-\hat{u}(x))^{2},\ Var[s](x)\approx\frac{1}{M}\sum_{i=1}^{M}(s_{ \psi_{i}}(x)-\hat{s}(x))^{2}, \tag{9}\]
where \(Var[u]\) and \(Var[s]\) denote the variance of \(u\) and \(s\), respectively. Interested readers are directed to [20] for a review and [21] for a comprehensive Python library termed NeuralUQ.
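Given the \(M\) samples, the statistics in Eqs. (8)-(9) reduce to a sample mean and variance over the predictions; a short NumPy sketch with hypothetical callables follows.

```python
import numpy as np

def predictive_stats(sample_fns, x):
    # Eqs. (8)-(9): mean and variance over M posterior/ensemble samples,
    # where each element of sample_fns maps inputs x to predictions.
    preds = np.stack([fn(x) for fn in sample_fns], axis=0)  # shape (M, ...)
    return preds.mean(axis=0), preds.var(axis=0)
```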
We note that upon the computation of \(s_{\psi}(x)\), it can be used in various ways: (1) its value represents the model discrepancy and its uncertainty represents the uncertainty in the physical model, which we refer to as _model uncertainty_; (2) it corrects the misspecified physical models and identifies a system which explains the data; and (3) it could also be used to obtain mathematical expressions for the missing details of certain physical processes via symbolic regression [16; 17; 45]. Specifically in the present work, we employ the symbolic regression method developed in [46] and the associated Python library PySR to regress analytic mathematical expressions for the unknown physical processes (see Sec. 3.1 for a demonstration example).
## 3 Results and discussion
In this section, we conduct four numerical experiments, i.e., (1) an ODE system, (2) a one-dimensional (1D) reaction-diffusion equation, (3) a two-dimensional (2D) channel flow, and (4) a 2D cavity flow. In the first two examples, reaction models are possibly misspecified, and in the last two examples, we assume that we misspecify the constitutive relation in non-Newtonian flows as Newtonian ones. In all the test cases, we employ DNNs to correct the misspecified models as discussed in Sec. 2. Details for the computations (e.g., training steps, NN architectures, etc.) can be found in A.
### Pedagogical example: An ODE system
We first consider the following one-dimensional ODE:
\[\begin{split}\frac{du}{dt}&=f(t)+\lambda u(1-u),\ t\in[0,1],\\ u(t&=0)=u_{0},\end{split} \tag{10}\]
where \(\lambda\) is a non-negative constant, \(u_{0}\) is the initial condition, and \(f\) is the source term. In certain real-world applications, the reaction term, i.e., \(\lambda u(1-u)\) on the right-hand side of Eq. (10), may not be exactly known and herein it is misspecified as an empirical model. In this regard, we study the effect of different prior knowledge about the reaction model in the discovery of the governing equation. In particular, the following cases are considered:
1. Case (A): The reaction term is correctly specified as \(\lambda u(1-u)\), where \(\lambda\geq 0\) is unknown.
2. Case (B): The reaction term is misspecified as \(\lambda\cos(u)\), where \(\lambda\geq 0\) is unknown.
3. Case (C): The reaction term is misspecified as \(\lambda\cos(u)\) (with known \(\lambda=0.2\)) but we add \(s(t)\) to correct it, i.e., \(\lambda\cos(u)+s(t)\).
We denote the reaction term as \(\phi\), and the target here is to identify \(\phi\) given data on \(u\) and \(f\) as well as differently specified models. In Cases (A) and (B), the problem degenerates to identifying the model parameter \(\lambda\), and \(\phi\) in these two cases can then be computed by \(\tilde{\lambda}u_{\theta}(1-u_{\theta})\) and \(\tilde{\lambda}\cos(u_{\theta})\), respectively, where \(\tilde{\lambda}\) is the PINN estimate of \(\lambda\). In Case (C), \(\phi\) is computed by \(\tilde{\lambda}\cos(u_{\theta})+s_{\psi}(t)\) where \(s_{\psi}\) is the estimate of the discrepancy induced by the model misspecification. We note that in Case (C) we assume \(\lambda\) is known and set \(\lambda=0.2\) for simplicity. This is because the discrepancy is already modeled by \(s(t)\) and hence treating \(\lambda\) as unknown does not make any difference in correcting the misspecified model.
To reduce the effect of the noisy/gappy data on the predicted accuracy, we first test the case with clean and sufficient data. The training data are generated by solving Eq. (10) with \(f(t)=\sin(2\pi t)\) and \(u_{0}=0\) using _MATLAB ode45_[47]. We assume that we have 101 data for \(u\) and \(f\), uniformly sampled from \(t\in[0,1]\), and we employ ensemble PINNs [20, 21] to learn the governing equation and also quantify the uncertainties in predictions. In all the test cases here, we choose a DNN with two hidden layers, each of which has 50 neurons and is equipped with hyperbolic tangent as the activation function, to approximate \(u\). The equations are then encoded in PINNs via the automatic differentiation [12]. Further, we utilize an additional DNN to correct the model misspecification in Case (C), which has the same architecture as the one used for approximating \(u\). The Adam optimizer [48] is applied to train the ensemble PINNs.
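The training data can equally be reproduced in Python; the sketch below uses SciPy's `solve_ivp` (an adaptive Runge-Kutta solver comparable to MATLAB's `ode45`) with the stated settings — an assumption on our part, since the paper used MATLAB.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam_true, u0 = 1.5, 0.0
f = lambda t: np.sin(2.0 * np.pi * t)

def rhs(t, u):
    # du/dt = f(t) + lam*u*(1-u), Eq. (10)
    return f(t) + lam_true * u * (1.0 - u)

t_grid = np.linspace(0.0, 1.0, 101)
sol = solve_ivp(rhs, (0.0, 1.0), [u0], t_eval=t_grid, rtol=1e-8, atol=1e-10)
u_data, f_data = sol.y[0], f(t_grid)  # 101 clean measurements of u and f
```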
As shown in Fig. 4(a), PINNs with the correct physical model (Case (A)) enable accurate inference of \(\phi\) and the surrogate model \(u_{\theta}\) fits the data and satisfies the ODE simultaneously. In contrast, PINNs with a misspecified physical model (Case (B)) yield a completely wrong estimate of \(\phi\) as displayed in Fig. 4(b). In Case (C), we add a DNN to correct the misspecified reaction model used in Case (B). As illustrated in Fig. 4(c) as well as Table 1, the predicted
| | \(\tilde{\lambda}\) | Error of \(\phi\) | Error of \(u\) | Error of \(f\) | Error of \(\tilde{u}\) |
| --- | --- | --- | --- | --- | --- |
| Case (A): Known model | \(1.5000\pm 0.0001\) | 0.00% | 0.00% | 0.06% | 0.01% |
| Case (B): Misspecified model | \(0.2254\pm 0.0001\) | 41.46% | 1.73% | 6.87% | 4.58% |
| Case (C): Corrected model | 0.2 | 0.55% | 0.01% | 0.10% | 0.02% |

Table 1: ODE system: Predictions for \(\phi\), \(u\) and \(f\) and the reconstruction \(\tilde{u}\). The metric is the relative \(L_{2}\) error, and errors are computed between the reference and the mean of 20 PINNs trained independently. The exact value of \(\lambda\) is 1.5 and \(\tilde{\lambda}\) denotes the PINN estimate of \(\lambda\) (\(\tilde{\lambda}=0.2\) is fixed in Case (C)). We note that \(\tilde{u}\) is reconstructed by solving the identified ODE system with the identified \(\phi\), the inferred \(f\) and the initial condition \(u_{0}\). Cubic spline data interpolation is employed to approximate the function \(f\) from the prediction of \(f\) on a uniform grid.
accuracy for \(\phi\) improves significantly as compared to the result in Case (B). In addition, the computational accuracy for \(u\), \(f\) and \(\phi\) in Case (C) is quite similar to that in Case (A), where the physical model is correctly specified. The above results demonstrate the effectiveness of the present approach. We note that the predicted uncertainties in all cases are quite small, which is reasonable since we use sufficient data for both \(u\) and \(f\) to determine the unknown reaction term here. Furthermore, we present additional results and related discussion for the case with clean but gappy data in B.1.
We now keep the same setup as in Cases (A)-(C) but assume that the training data are both noisy and gappy. In these cases, 50 measurements of \(f\) and \(u\) are randomly sampled from \(t\in[0,1]\) and corrupted by independent additive Gaussian noises with known noise scales (0.05 for \(f\) and 0.01 for \(u\)). As discussed in Sec. 2, we employ B-PINNs to quantify
Figure 4: ODE system: Predictions for \(u\), \(f\) and \(\phi\) from ensemble PINNs with clean and sufficient data. Ensemble PINNs with (a) the correct physical model, (b) a misspecified physical model, and (c) the misspecified physical model plus a correction. Blue circles: Representative measurements for \(u\) and \(f\); Red dashed line: Predicted mean; Black solid line: Reference solution.
the uncertainties arising from the noise as well as the incompleteness of the training data. As shown in Fig. 5, (1) B-PINNs with the correct physical model can obtain accurate predictions for \(u\), \(f\), and \(\phi\), and the computational errors for \(\phi\) are generally bounded by the predicted uncertainties (Case (A), Fig. 5(a)); (2) B-PINNs with the misspecified physical model fail to provide accurate inference for \(u\), \(f\), or \(\phi\) (Case (B), Fig. 5(b)); and (3) B-PINNs with a DNN as correction to the misspecified model are able to improve the predicted accuracy for \(u\), \(f\), and \(\phi\), as compared to the results in Case (B) (Case (C), Fig. 5(c)). As in Case (A), the computational errors for \(\phi\) are bounded by the predicted uncertainties here.
Figure 5: ODE system: Predictions for \(u\), \(f\) and \(\phi\) from B-PINNs with noisy and gappy data. B-PINNs with (a) correct physical model, (b) misspecified physical model, and (c) misspecified physical model plus a correction. Blue circles: Noisy measurements for \(u\) and \(f\); Red dashed line: Predicted mean from B-PINNs; Black solid line: Reference solution. Our approach alleviates the model misspecification issue significantly, providing more accurate predictions. Compared to correctly specified model, the increase in the predicted uncertainty is caused by the lack of knowledge in the physical model and could be considered as model uncertainty.
In addition to the relation between the computational errors and the predicted uncertainties for \(\phi\) discussed above, we would like to further discuss that: (1) it is interesting to observe in Case (A) and Case (C) that the predicted uncertainty near the right end is larger than that near the left end. We note that the results here are reasonable since we do not have data for \(f\) on the right end; (2) the predicted uncertainties of \(\phi\) for \(t\in[0,1]\) in Case (C) are larger than those in Case (A) although we use the same training data as well as prior for the BNN that is used to approximate \(u\). We remark that the difference between the uncertainties in these two cases is caused by the lack of knowledge on the physical model and reflects the uncertainty in physical models.
We further verify the learned governing equation from the proposed approach. Specifically, we solve the equation \(du/dt=f(t)+\phi(t)\) with the inferred \(f\), \(\phi\), and the initial condition \(u_{0}=0\), using MATLAB ode45 to see if the solution for \(u\) agrees with the reference solution. We denote the solution to the identified system by \(\tilde{u}\). In particular, 1000 posterior samples of \(f\) and \(\phi\) from the B-PINNs are used to compute \(\tilde{u}\), as displayed in Fig. 6. We further illustrate the relative \(L_{2}\) errors between the mean for \(\tilde{u}\) and the reference solution in Table 2. As we can see, PINNs with the correct physical model lead to the lowest error in \(\tilde{u}\). Adding a DNN to correct the misspecified model can significantly improve the accuracy in \(\tilde{u}\) as compared to the case with the misspecified model, and the computational errors are slightly larger than the ones from the correct physical model. We also present the computational errors for \(\tilde{u}\) from the ensemble PINNs in Table 1, which will not be discussed again since the results are similar to the above ones. Finally, we would like to point out that (1) the
| | \(\tilde{\lambda}\) | Error of \(\tilde{u}\) |
| --- | --- | --- |
| Case (A): Known model | \(1.4867\pm 0.0488\) | 1.56% |
| Case (B): Misspecified model | \(0.0000\pm 0.0000\) | 12.73% |
| Case (C): Corrected model | 0.2 | 2.61% |

Table 2: ODE system: Relative \(L_{2}\) error for the reconstructed \(u\) from B-PINNs. Here the computational errors are calculated using the predicted mean from B-PINNs.
Figure 6: ODE system: Solving the identified system with the inferred \(f\) and \(\phi\) from B-PINNs with the initial condition \(u_{0}=0\). The inferred \(f\) and \(\phi\) come from (a) correct physical model, (b) misspecified physical model, and (c) our approach. Red dashed line: Predicted mean; Black solid line: Reference solution. It shows that the misspecified physical model leads to wrong reconstruction \(\tilde{u}\) while our approach is able to correct it. Due to the lack of knowledge of the model, our approach yields larger uncertainty than the correct physical model.
uncertainties in the reconstructed \(u\) (i.e., \(\tilde{u}\)) are caused by the predicted uncertainties for \(f\) and \(\phi\) from B-PINNs, and (2) the computational errors for the mean of \(\tilde{u}\) in Cases (A) and (C) are of the same order as the noise in the measurements, suggesting the reasonableness of the current results.
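A sketch of this reconstruction step follows: for each posterior sample, the predicted \(f\) and \(\phi\) on the uniform grid are interpolated with cubic splines and the identified ODE is integrated forward; all variable names are hypothetical.

```python
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

def reconstruct_u(t_grid, f_sample, phi_sample, u0=0.0):
    # Solve du/dt = f(t) + phi(t) for one posterior sample, with cubic-spline
    # interpolants standing in for the NN predictions on the grid.
    f_s = CubicSpline(t_grid, f_sample)
    phi_s = CubicSpline(t_grid, phi_sample)
    sol = solve_ivp(lambda t, u: [f_s(t) + phi_s(t)], (0.0, 1.0), [u0],
                    t_eval=t_grid)
    return sol.y[0]

# e.g., over 1000 posterior samples:
# u_tilde = [reconstruct_u(t_grid, f_i, phi_i) for f_i, phi_i in samples]
```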
As discussed in Sec. 2, the discovered physical models or governing equations up to now are expressed using the trained DNNs (i.e., \(s_{\psi}\)), which are not explicitly presented. We can then combine the proposed method with symbolic regression as a further step to obtain the explicit governing equations. Here we take the results in Case (C) from both ensemble PINNs and B-PINNs to demonstrate how to get the analytical expressions for the trained NNs, i.e., \(s_{\psi}\). Recall that with the present methodology the reaction term is now expressed as \(\phi(t)=\tilde{\lambda}\cos(u_{\theta}(t))+s_{\psi}(t)\). Assume that we now have the knowledge that the reaction term is autonomous with respect to the solution, i.e. \(\phi=s(u)\). We can then adopt the symbolic regression developed in [46] to find the analytical expression of \(s(u)\) from \(u_{\theta}\) and \(s_{\psi}\). The data for the symbolic regression are generated as \(\{u_{\theta}(t_{i}),s_{\psi}(t_{i})+\tilde{\lambda}\cos(u_{\theta}(t_{i}))\}_{i=1}^{N}\), where \(t_{i},i=1,...,N\) are 101 uniform grid points on \([0,1]\), and \(u_{\theta}\) and \(s_{\psi}\) are the predicted means from the trained ensemble PINNs/B-PINNs. As displayed in Table 3, the recovered reaction model via symbolic regression agrees well with the reference solution when we have sufficient clean data. However, for the case with noisy and gappy data, the expression for the reaction model from symbolic regression does not match the reference solution. The result is expected since we can see in Fig. 5(c) that the predicted mean for \(\phi\) does not agree well with the reference solution and the predicted uncertainties are generally large. Generally, increasing the number of training data is able to reduce the predicted uncertainty in B-PINNs [20; 21]. Here we also show that the learned reaction model agrees better with the reference solution as we increase the training data from 50 to 101 in B-PINNs. A recommendation here is to use the predicted uncertainty as guidance, i.e., we use symbolic regression for equation discovery when the predicted uncertainty is small to achieve better accuracy. More results for the symbolic regression can be found in B.2.
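A possible PySR invocation for this regression step is sketched below; `u_mean` and `phi_mean` denote the predicted means on the grid (hypothetical arrays), and the exact keyword arguments may differ across PySR versions.

```python
import numpy as np
from pysr import PySRRegressor

# u_mean[i] = u_theta(t_i); phi_mean[i] = lam*cos(u_theta(t_i)) + s_psi(t_i)
model = PySRRegressor(niterations=100,
                      binary_operators=["+", "-", "*", "/"],
                      unary_operators=["cos"])
model.fit(np.asarray(u_mean).reshape(-1, 1), np.asarray(phi_mean))
print(model.sympy())  # ideally close to 1.5*u*(1 - u), cf. Table 3
```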
### Reaction-diffusion equation
We now consider the following time-dependent PDE that describes a reaction-diffusion system:
\[\begin{split}&\frac{\partial u}{\partial t}=D\frac{\partial^{2}u }{\partial x^{2}}+\lambda g(u)+f(x,t),\ x\in[0,1],\ t\in[0,1],\\ & u(x,0)=0.5\sin^{2}(\pi x),\ x\in[0,1],\\ & u(0,t)=u(1,t)=0,\ t\in[0,1],\end{split} \tag{11}\]
where \(D=0.01\) denotes the diffusion coefficient, \(g(u)=u(1-u)\) is the reaction model, \(\lambda\) is the reaction rate, and \(f(x,t)=0\) is the source term. Similar as in Sec. 3.1, we test the following three cases:
| Reference | Clean and sufficient data | Noisy and gappy data | Noisy and sufficient data |
| --- | --- | --- | --- |
| \(1.5u(1-u)\) | \(1.4980u-1.4929u^{2}\) | \(u/(u+0.6213)\) | \(1.3540u-u^{2}\) |

Table 3: ODE system: Discovered equations from \(\phi(u)\) using a symbolic regression method [46] (see Fig. B.14 for plots of the discovered models). In all three cases, the symbolic regression is performed on the predicted mean from ensemble PINNs and B-PINNs.
1. Case (A): The reaction model is correctly specified and the reaction rate \(\lambda\geq 0\) is unknown.
2. Case (B): The reaction model is misspecified as \(u^{2}\) and the reaction rate \(\lambda\geq 0\) is unknown.
3. Case (C): The reaction model is misspecified as \(u^{2}\) and we correct it with a DNN, i.e., \(\lambda u^{2}+s\) (with known \(\lambda=1\)).
We denote the reaction term as \(\phi\) and we would like to identify \(\phi\) from data of \(u\), \(f\) and different physical models. In Cases (A), (B), and (C), the estimate of \(\phi\) is \(\tilde{\lambda}u_{\theta}(1-u_{\theta})\), \(\tilde{\lambda}u_{\theta}^{2}\), and \(\tilde{\lambda}u_{\theta}^{2}+s_{\psi}\), respectively, where \(\tilde{\lambda}\) is the estimate of \(\lambda\). We note that in Case (C) we assume \(\lambda=1\) is known because the model misspecification induced by a wrong \(\lambda\) can be fully absorbed by \(s\).
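For this PDE the residual involves first- and second-order derivatives, which automatic differentiation supplies directly; a hypothetical PyTorch sketch of the corrected residual of Eq. (11) (with \(f=0\)) is given below, where `xt` stacks the \((x,t)\) coordinates column-wise.

```python
import torch

def rd_residual(u_net, s_net, xt, D=0.01, lam=1.0):
    # Residual of Eq. (11) with the misspecified reaction lam*u^2 plus the
    # correction s(x,t): u_t - D*u_xx - lam*u^2 - s, with f = 0.
    xt = xt.clone().requires_grad_(True)
    u = u_net(xt)
    grads = torch.autograd.grad(u, xt, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, grad_outputs=torch.ones_like(u_x),
                               create_graph=True)[0][:, 0:1]
    return u_t - D * u_xx - lam * u ** 2 - s_net(xt)
```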
Here we only consider a general scenario, i.e., the training data are noisy and gappy. In this case, we employ B-PINNs to identify \(\phi\) with uncertainties. We assume that we have 121 measurements on \(u\) and 195 measurements of \(f\), which are evenly distributed in the temporal-spatial domain \((x,t)\in[0,1]\times[0,1]\). In addition, the measurement errors for both \(u\) and \(f\) are assumed to be Gaussian, and the noise scales are 0.02 and 0.05 for \(u\) and \(f\), respectively. Further, the reference solution is obtained by solving Eq. (11) with \(g(u)=u(1-u)\), \(\lambda=2\), and \(f(x,t)=0\). Similar as in Sec. 3.1, we use one DNN to model the sought solution \(u\) and another DNN to approximate the discrepancy \(s\) in Case (C). Details of the B-PINN method, e.g. hyperparameters of NNs and HMC, can be found in A.
As shown in Fig. 7, we observe that: (1) the predicted means for \(u\) and \(\phi\) agree best with the reference solutions when the reaction model is correctly specified (Case (A), second column in Fig. 7); (2) the predictions for \(u\) and \(\phi\) show significant discrepancies compared to the reference solution when the reaction model is misspecified (Case (B), third column in
Figure 7: Time-dependent reaction-diffusion system: Predicted mean for \(u\) and \(\phi\) from B-PINNs with noisy data of \(u\) and \(f\) and differently specified reaction models. The misspecified model (\(u^{2}\)) impairs the performance of B-PINNs significantly in terms of the accuracy in predicting \(u\) and \(\phi\), and the proposed approach is able to alleviate the issue caused by model misspecification; see Fig. 8 for UQ.
Fig. 7); (3) the predicted mean for \(u\) in Case (C) is similar to that in Case (A), in which the reaction model is correctly specified; and (4) the predicted mean for \(\phi\) in Case (C) is not as good as the one in Case (A), but it is much better than the result in Case (B) where the reaction model is misspecified. All the results demonstrate that the added DNNs are able to correct the misspecified model, which in turn enhances the computational accuracy of PINNs.
We further illustrate the predicted uncertainties for both \(u\) and \(\phi\) in Fig. 8. In particular, we only depict the uncertainties at one representative time for simplicity. As we can see, (1) the computational errors for both \(u\) and \(\phi\) are generally bounded by the predicted uncertainties in Case (A) (first column in Fig. 8) and Case (C) (third column in Fig. 8), while they cannot be bounded by the predicted uncertainties in the case with misspecified reaction model i.e. Case (B) (second column in Fig. 8), and (2) the predicted uncertainties for \(u\) and \(\phi\) are generally larger in Case (C) (third column in Fig. 8) than those in Case (A) (first column in Fig. 8), which are similar as the results in Sec. 3.1. Again, we attribute the difference between the predicted uncertainties for Case (A) and (C) to the model uncertainty.
### 2D non-Newtonian flows
In this section, we test the capability of the present approach in non-Newtonian flows where we misspecify the constitutive relation as the Newtonian one, and then we employ DNNs to correct the misspecified models. In particular, we consider the steady flows in a
Figure 8: Time-dependent reaction-diffusion system in slices (\(t=0.5,0.75\)): Predicted uncertainty for \(u\) and \(\phi\) from B-PINNs with noisy data of \(u\) and \(f\) and differently specified reaction models. Representative predicted uncertainties for (a) \(u\), and (b) \(\phi\). Red dashed line: Predicted mean from B-PINNs; Black solid line: Reference solution. Our approach is able to improve the predicted mean and the predicted uncertainty significantly. The decrease in the accuracy of the predicted mean and the increase in the predicted uncertainty of our approach are caused by the lack of knowledge of the reaction model.
2D channel and a 2D cavity, and the employed non-Newtonian fluids are power-law ones. The constitutive relation in a power-law fluid is expressed as \(\mu=\mu_{0}|\mathbf{S}|^{n-1}\), where \(\mu\) is the dynamic viscosity of a fluid, \(\mu_{0}\) is a constant, \(\mathbf{S}=\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\) denotes the shear rate (\(|\mathbf{S}|=\sqrt{(\mathbf{S}:\mathbf{S})/2}\)), \(\mathbf{u}\) is the fluid velocity, and \(n\) is the power-law index. Without loss of generality, the fluids are shear-thinning (pseudoplastic, \(n<1\)) and shear-thickening (dilatant, \(n>1\)) in the channel and cavity flows, respectively.
#### 3.3.1 Channel flow
The following 2D non-Newtonian channel flow is considered here and the corresponding governing equation is expressed as [49]:
\[\begin{split}&\frac{\partial}{\partial y}(\mu(y)\frac{\partial u}{ \partial y})-\frac{\partial P}{\partial x}=f(y),\ y\in[-H/2,H/2],\\ & u(-H/2)=u(H/2)=0,\end{split} \tag{12}\]
where \(H=1\), \(\frac{\partial P}{\partial x}=c\) is a constant, and \(f(y)=0\). The analytical solution reads as
\[u(y)=\frac{n}{n+1}(-\frac{1}{\mu_{0}}\frac{\partial P}{\partial x})^{1/n}[( \frac{H}{2})^{1+1/n}-|y|^{1+1/n}]. \tag{13}\]
For this specific case, we set \(n=0.25,c=-1,H=1,\mu_{0}=0.5\) to generate the reference solution as well as training data.
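Since the profile is available in closed form, the training data can be generated directly from Eq. (13); a short NumPy sketch with the stated parameters (the sampling call is our own assumption):

```python
import numpy as np

def u_exact(y, n=0.25, c=-1.0, H=1.0, mu0=0.5):
    # Analytical power-law channel profile, Eq. (13), with dP/dx = c.
    return (n / (n + 1.0)) * (-c / mu0) ** (1.0 / n) * (
        (H / 2.0) ** (1.0 + 1.0 / n) - np.abs(y) ** (1.0 + 1.0 / n))

y = np.random.uniform(-0.5, 0.5, 30)  # 30 measurement locations for u
u_data = u_exact(y)
```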
In the first test case, we assume that we have 30 clean measurements of \(u\) and 51 clean measurements of \(f\), randomly and uniformly sampled from \([-H/2,H/2]\), respectively. We assume the fluid is a Newtonian one and the viscosity is an unknown constant. We then employ the PINN method to identify the unknown viscosity \(\mu_{1}\) in the following equation:
\[\frac{\partial}{\partial y}(\mu_{1}\frac{\partial u}{\partial y})-\frac{ \partial P}{\partial x}=f(y), \tag{14}\]
which describes Newtonian flows. Specifically, the boundary condition is hard-encoded in the equation for \(u\) so that we only have two terms in the PINN loss function (2), i.e., \(\mathcal{L}_{\mathcal{D}_{u}}\) for the data of \(u\) and \(\mathcal{L}_{PDE}\) for the equation. As shown in Fig. 2(b), we cannot minimize \(\mathcal{L}_{\mathcal{D}_{u}}\) and \(\mathcal{L}_{PDE}\) simultaneously in PINNs, regardless of the choice of belief weights for those two terms, indicating that the NN surrogate model \(u_{\theta}\) is not able to fit the non-Newtonian data and satisfy the Newtonian physical law at the same time.
Similar as in previous two examples in this section, we can add a DNN as the correction in Eq. (14) to address the model misspecification issue in PINNs. The equation with the correction term can now be expressed as follows:
\[\frac{\partial}{\partial y}(\mu_{1}\frac{\partial u}{\partial y})-\frac{ \partial P}{\partial x}+s(y)=f(y), \tag{15}\]
where \(s\) denotes the correction term. Here we treat \(\mu_{1}\) as known and set \(\mu_{1}=0.1\) since we have a DNN to model the discrepancy. We present the results from the PINNs with a correction in Fig. 9(a), and we can see that the current predictions for \(u\) and \(f\) agree with the reference solutions much better than in the case where the model is misspecified.
We note that in this specific case we can also employ other approaches to address the issue that we cannot fit the data and satisfy the equation simultaneously in PINNs if the physical model is misspecified. A natural way is to treat the viscosity in Eq. (14) as an unknown function and employ a DNN to directly learn the constitutive relation. We present the corresponding results in Fig. 9(b), in which we employ the same setup, e.g., training data, NN architecture in PINNs, as used in Fig. 2 as well as Fig. 9(a). As shown, PINNs with a DNN for learning the viscosity can now fit \(u\) and \(f\) much better than the results in Fig. 2. We then compare these results with the ones in Fig. 9(a). As shown, the predictions for \(u\) are similar, but the PINN with a DNN as correction is able to provide more accurate predictions for \(f\), i.e., fewer fluctuations are observed in Fig. 9(a) than in Fig. 9(b). Furthermore, we illustrate the loss histories for training PINNs in these two approaches in Fig. 9(c). As we can see, the training loss decreases faster and converges to smaller values in the case where we add a DNN as correction to the misspecified model, as compared to the approach in which we model the viscosity as an unknown function. We conjecture that the superior performance of the proposed approach can be attributed to the following reason: modeling the viscosity as an unknown function via a DNN requires the accurate estimate of the derivatives in Eq. (14), which is quite challenging since \(n<1\) results in very sharp gradients for the viscosity around \(y=0\). However, the correction term, i.e., \(s\) in Eq. (15), is a quite smooth function, as shown in Fig. 10. Hence, we can expect easier training as well as better accuracy from the proposed method.
We now move to the noisy-data case and employ B-PINNs to quantify the model uncertainty. The same 30 noisy measurements of \(u\) and 51 noisy measurements of \(f\) are sampled
Figure 9: 2D non-Newtonian channel flow: Correcting the misspecified physics with two different approaches, i.e. (a) modeling the discrepancy in the equation \(s(y)\) with a DNN, and (b) modeling the unknown space-dependent viscosity \(\mu(y)\) with a DNN. In (c), the PINN training losses are presented and they converge to smaller values for both \(\mathcal{L}_{\mathcal{D}_{u}}\) and \(\mathcal{L}_{PDE}\) in the case where we add a DNN as correction as compared to the one in which we use a DNN to approximate the viscosity.
but now they are corrupted by additive Gaussian noise with \(0.005\) and \(0.05\) noise scales, respectively. In B-PINNs, we test two specific cases, i.e., (1) we assume the fluid is Newtonian and encode Eq. (14) in B-PINNs. The objective is then to infer the unknown viscosity given data, similar as the case in Fig. 2; and (2) we are not sure if the fluid is Newtonian or not, we then add a DNN as the correction to the possible misspecified model, i.e., Eq. (15). Also, we note that in this specific case \(s\) can be computed analytically by plugging the analytic solution of the power-law fluid into Eq. (15): \(s(y)=4\mu_{1}c^{4}|y|^{3}/\mu_{0}^{4}+c\), which serves as the reference in what follows.
The results from B-PINNs are illustrated in Fig. 10. As observed, the NN surrogate \(u_{\theta}\) is not able to fit the data and satisfy the misspecified physical model at the same time (Fig. 10(a)). In addition, with an additional DNN correcting the misspecified physical model of the previous case, \(u_{\theta}\) can now fit the data and satisfy the corrected physical model simultaneously, as shown in Fig. 10(b). Also, we can see that the computational errors between the predicted mean and the reference solution for \(s\) are bounded by the predicted uncertainties in the entire computational domain.
#### 3.3.2 Cavity flow
Next, we consider steady non-Newtonian flow in a two-dimensional lid-driven square cavity (i.e., \(x,y\in[0,1]\)), which can be described by the incompressible Navier-Stokes equations
Figure 10: 2D non-Newtonian channel flow: Predictions for \(u\), \(f\) and the discrepancy \(s\) from B-PINNs with noisy data. B-PINNs with (a) misspecified model, and (b) misspecified model plus a correction.
as
\[\nabla\cdot\mathbf{u} =0, \tag{16a}\] \[\mathbf{u}\cdot\nabla\mathbf{u} =-\nabla P+\nabla\cdot[\nu(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T})]+\mathbf{f}, \tag{16b}\]
where \(\mathbf{u}=(u,v)\) denotes the velocity in \(x-\) and \(y-\) directions, respectively; \(P\) is the pressure; and \(\nu\) is the kinematic viscosity. In particular, the boundary condition for the upper wall is expressed as
\[u=U\left(1-\frac{\cosh[r(x-\frac{L}{2})]}{\cosh(\frac{rL}{2})}\right),\;v=0, \tag{17}\]
where \(U\), \(r\), and \(L\) are constants; specifically, \(r=10\) and \(L=1\), where \(L\) is the length of the cavity. In addition, the remaining walls are stationary. Here, the viscosity model is the same as the one used in the channel flow, i.e., \(\mu=\rho\nu=\mu_{0}|\mathbf{S}|^{n-1}\). The objective is to learn the governing equations, which explain the data given measurements on \(\mathbf{u}\), \(P\), and \(\mathbf{f}\). In particular, the training data as well as the reference solution in this case are generated by solving Eq. (16) using the lattice Boltzmann equation model (LBE) in [49] with \(n=1.5\). Details for the computations in LBE can be found in C.
Similar as in Sec. 3.3.1, suppose we misspecify the governing equations as the ones for Newtonian flows, and we then employ the present approach to correct the model misspecification as follows:
\[\nabla\cdot\mathbf{u} =0, \tag{18a}\] \[\mathbf{u}\cdot\nabla\mathbf{u} =-\nabla P+\nu_{0}\nabla^{2}\mathbf{u}+\mathbf{f}+\mathbf{s}, \tag{18b}\]
where \(\mathbf{s}=(s_{x},s_{y})\) are the two components correcting the momentum equations in \(x-\) and \(y-\) directions, respectively, and \(f_{x}(x,y)=f_{y}(x,y)=0\) for \(x,y\in[0,1]\). The continuity equation Eq. (18a) is correctly specified and hence does not need correction.
We assume that we have 400 random measurements of \(u\), 400 random measurements of \(v\), and \(81\times 81\) uniformly distributed measurements of \(P\) from the reference solution. Here we only consider the case with clean data and the ensemble PINNs are employed for quantifying the model uncertainty. We would like to mention that we do not use B-PINNs here since several hundred training data are needed to achieve meaningful results, which is computationally expensive when HMC is used for posterior estimation. Similarly, we test two specific cases, i.e., (1) the viscosity in Eq. (18b) is an unknown constant and we do not use the correction \(\mathbf{s}\) in PINNs, and (2) we have an initial guess for the viscosity in Eq. (18b) and we add \(\mathbf{s}\) in PINNs as the correction for the possible misspecification. Specifically, we use one DNN to model the velocity \(\mathbf{u}\), one DNN to approximate the pressure \(P\), and one DNN to model the discrepancy in Eq. (18b), i.e. \(\mathbf{s}\). In particular, we set \(\nu_{0}=0.0001\) as the initial guess in Case (2), which does not affect the computational accuracy of PINNs since the model discrepancy is already absorbed by \(\mathbf{s}\).
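The corrected residuals of Eq. (18) can be assembled with automatic differentiation as sketched below; for brevity a single hypothetical network outputs \((u,v,P)\), whereas the paper uses separate networks for the velocity and the pressure.

```python
import torch

def ns_residuals(uvp_net, s_net, xy, nu0=1e-4):
    # Residuals of Eq. (18): continuity plus x-/y-momentum with the vector
    # correction s = (s_x, s_y); f_x = f_y = 0 in this example.
    xy = xy.clone().requires_grad_(True)
    u, v, p = uvp_net(xy).split(1, dim=1)

    def grad(w):
        return torch.autograd.grad(w, xy, grad_outputs=torch.ones_like(w),
                                   create_graph=True)[0]

    u_x, u_y = grad(u).split(1, dim=1)
    v_x, v_y = grad(v).split(1, dim=1)
    p_x, p_y = grad(p).split(1, dim=1)
    u_xx, u_yy = grad(u_x)[:, 0:1], grad(u_y)[:, 1:2]
    v_xx, v_yy = grad(v_x)[:, 0:1], grad(v_y)[:, 1:2]
    s_x, s_y = s_net(xy).split(1, dim=1)

    cont = u_x + v_y
    mom_x = u * u_x + v * u_y + p_x - nu0 * (u_xx + u_yy) - s_x
    mom_y = u * v_x + v * v_y + p_y - nu0 * (v_xx + v_yy) - s_y
    return cont, mom_x, mom_y
```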
We present the predictions for \(\mathbf{u}\) in Fig. 11. As we can see, PINNs with misspecified model lead to inaccurate inference for both \(u\) and \(v\) (Fig. 11(a)). The proposed approach, however, achieves much better results by adding a DNN to correct the model misspecification. We further illustrate the computational errors for both cases in Table 4, which clearly shows the effectiveness as well as superiority of the present method. In addition, we show
some representative 1D slices for the predicted \(u,v,f_{x}\), and \(f_{y}\) with uncertainties in Fig. 12. Similarly, the PINNs with misspecified model cannot fit the data and satisfy the physical model at the same time (Fig. 12(a)), while this issue can be handled well as we add a DNN to correct the misspecified model, as shown in Fig. 12(b).
| | Error of \(u\) | Error of \(v\) | Error of \(f_{x}\) | Error of \(f_{y}\) |
| --- | --- | --- | --- | --- |
| Misspecified model | 7.79% | 11.28% | \(3.9153\times 10^{-4}\) | \(3.1466\times 10^{-4}\) |
| Misspecified model with correction | 1.33% | 2.29% | \(7.8389\times 10^{-7}\) | \(8.2283\times 10^{-7}\) |

Table 4: 2D non-Newtonian cavity flow: Errors of the predicted means from ensemble PINNs. The error of \(u/v\) is defined as the relative \(L_{2}\) error, while the error of \(f_{x}/f_{y}\) is the mean squared error since the reference solutions are \(f_{x}=0\) and \(f_{y}=0\).
Figure 11: Non-Newtonian cavity flow: Predictions from ensemble PINNs for \(u\) and \(v\) in contours. Red dashed line: Predicted mean; Black solid line: Reference solution.
Figure 12: Non-Newtonian cavity flow: Predictions for \(u,v,f_{x},f_{y}\) in 1D slices. Red dashed line: predicted mean; black solid line: reference solution; uncertainty bound: predicted uncertainty.
## 4 Summary
We presented a general approach to improve the accuracy of physics-informed neural networks (PINNs) for discovering governing equations from data when the physical model is misspecified. In particular, we first assume mathematical models for the specific problem considered and encode them in PINNs via automatic differentiation. We note that the assumed models may be misspecified in certain complex systems since not all the physical processes are fully understood. We then utilize another deep neural network (DNN) to model the discrepancy between the imperfect models and the observational data in order to correct the misspecified model. In addition, to quantify uncertainties arising from noisy and/or gappy data, we employ two typical uncertainty-induced PINN methods, i.e., the Bayesian physics-informed neural networks (B-PINNs) and ensemble PINNs. A series of numerical examples are conducted to show the effectiveness of the proposed method. Specifically, we considered (1) an ODE system and a reaction-diffusion system in which we misspecified the reaction model, and (2) non-Newtonian flows in a two-dimensional channel and cavity in which we misspecified the constitutive relation as a Newtonian one. The results show that the present approach can achieve good accuracy since the added DNN is able to correct the misspecified physical models encoded in PINNs. Also, the B-PINNs and/or ensemble PINNs can provide reasonable uncertainty bounds for the discovered physical models, which reflects the uncertainties in physical models, i.e., _model uncertainty_. Furthermore, we showcase that we can seamlessly combine the proposed method with symbolic regression to obtain explicit governing equations based on the trained NNs. Finally, we expect the proposed approach to be a promising tool in a wider class of applications for solving problems where the physical processes are not fully understood.
## Acknowledgement
This work was supported by the MURI/AFOSR project (FA9550-20-1-0358), the DOE-MMICS SEA-CROGS project (DE-SC0023191), the NIH grant Neural Operator Learning to Predict Aneurysmal Growth and Outcomes (R01HL168473), and the NIH grant CR-CNS: Waste-clearance flows in the brain inferred using physics-informed neural networks (R01AT012312). Z.Z. thanks Dr. Khemraj Shukla from Brown University for helpful discussion.
|
2306.00283 | Autism Disease Detection Using Transfer Learning Techniques: Performance
Comparison Between Central Processing Unit vs Graphics Processing Unit
Functions for Neural Networks | Neural network approaches are machine learning methods that are widely used
in various domains, such as healthcare and cybersecurity. Neural networks are
especially renowned for their ability to deal with image datasets. During the
training process with images, various fundamental mathematical operations are
performed in the neural network. These operations include several algebraic and
mathematical functions, such as derivatives, convolutions, and matrix
inversions and transpositions. Such operations demand higher processing power
than what is typically required for regular computer usage. Since CPUs are
built with serial processing, they are not appropriate for handling large image
datasets. On the other hand, GPUs have parallel processing capabilities and can
provide higher speed. This paper utilizes advanced neural network techniques,
such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST
VGG16, and our proposed models, to compare CPU and GPU resources. We
implemented a system for classifying Autism disease using face images of
autistic and non-autistic children to compare performance during testing. We
used evaluation matrices such as Accuracy, F1 score, Precision, Recall, and
Execution time. It was observed that GPU outperformed CPU in all tests
conducted. Moreover, the performance of the neural network models in terms of
accuracy increased on GPU compared to CPU. | Mst Shapna Akter, Hossain Shahriar, Alfredo Cuzzocrea | 2023-06-01T01:59:17Z | http://arxiv.org/abs/2306.00283v1 | Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison Between Central Processing Unit vs Graphics Processing Unit Functions for Neural Networks
###### Abstract
Neural network approaches are machine learning methods that are widely used in various domains, such as healthcare and cybersecurity. Neural networks are especially renowned for their ability to deal with image datasets. During the training process with images, various fundamental mathematical operations are performed in the neural network. These operations include several algebraic and mathematical functions, such as derivatives, convolutions, and matrix inversions and transpositions. Such operations demand higher processing power than what is typically required for regular computer usage. Since CPUs are built with serial processing, they are not appropriate for handling large image datasets. On the other hand, GPUs have parallel processing capabilities and can provide higher speed. This paper utilizes advanced neural network techniques, such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed models, to compare CPU and GPU resources. We implemented a system for classifying Autism disease using face images of autistic and non-autistic children to compare performance during testing. We used evaluation matrices such as Accuracy, F1 score, Precision, Recall, and Execution time. It was observed that GPU outperformed CPU in all tests conducted. Moreover, the performance of the neural network models in terms of accuracy increased on GPU compared to CPU.
Keywords: Autism Disease, Neural Network, CPU, GPU, Transfer Learning;
## I Introduction
Nowadays, GPU technology efficiently trains and tests deep learning architectures through parallelizable mathematical procedures. GPUs consist of multiple cores that accelerate complex computational operations [1]. The number of processing units (cores) that can operate independently is crucial in determining the level of parallelization possible. There is a substantial difference between GPUs with thousands of cores and CPUs with only four or eight cores [2]. The more cores available, the greater the possible parallelization. While GPUs have many more cores, individual CPU cores operate at a higher clock frequency. GPUs are vital for the mathematical procedures in Neural Networks [3]. Neural Networks have proven to be very successful in solving several real-life problems. Ensuring high accuracy while working with diseases, such as Autism Spectrum Disorder (ASD), is crucial [4]. ASD is a developmental disability affecting the brain and is also associated with genetic conditions. Some causes of ASD are known, while others are still unknown [5]. Patients with ASD exhibit different behavior, communication, interaction, and learning styles compared to ordinary people [6]. While some patients can live and work like ordinary people without any support, many patients require assistance from others to live their life. Some patients may have advanced conversation skills, while others may be nonverbal. Usually, ASD starts to develop before the age of three, but some children may show symptoms as early as 12 months. In the first 12 to 24 months, they gain knowledge and skills, but may later stop learning, making it difficult to communicate with peers and adults, make new friends, and understand complex concepts. People with ASD are also more likely to experience serious issues such as anxiety, depression, and attention-deficit/hyperactivity disorder compared to those without ASD [7]. Early detection of ASD is crucial as it significantly decreases symptoms and has a high chance of improving the quality of life. Medical tests such as blood tests and symptom checks are the most common ways of detecting ASD. The symptoms are usually observed by parents or teachers, and healthcare providers evaluate how closely the signs and symptoms reported by the parents match known indicators of ASD.
This study compares the performance of CPU and GPU on different devices for autism disease detection using face images. We evaluate well-known deep learning frameworks such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBoost-VGG16, and our proposed models using energy-related metrics, and offer power measurements for each of these frameworks using a collection of carefully chosen networks and layers. We assess various processor architectures, including a Dell Gaming Laptop with an 11th Gen Intel Core i7 operating at 2.30 GHz, 16.0 GB of RAM, and 512 GB of SSD storage, as well as a cloud-based service (Google Colab) for testing. We also investigate the impact of various hardware configurations on performance and energy efficiency.
Our contributions can be summarized as follows:
1. We classify Autism disease using advanced Neural Network models such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, and XGBOOST-VGG16, and show which model performs better on this dataset.
2. We propose our Stacking Neural Network model using the aforementioned six transfer learning models and provide a comparative analysis with traditional models.
3. We conduct a comparison study on the power usage of well-known pre-trained CNN frameworks on various hardware (i.e., Dell Gaming laptop and Google Colab). Our findings provide insights into how the application's performance and energy efficiency depend on the underlying hardware configuration.
## II Background and Literature Review
Limited work has been done in analyzing the performance of CPU and GPU on various devices using a Neural Network approach. To gain insight into this, we reviewed various research papers in the field of machine learning and deep learning. In addition, Autism Spectrum Disorder (ASD) is a crucial topic in healthcare, and many researchers have investigated it using various approaches with up-to-date models. We summarize some of the relevant research papers below:
Raj and Masood [8] proposed machine learning techniques for the analysis and detection of autism spectrum disorder. They focused on three age groups: children, adolescents, and adults, using classifiers such as Support Vector Machine (SVM), KNN, Naive Bayes, Logistic Regression, Neural Networks, and Convolutional Neural Networks. Their dataset includes parameters such as nationality, problems at birth, any family member suffering from pervasive development disorders, sex, whether a screening application was used before, and screening test type. They achieved 99.53%, 98.30%, and 96.88% accuracy for adults, children, and adolescents, respectively. Al-diabat [9] proposed a fuzzy data mining approach for the autism classification of children using the ASDtest app dataset. They compared the performance of the fuzzy data mining algorithms FURIA, JRIP, RIDOR, and PRISM; the accuracy of their proposed models was less than 90 percent. Xie et al. [10] proposed a deep learning model for detecting atypical visual attention in ASD using a two-stream end-to-end network. They used the VGG16 model, including 13 Conv layers, five max-pooling layers, and two fully connected layers, and their proposed model achieved an accuracy of 95%. Liu et al. [11] created a benchmark dataset and proposed multimodal machine learning for ASD detection. The dataset included the spontaneous interaction of adults and children, and the experimental work was carried out using machine learning models such as SVM and RF, achieving 70% accuracy. Duan et al. [12] proposed a system for predicting ASD using human faces and analyzed the visual attention of 300 human faces and their corresponding eye movement images using the CASNET model. De Campos Souza and Guimaraes [13] proposed a fuzzy neural network to predict children with autism and developed a mobile application that uses question-and-answer-based inputs to generate predictions. Xu et al. [14] proposed child vocalization composition as discriminant information for automatic autism detection, achieving 85% to 90% accuracy using speech datasets. Tao and Shyu [15] proposed a CNN-LSTM-based ASD classification model using observer scan paths, achieving 74.22% accuracy with 300 eye movement samples from ASD children. Akter et al. [16] proposed a machine learning-based model for early-stage ASD detection for children, adolescents, and adults using SVM, AdaBoost, and GLMboost models. The AdaBoost model showed the best performance for the children's dataset, GLMboost for adolescents, and AdaBoost for the adult dataset.

Hangun and Eyecioglu [17] analyzed the performance of CPU and GPU resources for image processing operations using Otsu's method and edge detection, finding that GPU resources provide better performance than CPU. Similarly, Buber and Banu [18] compared the performance of CPU and GPU for deep learning techniques using Recurrent Neural Network (RNN), word embedding, and transfer learning models. They adjusted some hyperparameters to ensure that the models were optimized for each device. The experimental work was carried out on a Tesla K80 GPU and an Intel Xeon Gold 6126 CPU. They found that the GPU was generally faster than the CPU for all three models, with speedups of 1.9x and 3.3x for the RNN and transfer learning models, respectively. The word embedding model showed the highest speedup, with the GPU being 4.2x faster than the CPU. Asano et al. [19] compared the performance of FPGA, GPU, and CPU for image processing tasks.
They used three different techniques, including two-dimensional filters, stereo-vision, and k-means clustering, to evaluate the performance of the different devices. They found that while FPGA had a lower operational frequency than CPU and GPU, it could still achieve extremely high performance for image processing tasks. Baykal et al. [20] compared the performance of CPU and GPU on big data using deep learning models, including artificial neural networks (ANN) with a multi-layered structure, deep learning, and convolutional neural networks (CNN). They found that the TensorFlow platform allowed for an extremely effective parallel execution environment on GPUs, reducing execution time by 4 times compared to CPU execution. These studies demonstrate the importance of considering hardware configurations when developing and implementing deep learning models. Understanding the performance of different devices can help researchers and practitioners optimize their models and improve their accuracy, efficiency, and speed.
## III Materials and Methods
This paper evaluates generalized Neural Network architectures for classifying ASD, as shown in Figure 1.
Images are preprocessed using popular techniques from the Keras and TensorFlow libraries and fed into classical neural networks such as VGG16, Resnet-50, Mobilenet, Densenet, Inceptionv3, Xception, XGBOOST-VGG16, and our proposed model.
Fig. 1: Proposed Architecture for Autism Disease classification.
### _Dataset_
This paper utilizes a publicly available dataset from the popular Kaggle website, consisting of 2936 facial images evenly divided between ASD and TD children. The contributor collected the dataset through an internet search, stating that they could not obtain ASD images from clinics or verified sources [21]. The original dataset contained more than 3000 images, including incorrect ASD images; after removing these, the data used in this study comprise only images appropriate for experimental work [21]. The dataset includes 89% white children and 11% brown/black children, and the purpose of using these images is to efficiently detect ASD using facial features.
### _Preprocessing_
Most image datasets come with inconsistent image shapes, which are unsuitable as direct inputs to Neural Networks. The transfer learning models used here expect a consistent input size of 224x224 pixels. We therefore prepared the dataset by resizing the images from their original sizes to 224x224 pixels while keeping 3 channels; we did not go below 224x224, as lower resolution can negatively impact the model's performance. We retained the noise in the images so that the model learns in its presence, which helps it recognize ASD accurately under real-life conditions.
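As a concrete illustration of this step, the following is a minimal sketch of loading and resizing the images with TensorFlow/Keras; the directory layout, batch size, and rescaling are illustrative assumptions, not taken from the authors' code.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # input size expected by the transfer learning backbones

# Hypothetical directory with one subfolder per class (ASD / non-ASD).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "autism_faces/train",
    labels="inferred",
    label_mode="binary",   # two classes: autistic vs. non-autistic
    image_size=IMG_SIZE,   # every image is resized to 224x224 on load
    batch_size=32,
)

# Pixel values are only rescaled; image noise is deliberately retained.
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
```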
### _Classification Models and Fine Tuning_
Convolutional Neural Network (CNN) is a subfield of conventional Neural Networks that comprises three elements: one or more convolutional layers, a pooling layer, and one fully connected layer. The first element is the fundamental building block of the CNN model and calculates the dot product between the kernel and a portion of the input image. The kernel's spatial size is much smaller than the input image, while its depth matches the number of input channels. The pooling layer, the second component of the CNN model, helps reduce computational complexity by discarding some parameters. Lastly, the Softmax function of the fully connected layers provides the probability of the input belonging to a particular class.
#### Iii-C1 Inceptionv3
Szegedy et al. [22] first released Inceptionv3 in 2015. It is a Convolutional Neural Network used to analyze image datasets for tasks such as segmentation, object localization, and image classification [23]. Inceptionv3 is an expanded form of the GoogleNet paradigm. This model reduces the number of parameters that must be trained by combining several convolutional filters of varying sizes into a single module, which reduces computational complexity. Therefore, classification performance remains good even with fewer parameters. Inceptionv3 has gained popularity among researchers as it can be trained well over large datasets. To fine-tune the model on our dataset, we downloaded the base model and added a GlobalAveragePooling2D layer and a dropout (0.5) layer. We also included a dense layer with a "sigmoid" activation, followed by Dropout and Softmax layers with outputs at the bottom of the architecture. Finally, we fine-tuned the model on 2936 sample images for 30 epochs using the Stochastic gradient descent (SGD) optimizer and a learning rate of 0.0001. The trainable parameters of the Inceptionv3 model are 21,770,401.
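A hedged sketch of the fine-tuning head described above is shown next; the layers and optimizer follow the text, while the loss function and the reuse of `train_ds` from the preprocessing sketch are assumptions.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # pooling layer added after the base model
    layers.Dropout(0.5),                    # dropout (0.5), as in the text
    layers.Dense(1, activation="sigmoid"),  # binary ASD vs. non-ASD output
])

model.compile(
    optimizer=optimizers.SGD(learning_rate=1e-4),  # SGD, learning rate 0.0001
    loss="binary_crossentropy",                    # assumed binary loss
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=30)                     # 30 epochs, as in the text
```

The same head-replacement pattern applies to the other backbones below, with only the base model and the added layers changing.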
#### Iii-C2 Xception
The Xception model was created by Chollet [24], a Google researcher. Xception is a deep convolutional neural network that uses a linear combination of residual connections and depth-wise separable convolution. The Inception model was used as a foundation for this unique deep neural network design, which substitutes depth-wise separable convolutions for the Inception model's inception modules. The architecture is less complex and more effective than existing deep convolutional neural networks because it is built entirely from depth-wise separable convolutions. After downloading the base model, we further added a Flatten() layer, BatchNormalization(), a Dense layer with 128 neurons and a ReLU function, followed by BatchNormalization(), and a final Dense layer with one neuron and a sigmoid function. Finally, using the Stochastic gradient descent (SGD) optimizer and a learning rate of 0.0001 on 2936 sample images for 30 epochs, we fine-tuned the model. The trainable parameters of the Xception model are 33,853,225.
#### Iii-C3 Densenet
Gao Huang, Zhuang Liu, and their colleagues introduced the Densely Connected Convolutional Network (Densenet) in 2017 [25]. This deep convolutional neural network connects each layer to every other layer in a feed-forward fashion, taking advantage of dense connections between the layers. Densenet requires fewer parameters for training the model and mitigates the vanishing gradient problem, which makes it ideal for use in computer vision. After downloading the base model, we added a GlobalAveragePooling2D layer and a dropout layer (with a rate of 0.5). Additionally, we included a dense layer with "sigmoid" activation, followed by Dropout and Softmax layers with outputs at the bottom of the architecture to fine-tune the model on the dataset. Finally, using the Stochastic gradient descent (SGD) optimizer and a learning rate of 0.0001 on 2936 sample images for 30 epochs, we fine-tuned the model.
Fig. 2: Sample of Dataset
The Densenet model has 6,954,881 trainable parameters.
#### Iv-B4 Mobilenet
The MobileNet architecture was first introduced by Andrew G. Howard et al. [26]. It is a lightweight deep neural network model that reduces computation time and cost. MobileNet substitutes depthwise separable convolutions for conventional convolutions to minimize the model size and processing. Using a factorization technique called depthwise separable convolution, a normal convolution is split into two convolutions: a depthwise convolution and a pointwise convolution. A pointwise convolution is a 1x1 convolution. The depthwise convolution filters the input, while the pointwise convolution combines the filtered channels; depthwise separable convolution is the composition of the two. Except for the top layer, MobileNet's architecture is composed of separable convolutions. After downloading the base model, we included a GlobalAveragePooling2D layer and a dropout (0.5) layer. To further fine-tune the model on the dataset, we included a dense layer with "sigmoid" activation, followed by Dropout and Softmax layers with outputs at the bottom of the architecture. Finally, we fine-tuned the model using the Stochastic gradient descent (SGD) optimizer and a learning rate of 0.0001 on 2936 sample images for 30 epochs. The trainable parameters of the MobileNet model are 3,208,001.
#### Iv-B5 Resnet-50
Residual Neural Networks (ResNet) are a type of Artificial Neural Network (ANN) built by stacking residual blocks. The most well-known ResNet networks are ResNet-34, ResNet-50, and ResNet-101, with variants distinguished by the number of layers; ResNet-50, for instance, employs 50 layers. Using many layers to address complex problems is beneficial because each layer can be trained for different levels of features. However, the deeper the network, the more likely it is to encounter a degradation problem, typically caused by the initialization of the network, the optimization algorithm, or vanishing or exploding gradients. The ResNet model prevents these problems by using skip connections, which are at the center of the residual blocks. Skip connections are the strength of the ResNet model and serve two purposes. First, they create an alternative path for the gradient, mitigating the vanishing gradient problem. Second, they allow the model to learn an identity function, ensuring that the upper layers perform at least as well as the lower layers. Since the model has been trained on more than a million images, it can successfully classify our limited dataset. After downloading the base model, we included a Flatten() layer, BatchNormalization(), a Dense layer with 128 neurons and a ReLU function, again BatchNormalization(), and a final Dense layer with one neuron and a sigmoid function. Lastly, using the Stochastic gradient descent (SGD) optimizer and a learning rate of 0.0001 on 2936 sample images for 30 epochs, we fine-tuned the model. The trainable parameters of the ResNet-50 model are 23,796,993.
#### Iv-B6 Vgg-16
VGG-16 is one of the most widely used deep neural networks for interpreting visual input. It is considered one of the best-performing vision architectures of its time, winning the 2014 ILSVRC (ImageNet) competition. The network has 138 million parameters, and its name comes from the fact that it has 16 weighted layers. VGG16's distinctive feature is that it retains 3x3 convolution filters with stride 1 and same padding, and 2x2 max-pooling layers with stride 2, throughout the entire architecture. The model's final layers consist of two FC (fully connected) layers, followed by a softmax output. After downloading the base model, we included GlobalMaxPooling2D(), a Dense layer with 512 neurons followed by a ReLU function, a Dropout(0.5) layer, and a final Dense layer with one neuron and a sigmoid function. Lastly, using the Stochastic gradient descent (SGD) optimizer and a learning rate of 0.0001 on 2936 sample images for 30 epochs, we fine-tuned the model. The trainable parameters of the VGG16 model are 14,977,857.
#### Iv-B7 Proposed model
We utilized the stacking ensemble learning approach to develop our proposed system, with logistic regression as the meta-learner and heterogeneous weak learners as the base models. Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, and VGG-16 were used as the heterogeneous base models, taking three-dimensional images as inputs. The data separation differed from that of the traditional models: 60% of the data was used for training, 10% for validation, and 30% for testing. This modification was necessary to avoid overfitting during the meta-learner training phase. The Level-0 predictions already contain class probabilities, which serve as informative inputs to the meta-learner. To prevent overfitting, the final model (meta-learner) was trained on the validation dataset together with the Level-0 outputs, and the Level-1 prediction is the final result. The stacking ensemble produces a final model (meta-learner) from the predictions of several other models. The disadvantage of a typical stacking ensemble is that it employs the same kind of model for all base learners, which leads to similar estimates: if the base model performs poorly on the dataset, the final output is likely to be poor as well. Moreover, a single neural network model can be biased and unstable on a given dataset. As a result, different model families were chosen when constructing the base level.
The architecture is divided into two levels, Level 0 and Level 1, as shown in Figure 3. Level 0 is built from the Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, and VGG-16 models. After learning the data pattern, the six submodels each make predictions concurrently, and each model in Level 0 contributes equally to the overall model. Level 1, also known as the meta-learner, is built with logistic regression. The meta-learner at Level 1 is fed the predicted Level-0 outputs as input and estimates the best weighting of those outputs. A "meta-learner" is a model that can quickly learn a pattern or adapt to different datasets with a small amount of training data; here it learns the patterns of the outputs produced by the six base models. As a result, the model generalizes effectively to new data and produces acceptable output. The parameters of this model are the combination of the six transfer learning models' learnable parameters.
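The following is a minimal sketch of this two-level design, assuming each fine-tuned base model exposes a Keras `.predict()` returning ASD probabilities; variable names such as `X_val` are illustrative placeholders, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def level0_predictions(base_models, images):
    """Stack each base model's predicted probability as one meta-feature column."""
    return np.column_stack([m.predict(images).ravel() for m in base_models])

# base_models = [inceptionv3, xception, densenet, mobilenet, resnet50, vgg16]
# X_val / y_val: the held-out 10% validation split used to fit the meta-learner,
# so Level 1 never trains on the data Level 0 was fitted on (avoids overfitting).
meta_learner = LogisticRegression()
meta_learner.fit(level0_predictions(base_models, X_val), y_val)

# The Level-1 prediction on the 30% test split is the final output.
y_pred = meta_learner.predict(level0_predictions(base_models, X_test))
```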
#### Iii-C8 XGBoost-VGG16
At the beginning of our process, we extracted the desired features using the traditional VGG16 model. We then used a popular classifier called XGBoost to classify the features. Extreme Gradient Boosting (XGBoost) is a distributed, scalable gradient-boosted decision tree (GBDT) machine learning framework. It offers parallel tree boosting for regression, classification, and ranking problems. In supervised machine learning, a model is trained using algorithms to discover patterns in a dataset of features and labels, and the model is then used to predict the labels on the features of a new dataset. Gradient boosting is a supervised learning approach that combines an ensemble of estimates from a variety of weaker models to predict a target variable accurately. The XGBoost algorithm performs well on machine learning challenges because of its efficient handling of a wide range of data types, relationships, and distributions, and because of its adjustable hyperparameters. Regression, classification (binary and multiclass), and ranking problems can all be addressed using XGBoost.
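A hedged sketch of this pipeline is shown below, with VGG16 as a frozen feature extractor feeding XGBoost; the pooling choice and XGBoost hyperparameters are assumptions, not the paper's settings.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from xgboost import XGBClassifier

extractor = models.Sequential([
    VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    layers.GlobalAveragePooling2D(),  # one feature vector per image
])

train_feats = extractor.predict(X_train)  # X_train: preprocessed 224x224x3 images
test_feats = extractor.predict(X_test)

clf = XGBClassifier(n_estimators=200, learning_rate=0.1)  # illustrative settings
clf.fit(train_feats, y_train)
print("test accuracy:", clf.score(test_feats, y_test))
```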
### _Evaluation Metrics_
It is important to evaluate the performance of a model to understand how well the predicted results match the actual ones [27]. Evaluation metrics are used to measure a model's performance, but the choice of metrics depends on the type of model [28]. There are two types of models: classification and regression. Regression is used when predicting a numerical value [31], while classification is used when predicting a discrete value [32], [33]. The error measure is used to evaluate regression models [29], while classification models are evaluated using the accuracy metric. In our study, we aim to classify ASD patterns of face shape [30], and therefore, we utilized accuracy, F1 score, precision, and recall as our evaluation metrics.
**Precision**: Precision measures how often the model is correct when it predicts a positive value. It is informative when False Positives are costly. If the model's precision is low, many non-ASD images will be mistakenly identified as having ASD; if the precision is high, the model produces few such false alarms. The following formula is used to calculate precision:
\[\text{Precision}=\frac{\textit{TP}}{\textit{TP}+\textit{FP}} \tag{1}\]
Here, TP refers to True Positive values, and FP refers to False Positive values.
**Recall**: Recall is the counterpart of Precision. It measures how well the model identifies a positive value when the true value is positive. It is informative when False Negatives (FN) are costly. If the model provides limited recall, many ASD images will be classified as non-ASD in the ASD classification problem; if it provides high recall, few ASD images are missed. The following formula is used to calculate recall:
\[\text{Recall}=\frac{\textit{TP}}{\textit{TP}+\textit{FN}} \tag{2}\]
**F1 score**: The F1 score combines Precision and Recall into a single measure of the model's overall accuracy. The F1 score ranges from 0 to 1; it equals 1 when the predictions perfectly match the labels and 0 in the worst case. The following formula is used to calculate the F1 score:
\[\text{F1 score}=\frac{2\cdot\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}} \tag{3}\]
**Accuracy**: Accuracy measures how well the predicted output matches the actual value.
\[\text{Accuracy}=\frac{\textit{TP}+\textit{TN}}{\textit{TP}+\textit{TN}+\textit{FP}+\textit{FN}} \tag{4}\]
Here, TN refers to True Negative, and FN refers to False Negative.
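These four metrics can be computed directly with scikit-learn; `y_true`, `X_test`, the fitted `model`, and the 0.5 threshold below are illustrative placeholders.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Threshold the sigmoid outputs of a fitted model into hard 0/1 labels.
y_pred = (model.predict(X_test).ravel() > 0.5).astype(int)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```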
### _Experimental settings_
This paper uses two devices: a physical device and an online device. We refer to the physical device as Device 1 and the online device as Device 2.
Fig. 3: Proposed Architecture for Autism Disease classification.
#### Iii-C1 Device 1
**Device Specification details:**
- Device name: Dell G15 15.6" FHD 120Hz Gaming Laptop
- Processor: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30 GHz
- Installed RAM: 16.0 GB (15.7 GB usable)
- System type: 64-bit operating system, x64-based processor
- Storage: 512 GB Solid State Drive
- GPU: NVIDIA GeForce RTX 3050
**GPU Configuration:** NVIDIA released the GeForce RTX 3050 8 GB on January 4, 2022, as a performance-segment graphics card [34]. The card supports DirectX 12 Ultimate and is based on the GA106 graphics processor in its GA106-150-KA-A1 variant, manufactured using an 8 nm process. This ensures that the GeForce RTX 3050 8 GB can run all current games, and the DirectX 12 Ultimate capabilities will support hardware raytracing, variable-rate shading, and other features in future titles. The GA106 graphics processor has a die area of 276 mm2 and 12,000 million transistors, making it a chip of ordinary size [35]. Unlike the fully unlocked GeForce RTX 3060 8 GB, which uses the same GPU with all 3584 shaders activated, NVIDIA has disabled some shading units on the GeForce RTX 3050 8 GB to reach the product's target shader count. It has 2560 shading units, 80 texture mapping units, and 32 ROPs. Eighty tensor cores are also present, which help machine learning applications run faster, and the GPU includes twenty acceleration cores for raytracing. The GeForce RTX 3050 8 GB pairs 8 GB of GDDR6 memory with a 128-bit memory interface. The memory runs at 1750 MHz (14 Gbps effective), and the GPU operates at 1552 MHz with a boost to 1777 MHz. The NVIDIA GeForce RTX 3050 8 GB is a dual-slot card that uses one 8-pin power connector with a maximum power draw of 130 W. Display outputs include 1x HDMI 2.1 and 3x DisplayPort 1.4a. A PCI-Express 4.0 x8 interface links the GeForce RTX 3050 8 GB to the rest of the hardware. The card has a dual-slot cooling system and measures 242 mm long by 112 mm wide [35].
#### Iii-C2 Device2
Google Colaboratory (CoLab) is an open-source platform provided by Google Research. Colab is particularly well-suited to machine learning, data analysis, and education. It enables anyone to create and execute arbitrary Python code through the browser [36]. Moreover, Colab is a hosted Jupyter notebook service that requires no installation and offers free access to computer resources such as GPUs [37].
**Device Specification details:**
- Device name: Google Colab
- CPU: Intel(R) Xeon(R) CPU @ 2.20GHz
- Memory: 12 GB
- Storage: 86 GB
- GPU: Tesla T4
**GPU Configuration:**
NVIDIA introduced the Tesla T4 professional graphics card on September 13, 2018. The card supports DirectX 12 Ultimate and is based on the TU104 graphics processor in its TU104-895-A1 variant, manufactured using a 12 nm process with 13,600 million transistors and a die area of 545 mm2, making the TU104 a large chip. Unlike the fully unlocked GeForce RTX 2080 SUPER, which uses the same GPU with all 3072 shaders enabled, NVIDIA has disabled some shading units on the Tesla T4 to reach the product's target shader count. The Tesla T4 has 2560 shading units, 160 texture mapping units, and 64 ROPs. It also includes 320 tensor cores, which accelerate machine learning applications, and forty acceleration cores for raytracing. The Tesla T4 is coupled with 16 GB of GDDR6 memory using a 256-bit memory interface. The memory runs at 1250 MHz (10 Gbps effective), while the GPU operates at 585 MHz and can boost up to 1590 MHz [38]. The NVIDIA Tesla T4 is a single-slot card that does not require a separate power connector, and its maximum power usage is 70 W. As it is not intended to have monitors connected to it, this device lacks display connectivity. The Tesla T4 is linked to the rest of the hardware through a PCI-Express 3.0 x16 interface. The card has a single-slot cooling system and measures 168 mm in length [39].
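For experiments that toggle GPU support on the same machine, TensorFlow can report and restrict the visible devices; the following is a small sketch (not taken from the paper's code):

```python
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# For the "without GPU" runs, the GPU can be hidden before any ops are created,
# forcing all training onto the CPU:
# tf.config.set_visible_devices([], "GPU")
```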
## IV Results and Discussions
We present results derived from the Neural Network models Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, VGG-16, XGBoost-VGG-16, and our proposed model; the six transfer learning models have 21,770,401, 33,853,225, 6,954,881, 3,208,001, 23,796,993, and 14,977,857 trainable parameters, respectively. Since the trainable parameters of each model differ, the accuracy and execution time differ from model to model. We used evaluation metrics such as Accuracy, Precision, Recall, and F1 Score. All experiments were carried out on two devices, with and without GPU support: a Dell G15 15.6" FHD 120Hz Gaming Laptop (Device 1) and Google Colab (Device 2). We denote Device 1 with GPU as \(D_{1}^{GPU}\), Device 1 without GPU as \(D_{1}^{CPU}\), Device 2 with GPU as \(D_{2}^{GPU}\), and Device 2 without GPU as \(D_{2}^{CPU}\). Table 1, Table 2, Table 3, and Table 4 show the classification results derived from the different Neural Network models for the two devices with and without GPU. From the experiments, we found that the execution time on Device 1 is considerably lower than on Device 2, both with and without GPU support. Moreover, the GPU accelerates execution significantly on both devices. For Device 1 without GPU, VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed model took 5h 27min, 3h 2min, 4h 42min, 2h 9min, 3h 20min, 24min 29s, 31min 23s, and 6h 55min, respectively. On Device 1 with GPU, the same models took 33min 42s, 25min 9s, 23min 5s, 18min 30s, 25min 33s, 5min 58s, 8min 49s, and 45min 22s, respectively. Therefore, Device 1 with GPU support takes 4h 54min, 2h 6min, 4h 19min, 1h 51min, 2h 55min, 19min, and 6h 10min less time for VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed model. Moreover, the accuracy also increases when training the Neural Networks with GPU. For Device 1 without GPU, VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed model achieve accuracies of 0.87, 0.90, 0.89, 0.92, 0.90, 0.89, 0.95, and 0.93, respectively.
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Models & Accuracy & precision & Recall & F1 Score & Execution time \\ \hline VGG16 & 0.87 & 0.90 & 0.89 & 0.85 & 41 min 38s \\ \hline Resnet50 & 0.89 & 0.92 & 0.90 & 0.92 & 35min 19s \\ \hline Densenet & 0.88 & 0.85 & 0.86 & 0.93 & 31min 41s \\ \hline Inceptionv3 & 0.83 & 0.87 & 0.88 & 0.89 & 19min 1s \\ \hline Xceptionv3 & 0.87 & 0.88 & 0.89 & 0.92 & 32min 9s \\ \hline Mobilenet & 0.83 & 0.86 & 0.88 & 0.88 & 12min 2s \\ \hline XGBOOST- VGG16 & 0.93 & 0.91 & 0.95 & 0.97 & 19min 1s \\ \hline Proposed Model & 0.92 & 0.97 & 0.99 & 0.95 & 1hr 3min \\ \hline \end{tabular}
\end{table} TABLE IV: Autism Disease Classification Results Training Different Neural Network Models With GPU support for Device2
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Models & Accuracy & precision & Recall & F1 Score & Execution time \\ \hline VGG16 & 0.87 & 0.90 & 0.91 & 0.92 & 33 min 42s \\ \hline Resnet50 & 0.93 & 0.94 & 0.93 & 0.96 & 25min 9s \\ \hline Densenet & 0.92 & 0.91 & 0.95 & 0.90 & 23min 5s \\ \hline Inceptionv3 & 0.94 & 0.92 & 0.95 & 0.92 & 18min 30s \\ \hline Xception & 0.90 & 0.90 & 0.99 & 0.95 & 25min 3s \\ \hline Mobilenet & 0.91 & 0.90 & 0.94 & 0.93 & 5min 58s \\ \hline XGBOOST- VGG16 & 0.94 & 0.98 & 0.97 & 0.99 & 8min 49s \\ \hline Proposed Model & 0.97 & 0.99 & 1.00 & 0.99 & 45min 22s \\ \hline \end{tabular}
\end{table} TABLE II: Autism Disease Classification Results Training Different Neural Network Models With GPU support for Device1
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Models & Accuracy & Precision & Recall & F1 Score & Execution time \\ \hline VGG16 & 0.87 & 0.90 & 0.91 & 0.92 & 5h 27min \\ \hline Resnet50 & 0.90 & 0.98 & 1.00 & 0.92 & 3h 22min \\ \hline Densenet & 0.89 & 0.91 & 0.92 & 0.90 & 4h 42min \\ \hline Inceptionv3 & 0.92 & 0.99 & 0.94 & 0.91 & 2h 9min \\ \hline Xception & 0.90 & 0.90 & 1.00 & 0.92 & 3h 20min \\ \hline Mobilenet & 0.89 & 0.90 & 0.91 & 0.87 & 24min 29s \\ \hline XGBOOST-VGG16 & 0.95 & 1.00 & 1.00 & 0.99 & 31min 23s \\ \hline Proposed Model & 0.93 & 0.99 & 1.00 & 0.97 & 6h 18min \\ \hline \end{tabular}
\end{table} TABLE I: Autism Disease Classification Results Training Different Neural Network Models Without GPU support for Device1
## V Conclusion
In this study, we classified Autism Spectrum Disorder using face images of autistic children. Furthermore, we investigated the impact of different CPU and GPU hardware configurations on the functionality and energy usage of CNNs. Our findings show that GPUs outperform CPUs in terms of accuracy and required time in all cases. These findings can be applied to the development of energy-efficient neural network designs and deep CNN frameworks. The measurements reveal improved performance due to the parallelized nature of GPU functions.
|
2309.02563 | Evaluation Kidney Layer Segmentation on Whole Slide Imaging using
Convolutional Neural Networks and Transformers | The segmentation of kidney layer structures, including cortex, outer stripe,
inner stripe, and inner medulla within human kidney whole slide images (WSI)
plays an essential role in automated image analysis in renal pathology.
However, the current manual segmentation process proves labor-intensive and
infeasible for handling the extensive digital pathology images encountered at a
large scale. In response, the realm of digital renal pathology has seen the
emergence of deep learning-based methodologies. However, very few, if any, deep
learning based approaches have been applied to kidney layer structure
segmentation. Addressing this gap, this paper assesses the feasibility of
performing deep learning based approaches on kidney layer structure
segmentation. This study employs the representative convolutional neural
network (CNN) and Transformer segmentation approaches, including Swin-Unet,
Medical-Transformer, TransUNet, U-Net, PSPNet, and DeepLabv3+. We
quantitatively evaluated six prevalent deep learning models on renal cortex
layer segmentation using mice kidney WSIs. The empirical results stemming from
our approach exhibit compelling advancements, as evidenced by a decent Mean
Intersection over Union (mIoU) index. The results demonstrate that Transformer
models generally outperform CNN-based models. By enabling a quantitative
evaluation of renal cortical structures, deep learning approaches are promising
to empower these medical professionals to make more informed kidney layer
segmentation. | Muhao Liu, Chenyang Qi, Shunxing Bao, Quan Liu, Ruining Deng, Yu Wang, Shilin Zhao, Haichun Yang, Yuankai Huo | 2023-09-05T20:24:27Z | http://arxiv.org/abs/2309.02563v1 | Evaluation Kidney Layer Segmentation on Whole Slide Imaging using Convolutional Neural Networks and Transformers
###### Abstract
The segmentation of kidney layer structures, including cortex, outer stripe, inner stripe, and inner medulla within human kidney whole slide images (WSI) plays an essential role in automated image analysis in renal pathology. However, the current manual segmentation process proves labor-intensive and infeasible for handling the extensive digital pathology images encountered at a large scale. In response, the realm of digital renal pathology has seen the emergence of deep learning-based methodologies. However, very few, if any, deep learning based approaches have been applied to kidney layer structure segmentation. Addressing this gap, this paper assesses the feasibility of performing deep learning based approaches on kidney layer structure segmentation. This study employs the representative convolutional neural network (CNN) and Transformer segmentation approaches, including Swin-Unet, Medical-Transformer, TransUNet, U-Net, PSPNet, and DeepLabv3+. We quantitatively evaluated six prevalent deep learning models on renal cortex layer segmentation using mice kidney WSIs. The empirical results stemming from our approach exhibit compelling advancements, as evidenced by a decent Mean Intersection over Union (mIoU) index. The results demonstrate that Transformer models generally outperform CNN-based models. By enabling a quantitative evaluation of renal cortical structures, deep learning approaches are promising to empower these medical professionals to make more informed kidney layer segmentation.
Kidney mice cortex, Segmentation, Deep learning, CNNs, Transformer models Corresponding author: Yuankai Huo: E-mail: [email protected]
## 1 Introduction
Digital pathology has proven its prowess in quantifying digitized tissues using whole slide images (WSIs). It is acknowledged for facilitating remote analysis of diseased tissues, replacing labor-intensive manual pathology assessments [1] with computer-assisted methodologies. Region segmentation, a specialized branch of semantic segmentation, is of particular significance [2] and is vital for precise diagnosis and treatment. Yet, the task of segmenting WSIs, particularly the intricate kidney layer structures, presents challenges due to cell similarity across distinct regions.
This study aims to bridge this gap by conducting a comparative analysis of three widely used CNN models (U-Net[3], PSPNet[4], DeepLabv3+[5]) and three Transformer models (Swin-Unet[6], TransUNet[7], Medical-Transformer[8]) on the kidney layer segmentation task. We compare these widely used approaches to offer an overview of how deep learning performs on kidney layer segmentation. To our knowledge, we are among the pioneers in evaluating deep learning's segmentation capabilities for kidney layer structures.
To underscore the validation aspect of this research, we conducted a comprehensive series of experiments, including evaluations of different models, comparisons between CNNs and Transformer models, and tests against varying complexities within the dataset. By comparing the effectiveness of CNNs and Transformer models on mice kidney cortex segmentation, our study not only opens avenues for future research in this realm but also provides a reference for researchers grappling with model selection in kidney layer segmentation applications.
## 2 Method
### Overall framework
Fig.1 shows the overall framework of this study, including five main steps: (1) Annotations are imported onto the tissue image, exported as labels in GeoJSON format, and simultaneously, tissue images are exported in JPG format. (2) The exported labels are assigned different pixel values. Both labels and whole slide images (WSIs) are then downsampled by a factor of 3. Subsequently, the view is segmented into patches of 1024\(\times\)1024 pixels, using a half-step sliding window for smoother results. Patches with no pixel information are discarded, and incomplete patches are resized to maintain uniformity. (3) The page patches and label patches are matched and collated. (4) The collated patches are fed into various models for training, with temporary best-performing models saved every 10 epochs for potential later use. (5) The top-performing model is then used for prediction on new patches of size 1024\(\times\)1024, downsampled by a factor of 1. The model's predictions are overlaid and merged, yielding the final predicted WSIs.
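A minimal sketch of the tiling in step (2) is given below, assuming the downsampled WSI is a NumPy RGB array; the paper resizes incomplete border patches, while padding is shown here as a simple stand-in.

```python
import numpy as np

def tile_wsi(wsi: np.ndarray, patch: int = 1024, step: int = 512):
    """Yield (row, col, tile) with a half-step sliding window (step = patch // 2)."""
    h, w = wsi.shape[:2]
    for r in range(0, h, step):
        for c in range(0, w, step):
            tile = wsi[r:r + patch, c:c + patch]
            if tile.max() == 0:
                continue  # discard patches with no pixel information
            if tile.shape[:2] != (patch, patch):  # incomplete border patch
                pad = ((0, patch - tile.shape[0]), (0, patch - tile.shape[1]), (0, 0))
                tile = np.pad(tile, pad)  # pad (the paper resizes instead)
            yield r, c, tile
```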
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Acronym & Year & Backbone & Download URL \\ \hline U-Net[3] & U-Net & 2015 & VGG[9] & Link \\ \hline DeepLabv3+[5] & v3+ & 2018 & MobileNetV2[10] & Link \\ \hline Pyramid Scene Parsing Net[4] & PSPNet & 2016 & ResNet50[11] & Link \\ \hline \hline \end{tabular}
\end{table}
Table 1: CNN-Based Models
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Acronym & Year & Backbone & Download URL \\ \hline TransUNet[7] & TUnet & 2021 & Vision Transformer[12] & Link \\ \hline Swin-Unet[6] & SUnet & 2021 & Swin Transformer[13] & Link \\ \hline Medical- & MedT & 2021 & Gated Axial-Attention[8] & Link \\ Transformer[8] & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Transformer-Based Models
Figure 1: This figure shows general steps of the experiment. The WSIs are first tiled into patches. Then, six prevalent deep learning segmentation approaches are applied to all patches. Last, the patch-level results are assembled to WSI-level.
### CNN-Based methods
Three CNN-based models and three Transformer-based models, for which open-source code is available, are utilized in this study:
1. **U-Net3**: This network has a U-shaped design that uses convolutions, pooling, and up-convolutions. While it can handle segmentation tasks and learn from few labeled images, it requires a lot of computing power and can be expensive.
2. **PSPNet4**: This network uses a pyramid pooling module for segmentation, which may enhance performance under complex conditions. It excels at grasping the overall context without overlooking important details as it processes various regions of the image. However, it demands significant computational resources, which can increase memory usage and slow down processing.
3. **DeepLabv3+5**: It employs atrous convolutions and atrous spatial pyramid pooling (ASPP) to grasp the image context across various scales. Atrous convolution enables effectively capturing a broader context without adding computational demands. While ASPP integrates different scales, this increases the computational load, balancing out the benefits of using atrous convolution.
### Transformer-Based methods
1. **TransUNet7**: This framework uses a mix of CNN-Transformer architecture, blending the global contextualization benefits of transformers with the high-resolution spatial information strengths of CNNs.
Each model was trained over 100 epochs, during which the entire set of approximately 4000 training patches was fed into the network in batches of 8. After each epoch, the total loss (Total_Loss) and validation loss (Val_Loss) were calculated, and the model achieving the lowest validation loss was saved for later evaluation. The performance of this model was then assessed against the complete test dataset using the mean Intersection over Union (mIoU) metric, as defined in Equation 1. Alongside mIoU, we also report the Dice Similarity Coefficient (DSC), a critical metric for evaluating the spatial overlap between the predicted segmentation and the manual segmentation, ensuring a comprehensive evaluation of the model's performance. Furthermore, to corroborate the robustness of each model, tests were conducted on an unseen dataset of 20 whole slide images (WSIs). The DSC indices generated from these WSIs for each model were used to calculate descriptive statistics, including the mean, median, and standard deviation, shown in Table 3. These measures provide a nuanced understanding of the performance consistency and reliability of each model. All relevant data have been tabulated and are discussed in the following sections.
\[\text{mIoU}=\frac{1}{n}\sum\frac{A\cap B}{A\cup B} \tag{1}\]
where \(n\) is the number of classes. \(A\) is the area of prediction for a given class. \(B\) is the area of the manual segmentation for that class. \(\cap\) represents intersection (common area between prediction and manual segmentation). \(\cup\) represents union (total area covered by both prediction and manual segmentation). \(\sum\) represents the sum over all classes[14]. mIoU index is often used as a standard of performance by models in semantic segmentation, given its ability to take both false positives and false negatives into account, providing a comprehensive view of the model's performance. We compared the above three CNNs with these three transformer models.
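A minimal NumPy sketch of the mIoU in Equation 1 together with the DSC follows, assuming `pred` and `target` are integer label maps of identical shape.

```python
import numpy as np

def miou_and_dice(pred: np.ndarray, target: np.ndarray, n_classes: int):
    ious, dices = [], []
    for c in range(n_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious.append(inter / union)                     # IoU = |A ∩ B| / |A ∪ B|
        dices.append(2 * inter / (p.sum() + t.sum()))  # DSC = 2|A ∩ B| / (|A| + |B|)
    return float(np.mean(ious)), float(np.mean(dices))
```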
## 4 Results
The performance of each selected model on kidney layer segmentation is reported in Table 3 (the models themselves are listed in Table 1 and Table 2). Each mIoU index displayed in Table 3 represents an average across all ten test slides. The DSC of the six models is visualized in Fig. 3. Additionally, Fig. 2 provides a comparison of the predicted WSIs against their corresponding manual segmentation for four selected test slides. Based on the mIoU values, the three Transformer models in this study generally outperform the three CNN models. Among the Transformer models, Swin-Unet exhibits the best performance with an accuracy of 92.2%. This superior performance may be attributed to the model's unique architecture, which effectively combines the benefits of the Transformer's global attention mechanism with the strong local feature extraction ability of U-Net. Moreover, Trans-Unet has the most stable performance, as indicated by its standard deviation of 12.8. During training, Trans-Unet may have reached a better, more stable region of the solution space, leading to more consistent results. It is also possible that the specific features of the mice kidney cortex dataset are particularly well suited to the Trans-Unet model, leading to more consistent performance. Furthermore, while Swin-Unet is a purely Transformer-based network, Trans-Unet employs a hybrid CNN-Transformer architecture. This suggests that the choice between hybrid and purely Transformer-based architectures is not, by itself, the decisive factor in a model's performance.
\begin{table}
\begin{tabular}{l c c c c} \hline Model & mIoU & median DSC & mean DSC & standard deviation DSC \\ \hline U-Net & 83.7 & 65.5 & 61.8 & 14.2 \\ \hline PSPNet & 83.7 & 71.0 & 67.5 & 15.9 \\ \hline DeepLabv3+ & 81.6 & 61.9 & 60.9 & 14.9 \\ \hline MedT & 88.5 & 72.3 & 70.1 & 17.0 \\ \hline Swin-Unet & **92.2** & **81.0** & **77.4** & 14.1 \\ \hline Trans-Unet & 91.9 & 80.1 & 76.0 & **12.8** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of Model Performances (%)
## 5 Conclusions
This research conducts a comparison between prevalent CNN and Transformer-based segmentation algorithms for kidney layer segmentation. The outcomes reveal that Transformer Models generally outperform CNNs in terms of performance. Notably, the Swin-Unet model attains the highest overall accuracy. This investigation underscores the significance of selecting appropriate models when handling complex datasets, showcasing the promising capabilities of Transformer models. It is anticipated that this study will serve as a point of reference for forthcoming research on the novel task of kidney layer segmentation, thus providing valuable assistance to future endeavors in this domain.
Figure 3: This figure shows the DSC scores of all six models on 20 WSIs
Figure 2: This figure shows comparisons between manual segmentation and predictions.
## 6 Acknowledgments
This work has not been submitted for publication or presentation elsewhere. This work is supported in part by NIH R01DK135597(Huo), DoD HT9425-23-1-0003(HCY), and NIH NIDDK DK56942(ABF).
|
2302.09977 | Dynamic Graph Neural Network with Adaptive Edge Attributes for Air
Quality Predictions | Air quality prediction is a typical spatio-temporal modeling problem, which
always uses different components to handle spatial and temporal dependencies in
complex systems separately. Previous models based on time series analysis and
Recurrent Neural Network (RNN) methods have only modeled time series while
ignoring spatial information. Previous GCNs-based methods usually require
providing spatial correlation graph structure of observation sites in advance.
The correlations among these sites and their strengths are usually calculated
using prior information. However, due to the limitations of human cognition,
limited prior information cannot reflect the real station-related structure or
bring more effective information for accurate prediction. To this end, we
propose a novel Dynamic Graph Neural Network with Adaptive Edge Attributes
(DGN-AEA) on the message passing network, which generates the adaptive
bidirected dynamic graph by learning the edge attributes as model parameters.
Unlike prior information to establish edges, our method can obtain adaptive
edge information through end-to-end training without any prior information.
Thus reduced the complexity of the problem. Besides, the hidden structural
information between the stations can be obtained as model by-products, which
can help make some subsequent decision-making analyses. Experimental results
show that our model achieves state-of-the-art performance compared with other baselines. | Jing Xu, Shuo Wang, Na Ying, Xiao Xiao, Jiang Zhang, Yun Cheng, Zhiling Jin, Gangfeng Zhang | 2023-02-20T13:45:55Z | http://arxiv.org/abs/2302.09977v1 | # Dynamic Graph Neural Network with Adaptive Edge Attributes for Air Quality Prediction
###### Abstract
Air quality prediction is a typical Spatio-temporal modeling problem, which always uses different components to handle spatial and temporal dependencies in complex systems separately. Previous models based on time series analysis and Recurrent Neural Network (RNN) methods have only modeled time series while ignoring spatial information. Previous GCN-based methods usually require providing the spatial correlation graph structure of observation sites in advance. The correlations among these sites and their strengths are usually calculated using prior information. However, due to the limitations of human cognition, limited prior information cannot reflect the real station-related structure or bring more effective information for accurate prediction. To this end, we propose a novel Dynamic Graph Neural Network with Adaptive Edge Attributes (DGN-AEA) on the message passing network, which generates the adaptive bidirected dynamic graph by learning the edge attributes as model parameters. Unlike establishing edges from prior information, our method can obtain adaptive edge information through end-to-end training without any prior information, thus reducing the complexity of the problem. Besides, the hidden structural information between the stations can be obtained as a model by-product, which can support subsequent decision-making analyses. Experimental results show that our model achieves state-of-the-art performance compared with other baselines.
keywords: Air quality prediction, Adaptive graph learning, Dynamic Graph, Message Passing Neural Networks
## 1 Introduction
Air quality has a significant impact on our daily lives. People who breathe clean air sleep better and are less likely to die prematurely from diseases such as cardiovascular and respiratory disorders, as well as lung cancer [1; 2]. One of the key factors that decreases air quality is PM\({}_{2.5}\) (atmospheric particulate matter (PM) having a diameter of 2.5 \(\mu\)m or less), which can easily be inhaled and cause damage to the human body. Thus, monitoring and forecasting PM\({}_{2.5}\) concentrations are critical for improving air quality. With the rapid development of industry, a significant amount of energy is consumed, resulting in massive PM\({}_{2.5}\) emissions [3]. According to [4], in 2016, 38 days were heavily polluted due to PM\({}_{2.5}\) emissions in Beijing. High PM\({}_{2.5}\) concentrations may cause serious adverse health impacts and diseases [5], such as cardiac and pulmonary disease [6], detrimental effects on birth outcomes [7], and infant mortality [8]. Fortunately, to monitor and record air quality data, a large number of low-cost air quality sensors have been deployed, which makes it possible for researchers to perform accurate air quality prediction tasks. Accurate air quality predictions are useful; for example, individual activity arrangements and government pollution restrictions can both benefit from them [9]. When the predicted PM\({}_{2.5}\) concentrations are too high, people can avoid going out and policymakers can modify policies accordingly.
Conventional air quality prediction approaches can be generally divided into two categories: the knowledge-driven approach and the data-driven approach. In the past two decades, the knowledge-driven approach has been widely adopted for air quality prediction. Representative examples of this approach include the community multiscale air quality (CMAQ) [10], comprehensive air quality model with extensions (CAMx) [11], and weather research and forecasting/chemistry (WRF-Chem) [12] modeling systems. This knowledge-driven approach does not necessitate a large number of historical observations, and its prediction performance is determined by how well the model fits the actual conditions. However, in the real world, air quality data change dramatically, which makes it difficult to fit such models to actual conditions. In addition, air quality is affected by various factors, such as the weather, altitude, and the wind field. Under these circumstances, knowledge-driven approaches often do not yield good predictions.
Recently, data-driven approaches have shown great performance for air quality prediction. As a conventional data-driven approach, statistical methods have been widely adopted for their simple structure. Yi et al. [9] applied the autoregressive integrated moving average (ARIMA) model to capture the trend of
air quality time series in New Delhi. Naveen et al. [13] then adopted the seasonal autoregressive integrated moving average (SARIMA), which can capture the seasonal features of time series, to predict the air quality in Kerala. However, due to the complexity and uncertainty of air quality prediction tasks, it is difficult for statistical methods to perform well for long-term predictions. Different from statistical methods, machine learning methods are non-parametric methods that can automatically learn new patterns and thus handle the complex non-linearity of temporal data. In recent years, machine learning methods have been widely employed for air quality prediction, including support vector regression (SVR) [14], the extreme gradient boosting (XGBoost) [15] algorithm, and the random forest approach [16], etc. However, these methods do not take into account the spatio-temporal correlations, thereby limiting their prediction performance.
To extract the spatio-temporal correlations, deep learning methods have been applied for air quality forecasting. Wen et al. [17] proposed a spatiotemporal convolutional long short-term memory (LSTM) neural network to capture the temporal and spatial dependencies. The temporal patterns were captured through the LSTM networks and the spatial dependencies were extracted by the convolutional neural networks (CNNs). Zhang et al. [18] then modeled the spatio-temporal correlations with the CNN model and gated recurrent units (GRUs). The above methods can provide satisfactory prediction results; nevertheless, CNNs are not suited to modeling non-Euclidean structured data, so the spatial relationships between air sensors cannot be effectively modeled.
Most recently, graph-based deep learning methods have gained popularity since they can process non-Euclidean structured data by representing it as a graph for training. Wang et al. [19] and Zhang et al. [18] separately employed graph convolutional networks (GCNs) to model the contextual relationships among air quality stations and further predict future air quality. Within a relatively short period, this modeling approach has become very successful.
Models based on GCNs need the graph structure to be constructed in advance. Traditional methods for constructing graph structures are usually based on prior knowledge and can be divided into three categories: methods based on geographic distance, time-series similarity [19], and wind field information [20]. However, not all relevant factors can be enumerated in advance. Besides, parallel learning of too many graphs may result in too many parameters and high computational costs. In short, inaccurate prior information may lead us to incorrectly connect two unrelated stations or lose links between two related stations. Moreover, the contextual relationships are constantly changing due to the impacts of the wind field and other factors. Therefore, a dynamic graph is more suitable for modeling the relationships among stations in the real world. To overcome these limitations, we develop a new method that learns the dynamic links between stations automatically.
In this paper, we propose to construct a Dynamic Graph Neural Network with Adaptive Edge Attributes (DGN-AEA). First, to address the shortcomings of prior information, we propose a self-adaptive dynamic graph learning method. A dynamic adjacency matrix, in which the connection relationships between nodes change over time, would bring instability and difficulty to model training. We therefore divide the adjacency matrix into two parts, a connection relation (topology) matrix and a weight matrix, and propose an adaptive edge attribute (weight) matrix. Experiments show that the adaptive edge attributes improve the prediction results. Second, to address the physical consistency problem of many existing deep learning models, we design a dynamic edge construction method based on wind field information and combine it with the adaptive edges through multi-graph stitching. In this way, the learnable edges act as a correction to the prior information, which helps the model get rid of the one-sidedness of prior knowledge. Third, when aggregating a neighbor node's information, we compute the outbound and inbound directions separately, thereby simulating the inflow and outflow processes in the diffusion of pollutants. In summary, the contributions of this paper are listed as follows.
* We introduce the adaptive dynamic graph learning unit to learn the dual-path weighted edges automatically, to solve the problem of correlation graph modeling in non-Euclidean space.
* The wind field data can be integrated into our model as a type of directed dynamic connection by a Multi-Graph Process Block (MGP). The physical consistency of the model is improved in this way.
* For each node, we calculate its in-degree and out-degree separately to model convolution calculations on weighted directed graphs, which is more suitable for complex systems in the real world.
* The proposed DGN-AEA model improves the prediction capabilities and achieves state-of-the-art prediction accuracy.
The remainder of this paper is organized as follows. In Section 2, we introduce the method used to construct the adaptive dynamic graph and our proposed DGN-AEA model. In Section 3, we describe the data used in our research and how we design experiments to verify the performance of DGN-AEA on a real-world air quality dataset. In Section 4, we show the experimental results and discuss what makes DGN-AEA perform better. Finally, we conclude and discuss future work in Section 5.
## 2 Method used
In this section, we first give the mathematical definition of the air quality prediction problem. Next, we describe how we construct the two kinds of dynamic graphs. Then, as illustrated in Figure 2, we introduce the DGN-AEA model, which is designed to solve the adaptive graph learning problem. We detail how we leverage the framework of spatial-domain GCNs to handle message passing on directed edges, how the spatial Multi-Graph Process Block (MGP) combines the adaptive edge attributes and the wind graph with MLPs, and how the temporal block (a GRU) works. Finally, we form the stacked model, in which the spatial and temporal blocks work together to capture the spatio-temporal dependencies among cities.
### Problem definition
Air quality prediction can be seen as a typical spatial-temporal prediction problem. Let \(X^{t}\in R^{N}\) denote the observed PM\({}_{2.5}\) concentrations at time step \(t\). GCN-based methods usually model the changing spatial correlations among different cities by a dynamic directed graph \(G^{t}=(V,E^{t})\), where \(V\) is the set of nodes, whose number is always \(N\), and \(E^{t}\) is the set of weighted edges representing the potential interactions among cities, whose weights may change over time. Let \(S^{t}\in R^{N\times s}\) denote the node attributes and \(Z^{t}\in R^{N\times z}\) denote the edge attributes at time step \(t\), where \(s\) and \(z\) represent the dimensions of the node and edge features, respectively. The problem aims to predict the next \(T\) steps of PM\({}_{2.5}\) concentrations \([X^{t+1},...,X^{t+T}]\) based on the node attributes \([S^{t+1},...,S^{t+T}]\) and the edge attributes \([Z^{t+1},...,Z^{t+T}]\). The mapping between the input and output can be written as follows:
\[\left[X^{t};S^{t+1},\ \cdots,\ S^{t+T};Z^{t+1},\cdots,Z^{t+T}\right]\overset{f( \cdot)}{\longrightarrow}\left[\hat{X}^{t+1},\ \cdots,\ \hat{X}^{t+T}\right] \tag{1}\]
where \(\hat{X}^{t}\) represents the predicted vector, and \(f(\cdot)\) is the prediction function based on the DGN-AEA framework.
### Dynamic graph construction
In the air quality forecasting problem, we need to predict the future steps of all the cities. So the number of nodes will not change with time, which is different from some evolving dynamic graph problems [21; 22; 23].
We define the weighted adjacency matrix \(A\), which can be divided into two parts: a topology matrix (\(P\)) of 0s and 1s indicating whether two nodes are connected, and a weight matrix (representing the edge attributes \(Z\)) indicating the strength of the mutual influence between nodes. When training a neural network, an adjacency matrix whose connections change over time brings great instability. Therefore, we use the adaptive edge attributes to represent the changing node interactions: they do not change the connection relationships between nodes but change the strength of those connections, so the topology matrix is static while the adjacency matrix is dynamic. As in many previous graph-based air quality prediction approaches, we assume that the connections among cities are determined by the Euclidean distance (Equation 5) and the altitude.
#### 2.2.1 Topology and adjacency matrix
As is well known, the pollutant concentrations (such as PM\({}_{2.5}\) and PM\({}_{10}\)) in one place are strongly affected by adjacent areas. Considering that such relationships in the real world are usually sparse and heterogeneous, to model these two spatial correlation features explicitly we define the adjacency matrix \(A\) of graph \(G\) as:
\[A=P\odot Z, \tag{2}\]
where \(\odot\) represents the Hadamard product and \(Z\) represents the edge attribute matrix. The formulation of \(Z\) is given in the following sections.
As for the topology matrix \(P\), we account for the effect of distance on station relevance, since the influence of a station is inversely proportional to the distance. Furthermore, when the altitude difference between two stations is too large, the connection is blocked. To incorporate these two factors, we use Heaviside step functions to filter out the edges that do not satisfy the rules, as follows:
\[P_{ij}=H\left(\xi_{d}-d\right)\cdot H\left(\xi_{a}-a\right), \tag{3}\]
where \(H(\cdot)\) is the Heaviside step function:
\[H\left(x\right)=\begin{cases}1,&\text{x>0}\\ 0,&\text{otherwise}\end{cases}, \tag{4}\]
where \(a\) represents the altitude difference between two nodes, and \(d\) is the Euclidean distance calculated from the relative positions of the two stations:
\[d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}. \tag{5}\]
Here the distance threshold \(\xi_{d}\) and the altitude threshold \(\xi_{a}\) are set to 300 km and 1.2 km, respectively. This yields the topology matrix \(P\).
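To make the construction concrete, the following is a minimal NumPy sketch of Equations 2-5 (our illustration, not released code); the station coordinates, altitudes, and the placeholder weight matrix \(Z\) are random assumptions, while the thresholds follow the 300 km and 1.2 km values above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 184  # number of city-level stations in the studied dataset

# Hypothetical station locations (km) and altitudes (km); real values come from the data.
xy = rng.uniform(0, 2000, size=(N, 2))
alt = rng.uniform(0, 3.0, size=N)

# Pairwise Euclidean distance (Equation 5) and altitude difference.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
a = np.abs(alt[:, None] - alt[None, :])

# Topology matrix P (Equation 3): keep an edge only when both
# the distance and the altitude difference are below their thresholds.
xi_d, xi_a = 300.0, 1.2
P = ((d < xi_d) & (a < xi_a)).astype(float)
np.fill_diagonal(P, 0.0)  # no self-loops

# Weighted adjacency A = P ⊙ Z (Equation 2), with Z a placeholder weight matrix.
Z = rng.uniform(0, 1, size=(N, N))
A = P * Z
print(int(P.sum()), A.shape)
```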
Figure 1: The distribution of the pollutant observation stations studied in our work. Each point represents one city-level observation station.
#### 2.2.2 Node attributes
The node attributes \(S^{t}\) are mainly meteorological data, comprising the same 17 types of variables as in [20]. We chose 8 of them as the final node attributes: _Temperature_, _Planetary Boundary Layer height_, _K index_, _Relative humidity_, _Surface pressure_, _Total precipitation_, and _the u and v components of wind_. The time interval of these node attributes is consistent with that of the PM\({}_{2.5}\) concentration data, namely 3 h.
#### 2.2.3 Edge attributes
In our work, there are two kinds of edge attributes: one comes from the wind field and the other is an adaptive neural network parameter. We use the advection coefficient as the wind-based attribute and calculate it as follows:
\[Z^{t}_{w}=relu\left(\frac{\left|\vec{v}\right|}{d}\cdot\cos\left(\alpha-\beta \right)\right), \tag{6}\]
where \(\vec{v}\) represents the wind speed at time \(t\), \(d\) is the distance between stations, and \(\alpha\) and \(\beta\) are the angle of the line connecting the two cities and the wind direction, respectively. \(relu(\cdot)\) is the ReLU activation function.
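A small sketch of Equation 6 for a single edge is given below; the wind speed, distance, and angles are hypothetical placeholder values.

```python
import numpy as np

def advection_coefficient(wind_speed, distance_km, city_angle, wind_angle):
    """Wind-based edge attribute (Equation 6): large when the wind blows
    along the source-to-target direction, zero when it blows away from it."""
    raw = (wind_speed / distance_km) * np.cos(city_angle - wind_angle)
    return max(raw, 0.0)  # the ReLU in Equation 6

# Hypothetical example: 6 m/s wind, stations 120 km apart, nearly aligned angles.
print(advection_coefficient(6.0, 120.0, np.deg2rad(30), np.deg2rad(40)))
```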
\(Z^{t}_{a}\in R^{1\times l}\) denotes the adaptive neural network parameters, where \(l\) represents the number of edges, i.e., the number of 1s in the topology matrix \(P\). We treat it as an important parameter that serves as another kind of useful edge attribute, in addition to the wind effects, in the air quality prediction problem. This parameter is obtained by continuous iterative optimization during the training stage.
By setting the adaptive dynamic edge weights as learnable parameters, such dynamic correlations can be learned directly during end-to-end training. Even in practical scenarios where some prior information is missing, the correlation network between sites can still be adaptively learned for spatio-temporal prediction. When wind field information is used, this learnable parameter can be regarded as a supplement to it, further improving the prediction accuracy. The details will be presented in Section 4.
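In a PyTorch-style realization (our sketch, not the authors' released implementation), the adaptive edge attributes can be registered as a trainable tensor so that the optimizer updates them jointly with the network weights; the edge count and initialization scale below are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveEdgeAttributes(nn.Module):
    def __init__(self, num_edges: int):
        super().__init__()
        # One learnable weight per retained edge (the 1s in the topology matrix P),
        # randomly initialized and updated end-to-end by the optimizer.
        self.z_a = nn.Parameter(torch.randn(num_edges) * 0.01)

    def forward(self) -> torch.Tensor:
        return self.z_a

edges = AdaptiveEdgeAttributes(num_edges=1024)
opt = torch.optim.RMSprop(edges.parameters(), lr=5e-4, weight_decay=5e-4)
print(edges().shape)  # torch.Size([1024])
```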
### Dynamic Graph Neural Network with Adaptive Edge Attributes
#### 2.3.1 Graph convolution block
Many dynamic graph neural network methods are based on the spectral domain. The convolution operation on the graph
Figure 2: Model structure of the proposed DGN-AEA.
is equivalent to a product in the spectral domain after the Fourier transform, where the corresponding Fourier basis consists of the eigenvectors of the Laplacian matrix. The ChebNet model [24] uses Chebyshev polynomials to approximate the spectral convolution. However, these methods cannot handle directed graphs, since Laplacian matrices are defined for undirected graphs [25; 26]. They are not suitable for complex system modeling because many relations in complex systems are directed. Besides, the prediction accuracy is limited by the order of the Chebyshev polynomial fit, and in many cases does not perform as well as spatial GCNs [27]. To address these problems, we adopt the spatial-domain GCN, i.e., the Message Passing Neural Network (MPNN).
The MPNN framework can be divided into two stages: a message passing stage and a readout stage [28]. Compared with spectral-domain GCNs, which can only model node attributes, MPNN directly aggregates messages from neighbor nodes and can also model edges, which makes it more flexible and intuitive. For node \(i\) at time \(t\), our GCN block under the MPNN framework works as follows:
\[\varepsilon_{i}^{t}=\left[\hat{X}_{i}^{t-1},S_{i}^{t}\right], \tag{7}\]
\[m_{ij}^{t}=\varphi\left(\left[\varepsilon_{i}^{t},\varepsilon_{j}^{t},Z_{ij} ^{t}\right]\right), \tag{8}\]
\[\varepsilon_{i}^{t}=\omega\left(\sum_{j\in N(i)}\left(m_{ij}^{t}-m_{ji}^{t}\right)\right), \tag{9}\]
where \([\cdot,\cdot]\) denotes the concatenation operator that joins two 1-D vectors into a single vector. Equation 7 concatenates the previous prediction with the node attributes to form the input representation \(\varepsilon_{i}^{t}\). \(\varphi(\cdot)\) in Equation 8 is one MLP layer, and \(N(i)\) denotes the neighbors of node \(i\); Equation 8 computes the messages along edges according to the edge weights, where \(m_{ij}\) and \(m_{ji}\) represent the incoming and outgoing information of the node, respectively. In Equation 9, \(\omega(\cdot)\) is another MLP layer; after Equation 9 we obtain the net increase or decrease at each node resulting from the message passing process.
In our proposed method, we use two kinds of edge attributes. For each edge attribute, \(Z\) may represent \(Z_{w}\) or \(Z_{a}\); following Equation 9, we obtain \(\varepsilon_{w}\) and \(\varepsilon_{a}\), respectively. Next, we concatenate the two graph-level embeddings through the transfer layer:
\[\zeta^{t}=\psi\left(\left[\varepsilon_{w}^{t},\varepsilon_{a}^{t}\right] \right). \tag{10}\]
It should be noted that \(Z_{a}\) is a learnable parameter in our model and is updated during the training stage. After training, we obtain the adaptive edge attributes \(Z_{AEA}\).
Since the PM\({}_{2.5}\) transport network is a directed graph, in order to respect the conservation of matter at source and sink nodes, we compute the message aggregation over each node's incoming and outgoing edges separately. This is consistent with the physical process of pollutant diffusion and improves the prediction accuracy of the model. The specific calculation process is illustrated in Figure 3.
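The sketch below illustrates Equations 7-9 with separate aggregation of incoming and outgoing messages; it is our minimal PyTorch rendering, and the dimensions and the edge-list representation are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DirectedMPNN(nn.Module):
    """Sketch of Equations 7-9: message passing with separate
    incoming and outgoing aggregation on a directed graph."""
    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden_dim), nn.ReLU())
        self.omega = nn.Linear(hidden_dim, node_dim)

    def forward(self, eps, edge_index, z):
        # eps: [N, node_dim] node states; edge_index: [2, E] (src, dst); z: [E, edge_dim]
        src, dst = edge_index
        m = self.phi(torch.cat([eps[src], eps[dst], z], dim=-1))  # one message per edge (Eq. 8)
        agg = torch.zeros(eps.size(0), m.size(-1), device=eps.device)
        agg.index_add_(0, dst, m)   # inflow: messages arriving at each node
        agg.index_add_(0, src, -m)  # outflow: the same mass leaves the source
        return self.omega(agg)      # net change per node (Eq. 9)

mpnn = DirectedMPNN(node_dim=9, edge_dim=1, hidden_dim=32)
eps = torch.randn(5, 9)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
z = torch.randn(3, 1)
print(mpnn(eps, edge_index, z).shape)  # torch.Size([5, 9])
```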
#### 2.3.2 Temporal processing block
Our DGN-AEA model in Figure 2 can be seen as a stacked structure: information on the graph is first processed by the MPNN, which is good at processing graph-structured information, and the aggregated results are then fed into the GRU, which is good at iterative time series prediction. Such a structure extracts effective information in the spatial and temporal domains separately and clearly. Here we introduce the temporal processing block, the GRU. Its input includes the historical PM\({}_{2.5}\) concentrations of the nodes and future meteorological data, which are embedded in the node embedding \(\varepsilon_{i}^{t}\), together with the information aggregated on the graph. The prediction process for node \(i\) at time \(t\) is as follows:
\[c_{i}^{t}=\left[\varepsilon_{i}^{t},\zeta_{i}^{t}\right]. \tag{11}\]
This concatenates the input variables (the result after graph convolution and the node features containing the meteorological variables) to prepare for the subsequent matrix operations:
\[q_{i}^{t}=\sigma\left(W_{q}\cdot\left[h_{i}^{t-1},c_{i}^{t}\right]\right), \tag{12}\]
\[r_{i}^{t}=\sigma\left(W_{r}\cdot\left[h_{i}^{t-1},c_{i}^{t}\right]\right), \tag{13}\]
where \(q_{i}^{t}\) and \(r_{i}^{t}\) in Equations 12 and 13 are the outputs of the update gate and reset gate of the GRU, respectively. \(\sigma(\cdot)\) represents the sigmoid activation function, \(h_{i}^{t-1}\) is the hidden state of the previous time step, and \(c_{i}^{t}\) is the aggregated input feature from Equation 11. The update gate controls the degree to which the state information of the previous moment is brought into the current state; the amount of information carried over is positively related to the value of the update gate. The reset gate controls how much information from the previous state is written to the current candidate set \(\widetilde{h}_{i}^{t}\); the smaller the reset gate, the less information from the previous state is written. \(\widetilde{h}_{i}^{t}\) is calculated by:
\[\widetilde{h}_{i}^{t}=tanh\left(\widetilde{W}\cdot\left[r_{i}^{t}*h_{i}^{t-1}, c_{i}^{t}\right]\right), \tag{14}\]
where \(W_{q}\), \(W_{r}\), \(\widetilde{W}\) in Equation 12 to Equation 14 are learnable parameters. After the gate control signal is obtained, we
Figure 3: Illustration of how our MPNN processes incoming and outgoing edges separately (using node a as an example). Different arrow colors represent the inflow and outflow processes.
first use the reset gate to obtain the "reset" data: the Hadamard product of the reset gate \(r_{i}^{t}\) and the previous hidden state \(h_{i}^{t-1}\) is concatenated with the input signal \(c_{i}^{t}\) of the current step. We then multiply the result by a learnable matrix and pass it through a _tanh_ activation function to scale it to the range [-1, 1], as shown in Equation 14. In this way, the input information is added to the current hidden state in a targeted manner, which is equivalent to memorizing the state at the current moment.
Finally, Equation 15 performs the memory and update operations at the same time and obtains the updated hidden state:
\[h_{i}^{t}=\left(1-q_{i}^{t}\right)*h_{i}^{t-1}+q_{i}^{t}*\widetilde{h}_{i}^{t}. \tag{15}\]
After the whole operation, we finally get the prediction result of PM\({}_{2.5}\) concentration by:
\[\hat{X}_{i}^{t}=\Omega\left(h_{i}^{t}\right), \tag{16}\]
where \(\Omega\left(\cdot\right)\) is an MLP layer.
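A minimal sketch of Equations 11-16 follows; `torch.nn.GRUCell` already implements the gate computations of Equations 12-15, so we only add the concatenation of Equation 11 and the MLP readout \(\Omega\) of Equation 16. The dimensions are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class GRUPredictionHead(nn.Module):
    """Sketch of Equations 11-16: one GRU step over the concatenated
    graph output and node embedding, followed by an MLP readout."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hidden_dim)  # gates of Eqs. 12-15
        self.omega = nn.Linear(hidden_dim, 1)       # Omega in Eq. 16

    def forward(self, eps_t, zeta_t, h_prev):
        c_t = torch.cat([eps_t, zeta_t], dim=-1)    # Equation 11
        h_t = self.cell(c_t, h_prev)                # Equations 12-15
        return self.omega(h_t), h_t                 # Equation 16 + new hidden state

head = GRUPredictionHead(in_dim=9 + 9, hidden_dim=64)
x_hat, h = head(torch.randn(184, 9), torch.randn(184, 9), torch.zeros(184, 64))
print(x_hat.shape)  # torch.Size([184, 1])
```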
#### 2.3.3 Proposed learning algorithm
We use a stacked spatiotemporal prediction structure. As shown in Figure 2, we first use MPNNs to perform a convolution operation on the PM\({}_{2.5}\) concentration of each site on the graph according to the edge weights. Two MPNNs are used to process the wind field graph and the adaptive dynamic graph, respectively. The two graph processing results are then aggregated through a layer of MLP (as shown in Equations 7 to 10). Next, the output of the entire graph convolution part and the historical meteorological data are input into the GRU for iterative time series processing, as in Equations 11 to 15. Finally, we obtain the future forecast via Equation 16. The whole process of the proposed DGN-AEA is shown in Algorithm 1.
```
0: Historical PM\({}_{2.5}\) concentrations \(X^{0}\); Node attributes \(S=[S^{1},\cdots,S^{T}]\); Edge attributes (by wind) \(Z_{w}=[Z_{w}^{1},\cdots,Z_{w}^{T}]\); Randomly initialized adaptive edge attributes \(Z_{a}=[Z_{a}^{1},\cdots,Z_{a}^{T}]\);
0: Future PM\({}_{2.5}\) concentrations \(\hat{X}=[\hat{X}^{1},\cdots,\hat{X}^{T}]\); Learned adaptive edge attributes \(Z_{AEA}=[Z_{AEA}^{1},\cdots,Z_{AEA}^{T}]\); Evaluation metrics MAE and RMSE.
1:\(\hat{X}^{0}=X^{0}\)
2:\(h^{0}=0\)
3:for each time step \(t\in[1,T]\)do
4:for all\(v_{i}\in\mathbf{V}\)do
5:\(\zeta_{i}^{t}=\psi\left(MPNN1\left(\hat{X}_{i}^{t-1},S_{i}^{t},Z_{w}^{t}\right), MPNN2\left(\hat{X}_{i}^{t-1},S_{i}^{t},Z_{a}^{t}\right)\right)\);
6:\(\hat{X}_{i}^{t}=GRU(\varepsilon_{i}^{t},\zeta_{i}^{t},h_{i}^{t-1})\);
7:\(\hat{X}=[\hat{X},\hat{X}_{i}^{t}]\).
8:endfor
9:endfor
10:Calculate MAE and RMSE following Equation 18 and Equation 19;
11:return\(\hat{X},Z_{AEA}\), MAE, RMSE
```
**Algorithm 1** PM\({}_{2.5}\) Prediction Algorithm
## 3 Data and experiment design
In this section, we will show the details of our selected dataset and experiment settings.
### Experiment settings
Our experiments are conducted on a Linux system with an Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz and an NVIDIA GPU (device 2204, rev a1). The batch size for model training, validation, and testing is 32. All models are trained for up to 50 epochs with an early stopping rule of 10 steps, using the RMSProp optimizer. The learning rate is 5e-4 and the weight decay is also set to 5e-4. All prediction results are averages over 10 repetitions. In the training stage, we minimize the Mean Square Error (MSE) loss function:
\[MSE\;Loss=\;\frac{1}{T}\sum_{t=1}^{T}\left(\frac{1}{N}\sum_{i=1}^{N}\left( \hat{X}_{i}^{t}-X_{i}^{t}\right)^{2}\right), \tag{17}\]
where \(T\) is the length of the prediction horizon and \(N\) represents the number of nodes. \(\hat{X}\) and \(X\) represent the predicted value and the ground truth of the PM\({}_{2.5}\) concentrations, respectively.
To evaluate the prediction accuracy between models, we adopt two evaluation metrics: Mean Absolute Error (MAE) and Root Mean Square Error (RMSE).
\[MAE=\frac{1}{N}\times\sum_{i=1}^{N}\left|\hat{y}_{i}-y_{i}\right|, \tag{18}\]
\[RMSE=\sqrt{\frac{1}{N}\times\sum_{i=1}^{N}\left(\hat{y}_{i}-y_{i}\right)^{2}}, \tag{19}\]
where \(y\) is the ground truth and \(\hat{y}\) represents the prediction results given by the models. These two are commonly used indicators for evaluating the accuracy of time series forecasting.
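For completeness, both metrics can be computed in a few lines of NumPy; the example values below are arbitrary.

```python
import numpy as np

def mae(y_hat, y):
    return np.mean(np.abs(y_hat - y))           # Equation 18

def rmse(y_hat, y):
    return np.sqrt(np.mean((y_hat - y) ** 2))   # Equation 19

y_true = np.array([30.0, 55.0, 80.0])
y_pred = np.array([28.0, 60.0, 75.0])
print(mae(y_pred, y_true), rmse(y_pred, y_true))
```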
### Data used
To examine the ability of the model to solve real problems, we conduct experiments on the real-world dataset from the previous work [20]. This dataset is collected from MEE1 and ERA52. It contains three types of data: the stations' geographic information, meteorological data, and pollutant concentration (PM\({}_{2.5}\)) data. The last two types of variables are time series data ranging from 2015-1-1 00:00:00 to 2018-12-31 23:59:59, with 3 hours per time step. The dataset includes 184 city-level observation stations, as shown in Figure 1.
Footnote 1: [https://english.mee.gov.cn/](https://english.mee.gov.cn/)
Footnote 2: [https://climate.copernicus.eu/climate-reanalysis](https://climate.copernicus.eu/climate-reanalysis)
To test the predictive ability of the model under different circumstances, we divide the data into three parts by time. The
training set of the first dataset consists of the data for the two years 2015 and 2016, and the test and validation sets are the data for the whole of 2017 and 2018, respectively. The training, validation, and test splits of dataset 2 are sequentially taken from the winters of 2015-2018 over three consecutive years (November 1st to February 28th of the following year); winter is usually the season of high PM\({}_{2.5}\) pollution in China, so the average values of these data are higher. Dataset 3 uses the four autumn and winter months of 2016 (September 1st to December 31st) for training, and uses the winter data of the following two months for validation (December 1st to December 31st) and testing (January 1st to January 31st of the following year). The period of 2016 was chosen because that winter saw almost the worst pollution in Chinese history.
### Baselines
In our work, we consider baselines to examine the model effect. Baselines include classical statistical models, classical spatio-temporal prediction models, and state-of-the-art deep learning models with adaptive graph components.
* **HA:** The Historical Average (HA) model is a typical time series analysis model whose main idea is to use the average of all historical values at the corresponding time (over the known data) as the prediction for the future; therefore, there is no notion of a prediction time step. Here we follow the construction method in [29] to compute results on the test set, using all corresponding moments within one week to predict each time point.
* **LSTM:** The long short-term memory (LSTM) [30] model is an improvement of the RNN model, which uses three types of gates to extract more useful related historical data.
* **GC-LSTM:** GC-LSTM [31] is a model which uses two spectral-based GCNs embedded into the long-short term memory model to extract spatio-temporal features from data.
* PM\({}_{2.5}\)**-GNN:** PM\({}_{2.5}\)-GNN [20] is a state-of-the-art prediction model for PM\({}_{2.5}\) concentrations prediction. It also uses the stacked spatio-temporal structure based on GCNs and RNNs.
* **Graph WaveNet (w/o weather):** Graph WaveNet [32] develops a novel adaptive dependency matrix that can automatically capture the spatial dependency from data. It uses a Temporal Convolutional Network (TCN) as the temporal block and has achieved state-of-the-art results on many real-world datasets, especially in traffic flow forecasting. Since the original model uses one-dimensional convolution to operate on only one variable, we do not use multi-dimensional meteorological information when reproducing the model. Results are shown in Section 4.
### Ablation Study
In order to further illustrate the role of the adaptive dynamic edge attribute, we compare the results with models with some parts removed, which are:
* **Static:** To demonstrate the role of dynamic graphs, we conduct experiments using only static graph structures based on distance and altitude calculations.
* **Only AEA:** As described before, DGN-AEA integrates wind edge information. Here the wind information is removed and only adaptive edge attributes are used. This can illustrate the important role of wind in modeling PM\({}_{2.5}\) forecasting.
* **Only Wind:** Contrary to **Only AEA**, here we only use the wind field information and remove the adaptive edge attributes. Similar to the control variables approach, this can illustrate the importance of using adaptive edge attributes.
* **W/O weather:** Here we also compare the effect of not feeding future weather into the GRU module as known information, to illustrate the effect of using future weather.
* **AEA+Wind(ours):** As shown in Figure 2, we will use the multi-graph information of adaptive edge attributes and wind at the same time.
## 4 Results and discussion
### Performance comparisions with baselines
We compare the prediction results, under the evaluation metrics, between our DGN-AEA and the baselines on all three datasets. In addition, we set prediction horizons of 3 (9h), 6 (18h), 12 (36h), and 24 (72h) time steps so that we can compare the predictive ability of the various models under different prediction lengths. The best results are highlighted in boldface in Table 1.
We divided the data into three datasets. By the design of the training set sizes and seasons, the training difficulty across the three datasets can be considered to be increasing. The training set of dataset 3 has the least data and correspondingly larger values, since winter is the high season for haze in China and PM\({}_{2.5}\) values are generally higher.
It can be seen that our model always performs best, while the traditional statistical model HA is not always the worst. GC-LSTM performs slightly better than LSTM; nevertheless, it does not perform well on our dataset overall. On dataset 3, our model DGN-AEA improves the RMSE over GC-LSTM by 6.81%, 6.11%, 5.04%, and 6.7%, respectively. PM\({}_{2.5}\)-GNN performs better than the other baselines, but our model is more accurate: on the RMSE of dataset 3, DGN-AEA is 2.98%, 2.94%, 2.46%, and 6.3% more accurate than PM\({}_{2.5}\)-GNN.
Compared with the other adaptive graph model, Graph WaveNet, which requires a large number of parameters and has a high computational resource overhead, training is slow and the accuracy is also unsatisfactory. Compared with Graph WaveNet, our proposed model DGN-AEA improves the RMSE on dataset 3 by 47.46%, 45.32%, 44.56%, and 43.98%, and the MAE by 38.08%, 36.05%, 37.61%, and 36.99%. We speculate that an adjacency matrix whose construction changes from moment to moment brings great difficulty to training on PM\({}_{2.5}\) datasets, making it difficult for the model to grasp the exact topology of the graph.
\begin{table}
\begin{tabular}{c|c|c|c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{Methods} & \multirow{2}{*}{HA} & \multirow{2}{*}{LSTM} & \multirow{2}{*}{GC-LSTM} & \multirow{2}{*}{PM\({}_{2.5}\)-GNN} & \multirow{2}{*}{DGN-AEA} \\ \cline{5-8} & & & & & & & \\ \hline \multirow{8}{*}{\begin{tabular}{c} **\(\bullet\)** \\ **\end{tabular} } & \multirow{8}{*}{RMSE} & 3 & \multirow{8}{*}{37.26} & 12.17\(\pm\)0.10 & 12.03\(\pm\)0.07 & 11.55\(\pm\)0.11 & **11.34\(\pm\)0.07** \\ & & 6 & & 15.61\(\pm\)0.11 & 15.40\(\pm\)0.07 & 14.76\(\pm\)0.10 & **14.53\(\pm\)0.09** \\ & & 12 & 25.81 & 18.56\(\pm\)0.11 & 18.38\(\pm\)0.09 & 17.70\(\pm\)0.15 & **17.06\(\pm\)0.11** \\ & & 24 & & 20.87\(\pm\)0.16 & 20.81\(\pm\)0.09 & 20.18\(\pm\)0.17 & **19.20\(\pm\)0.15** \\ \cline{2-8} & \multirow{8}{*}{MAE} & 3 & \multirow{8}{*}{37.26} & 9.43\(\pm\)0.09 & 9.31\(\pm\)0.06 & 8.93\(\pm\)0.05 & **8.75\(\pm\)0.05** \\ & & 6 & & 12.44\(\pm\)0.12 & 12.25\(\pm\)0.06 & 11.70\(\pm\)0.10 & **11.50\(\pm\)0.08** \\ & & 12 & 14.88\(\pm\)0.14 & 14.69\(\pm\)0.10 & 14.11\(\pm\)0.17 & **13.56\(\pm\)0.11** \\ & & 24 & & 16.48\(\pm\)0.15 & 16.45\(\pm\)0.08 & 15.90\(\pm\)0.19 & **15.06\(\pm\)0.14** \\ \hline \multirow{8}{*}{\begin{tabular}{c} **\(\bullet\)** \\ **\end{tabular} } & \multirow{8}{*}{RMSE} & 3 & \multirow{8}{*}{18.01\(\pm\)0.17} & 18.30\(\pm\)0.11 & 17.61\(\pm\)0.17 & **17.15\(\pm\)0.07** \\ & 6 & & 23.55\(\pm\)0.22 & 23.65\(\pm\)0.19 & 22.94\(\pm\)0.24 & **22.29\(\pm\)0.09** \\ & & 12 & 28.60\(\pm\)0.23 & 28.547\(\pm\)0.17 & 27.52\(\pm\)0.30 & **26.85\(\pm\)0.11** \\ & & 24 & & 32.82\(\pm\)0.27 & 33.03\(\pm\)0.33 & 31.70\(\pm\)0.29 & **30.79\(\pm\)0.20** \\ \cline{2-8} & \multirow{8}{*}{MAE} & 3 & \multirow{8}{*}{35.84} & 14.01\(\pm\)0.15 & 14.21\(\pm\)0.08 & 13.70\(\pm\)0.15 & **13.33\(\pm\)0.06** \\ & & 6 & & 18.90\(\pm\)0.19 & 18.95\(\pm\)0.19 & 18.38\(\pm\)0.23 & **17.83\(\pm\)0.09** \\ & & 12 & 23.14\(\pm\)0.21 & 22.96\(\pm\)0.16 & 25.26\(\pm\)0.41 & **21.60\(\pm\)0.10** \\ & & 24 & & 26.61\(\pm\)0.32 & 26.37\(\pm\)0.36 & 25.26\(\pm\)0.41 & **24.45\(\pm\)0.22** \\ \hline \multirow{8}{*}{
\begin{tabular}{c} **\(\bullet\)** \\ **\end{tabular} } & \multirow{8}{*}{RMSE} & 3 & \multirow{8}{*}{42.33} & 26.43\(\pm\)0.36 & 26.56\(\pm\)0.20 & 25.51\(\pm\)0.32 & **24.75\(\pm\)0.17** \\ & & 6 & & 33.87\(\pm\)0.41 & 34.06\(\pm\)0.27 & 32.95\(\pm\)0.33 & **31.98\(\pm\)0.36** \\ \cline{1-1} & & 12 & 42.33 & 40.98\(\pm\)0.55 & 40.84\(\pm\)0.66 & 39.76\(\pm\)0.71 & **38.78\(\pm\)0.27** \\ \cline{1-1} & & 24 & & 45.08\(\pm\)1.02 & 44.86\(\pm\)0.70 & 45.04\(\pm\)0.88 & **42.18\(\pm\)0.84** \\ \cline{1-1} \cline{2-8} & \multirow{8}{*}{MAE} & 3 & \multirow{8}{*}{20.52\(\pm\)0.28} & 20.63\(\pm\)0.18 & 19.84\(\pm\)0.27 & **19.22\(\pm\)0.14** \\ \cline{1-1} & & 6 & & 27.14\(\pm\)0.40 & 27.23\(\pm\)0.23 & 26.37\(\pm\)0.32 & **25.53\(\pm\)0.34** \\ \cline{1-1} & & 12 & 29.31 & 33.16\(\pm\)0.59 & 32.92\(\pm\)0.58 & 32.14\(\pm\)0.74 & **31.19\(\pm\)0.24** \\ \cline{1-1} & & 24 & & 36.89\(\pm\)1.01 & 36.62\(\pm\)0.73 & 36.23\(\pm\)0.99 & **34.08\(\pm\)0.78** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Prediction accuracy computed with baselines.
Figure 4: Future PM\({}_{2.5}\) concentrations predicted by DGN-AEA.
The prediction fit curves of Linan are also plotted in Figure 4.
The above results are averages over the whole map together with fit curves for individual cities. To examine the prediction ability of the model at a local regional scale, we select the ground truth and predicted results of the Yangtze River Delta region over a continuous period for visualization. The result is shown in Figure 5.
### The function of dynamic graph
To explain why we choose to retain two types of graph edges, we conduct ablation studies to explore the predictive effect of using different edges. We find that our proposed model is the best in most cases, regardless of the dataset or the prediction time scale (Table 2).
We can see that the variants that do not use dynamic information perform worse than the dynamic graph method in the vast majority of cases.
In addition, the prediction accuracies of _Only Wind_ and _Only AEA_ are similar, and in most cases _Only AEA_ has even better MAE and RMSE metrics. In particular, on Dataset 3, our proposed DGN-AEA achieves average MAE reductions of 4.3%, 4.7%, 3.4%, and 7.8% over the two single dynamic edge-attribute variants.
### The function of future weather
Our model uses future weather data as known input for future air quality prediction. We compare the DGN-AEA and DGN-AEA (w/o weather) models. As shown in Figure 6, by using future meteorological data, the prediction accuracy is improved by 8.87%, 11.76%, 13.65%, and 15.15% on RMSE, and by 9.13%, 12.30%, 15.54%, and 17.76% on MAE. This shows that using future weather data is very useful.
We also compared our results with another adaptive graph model without future weather, Graph WaveNet. Graph WaveNet requires a large number of parameters and has a high com
Figure 5: Visualization of forecast results in the Yangtze River Delta region. The data is derived from dataset 3 with 24 prediction horizon time steps. (The unit is \(\mu g/m^{3}\))
putational resource overhead, so its training is slow. What's more, because the TCN model used as the time series module of Graph WaveNet can only perform convolution operations on one-dimensional variables, it cannot take future weather data as input. Compared with Graph WaveNet, our proposed model DGN-AEA (w/o weather) improves the RMSE on dataset 3 by 42.35%, 38.25%, 35.80%, and 33.98%, and the MAE by 31.86%, 27.10%, 26.13%, and 23.39%. We speculate that an adjacency matrix whose construction changes from moment to moment brings great difficulty to training on PM\({}_{2.5}\) datasets, making it difficult for the model to grasp the exact topology of the stations.
### Comparison of two attributes
Following the above steps, we obtain two kinds of graph edge attributes: one calculated from the wind field data, and another learned during training. To illustrate the difference and explain why the adaptive edge attributes are useful, we visualize both of them at the same time step in Figure 7.
### Complex network analysis on the learned adaptive edges
With DGN-AEA, an adaptive correlation network structure is obtained after the training phase. The properties of this network can be examined with analytical methods and indicators from the field of complex networks, which we list in the supplementary information.
We separately sum the weights of the incoming and outgoing edges of each node and compute the difference between the total outgoing weight and the total incoming weight. The sign of this difference indicates whether a node is of the type that is more affected by its surroundings or one that has a greater influence on its surroundings (Figure E). At the same time, we can compare the relationship between edge weights and degree (Equation E.1), and between edge weights and the degree centrality (Equation E.2) of the complex network, as shown in Figure E.9 and Figure E.10. We see a clear positive correlation between node weight and degree value; however, there is no obvious correlation between node degree centrality and degree value.
## 5 Conclusions
In this paper, we propose a flexible Dynamic Graph Neural Network with Adaptive Edge Attributes (DGN-AEA) based on the spatial domain. The method retains wind-based edges to respect the basic prior physical knowledge of air pollution transmission. At the same time, we calculate the transmission
\begin{table}
\begin{tabular}{c|c|c|c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{Methods} & \multirow{2}{*}{Only Wind} & \multirow{2}{*}{Only AEA} & \multirow{2}{*}{Static} & \multirow{2}{*}{W/O weather} & \multirow{2}{*}{AEA+Wind} \\ \cline{5-6} \cline{8-8} & & & 3 & 11.55\(\pm\)0.05 & 11.38\(\pm\)0.04 & 11.58\(\pm\)0.08 & 13.03\(\pm\)0.06 & **11.34\(\pm\)0.06** \\ \cline{5-8} \multirow{5}{*}{\(\text{RMSE}\)} & & 6 & 14.76\(\pm\)0.10 & 14.55\(\pm\)0.06 & 14.72\(\pm\)0.09 & 17.08\(\pm\)0.07 & **14.53\(\pm\)0.09** \\ & & 12 & 17.70\(\pm\)0.15 & 17.36\(\pm\)0.07 & 17.51\(\pm\)0.13 & 20.78\(\pm\)0.10 & **17.06\(\pm\)0.11** \\ & & 24 & 20.18\(\pm\)0.17 & 19.78\(\pm\)0.09 & 19.80\(\pm\)0.19 & 24.41\(\pm\)0.08 & **19.20\(\pm\)0.15** \\ \cline{2-8} \multirow{5}{*}{\(\text{MAE}\)} & & 3 & 8.93\(\pm\)0.05 & 8.78\(\pm\)0.04 & 8.95\(\pm\)0.06 & 10.14\(\pm\)0.05 & **8.75\(\pm\)0.05** \\ & & 6 & 11.70\(\pm\)0.10 & 11.51\(\pm\)0.06 & 11.68\(\pm\)0.08 & 13.73\(\pm\)0.06 & **11.50\(\pm\)0.08** \\ & & 12 & 14.11\(\pm\)0.17 & 13.79\(\pm\)0.08 & 13.97\(\pm\)0.13 & 16.90\(\pm\)0.11 & **13.56\(\pm\)0.11** \\ & & 24 & 15.90\(\pm\)0.19 & 15.55\(\pm\)0.09 & 15.60\(\pm\)0.19 & 18.84\(\pm\)0.09 & **15.06\(\pm\)0.14** \\ \hline \multirow{5}{*}{\(\text{RMSE}\)} & & 3 & 17.61\(\pm\)0.17 & 17.60\(\pm\)0.14 & 17.35\(\pm\)0.16 & 19.62\(\pm\)0.04 & **17.15\(\pm\)0.07** \\ & & 6 & 22.99\(\pm\)0.21 & 23.02\(\pm\)0.20 & 22.34\(\pm\)0.08 & 26.17\(\pm\)0.03 & **22.29\(\pm\)0.09** \\ & & 12 & 27.60\(\pm\)0.30 & 27.71\(\pm\)0.25 & 27.15\(\pm\)0.13 & 32.87\(\pm\)0.07 & **26.85\(\pm\)0.11** \\ & & 24 & 31.70\(\pm\)0.29 & 31.50\(\pm\)0.33 & 31.49\(\pm\)0.34 & 38.89\(\pm\)0.08 & **30.79\(\pm\)0.20** \\ \cline{2-8} \multirow{5}{*}{\(\text{MAE}\)} & & 3 & 13.70\(\pm\)0.15 & 13.69\(\pm\)0.13 & 13.69\(\pm\)0.13 & 15.31\(\pm\)0.04 & **13.33\(\pm\)0.06** \\ & & 6 & 18.41\(\pm\)0.20 & 18.45\(\pm\)0.20 & 18.67\(\pm\)0.08 & 21.12\(\pm\)0.03 & **17.83\(\pm\)0.09** \\ & & 12 & 22.26\(\pm\)0.32 & 22.37\(\pm\)0.23 & 22.35\(\pm\)0.14 & 26.99\(\pm\)0.08 & **21.60\(\pm\)0.10** \\ & & 24 & 25.26\(\pm\)0.41 & 25.06\(\pm\)0.31 & 24.81\(\pm\)0.33 & 31.94\(\pm\)0.11 & **24.45\(\pm\)0.22** \\ \hline \multirow{5}{*}{\(\text{RMSE}\)} & & 3 & 25.51\(\pm\)0.32 & 25.32\(\pm\)0.23 & 24.80\(\pm\)0.28 & 27.16\(\pm\)0.05 & **24.75\(\pm\)0.17** \\ & & 9 & 32.95\(\pm\)0.33 & 32.74\(\pm\)0.46 & 32.13\(\pm\)0.22 & 36.12\(\pm\)0.14 & **31.98\(\pm\)0.36** \\ \cline{1-1} & & 12 & 39.10\(\pm\)0.63 & 39.26\(\pm\)0.26 & 39.31\(\pm\)0.41 & 44.91\(\pm\)0.13 & **38.78\(\pm\)0.27** \\ \cline{1-1} & & 24 & 43.44\(\pm\)0.42 & 42.83\(\pm\)0.52 & 42.32\(\pm\)0.77 & 49.71\(\pm\)0.11 & **42.18\(\pm\)0.84** \\ \cline{1-1} \cline{2-8} \multirow{5}{*}{\(\text{MAE}\)} & & 3 & 19.84\(\pm\)0.27 & 19.68\(\pm\)0.20 & 19.26\(\pm\)0.24 & 21.15\(\pm\)0.05 & **19.22\(\pm\)0.14** \\ \cline{1-1} & & 6 & 26.37\(\pm\)0.32 & 26.17\(\pm\)0.45 & 25.63\(\pm\)0.22 & 29.11\(\pm\)0.14 & **25.53\(\pm\)0.34** \\ \cline{1-1} & & 12 & 31.54\(\pm\)0.64 & 31.51\(\pm\)0.28 & 31.72\(\pm\)0.42 & 36.93\(\pm\)0.16 & **31.19\(\pm\)0.24** \\ \cline{1-1} & & 24 & 35.72\(\pm\)0.45 & 35.08\(\pm\)0.53 & 34.58\(\pm\)0.79 & 41.44\(\pm\)0.13 & **34.08\(\pm\)0.78** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Result of ablation study.
volume on the outgoing and incoming edges separately when performing message passing and aggregation at the nodes, which simulates to some extent the law of conservation of matter in the transport and diffusion of pollutants. Besides, we fuse the adaptive edge attributes by means of the multi-graph structure. Experimental results show that our model achieves better prediction performance on the real-world PM\({}_{2.5}\) dataset.
In this way, we can adaptively learn the correlations between real sites and obtain better time series prediction results. However, how well this way of adaptively constructing learnable parameters can recover real-world network relationships from real datasets is also worth exploring; some ideas from network reconstruction methods may be worth borrowing.
## Acknowledgment
This work was supported by the National Natural Science Foundation of China (NO.42101027). We thank the support from the Save 2050 Program jointly sponsored by Swarma Club and X-Order.
## Appendix A Related Work about Air Quality Prediction
Air quality prediction issues have been studied for years. These issues were first studied by some conventional statis
Figure 6: Comparison among DGN-AEA, DGN-AEA (w/o weather) and Graph WaveNet(w/o weather) models. The results are the average of ten training sessions.
Figure 7: Difference between the two matrices used in our model.
tical methods, e.g., the autoregressive integrated moving average (ARIMA) [9; 33]. However, there is too much uncertainty and non-linearity in air quality prediction for these statistical models to achieve high prediction accuracy in long-term prediction.
Machine learning methods make use of historical observations to perform accurate predictions. Liu et al. [14] proposed a multi-dimensional collaborative support vector regression (SVR) model for air quality index (AQI) forecasting in the Jingjinji region while considering weather conditions. Dun et al. [34] adopted linear regression (LR) and SVR methods for short-term air quality prediction. Liu et al. [35] fused the principal component regression (PCR), SVR, and autoregressive moving average (ARMA) models to predict the air quality for six different kinds of pollutants. However, these machine learning methods did not capture the spatio-temporal correlations, thus limiting the prediction performance.
In recent years, deep learning methods have been widely employed for air quality prediction due to their high prediction accuracy. Ma et al. [36] proposed a transfer-learning-based stacked bidirectional long short-term memory (LSTM) model, which combined deep learning and transfer learning strategies to predict the air quality of some stations based on data observed at other stations. Wen et al. [17] proposed a spatiotemporal convolutional long short-term memory neural network to capture the temporal and spatial dependencies with LSTM and convolutional neural networks (CNNs), respectively. Zhang et al. [18] proposed a hybrid model (MTD-CNN-GRU) for PM\({}_{2.5}\) concentration prediction, in which CNNs were employed to extract the spatial relationships and gated recurrent units (GRUs) were applied to capture temporal features. In this way, they could capture the spatio-temporal correlations and achieve higher prediction accuracy.
## Appendix B Related Work about Graph-based Prediction Methods
Conventional deep learning methods are not suitable for data processing in non-Euclidean space, which can not model the spatial correlations very well. To solve the problem, graph-based deep learning methods are proposed and have been widely applied to air quality forecasting these years. Wang et al. [19] proposed an Attentive Temporal Graph Convolutional Network (ATGCN) for air quality prediction. The ATGCN encoded three types of relationships among air quality stations including spatial adjacency, functional similarity, and temporal pattern similarity into graphs and aggregated features using gated recurrent units (GRUs). Finally, a decoder was designed to conduct multi-step predictions. Qi et al. [31] then proposed a GC-LSTM model which combined the graph convolutional networks (GCNs) and LSTM to capture spatial terms and temporal attributes and predict the future PM\({}_{2.5}\) concentrations. Wang et al. [20] proposed a PM\({}_{2.5}\)-GNN model, which incorporated the domain knowledge into graph-structure data to model long-term spatio-temporal dependencies, for PM\({}_{2.5}\) concentrations prediction. Since multiple features were considered, this model could achieve excellent prediction performance, especially for long-term predictions.
## Appendix C Related Work about Dynamic Graph Models
Recently, to better model contextual information, dynamic graph models have been employed by some researchers. Zhou et al. [37] modeled a dynamic directed graph based on the wind field among the air quality stations. They then used the GCNs to capture the dynamic relationships among the stations and applied a temporal convolutional network (TCN) to predict the PM\({}_{2.5}\) concentrations. Diao et al. [38] employed a dynamic Laplacian matrix estimator to model the dynamic graph, which can better model the spatial dependencies. Based on the dynamic estimator, they proposed a dynamic spatio-temporal graph convolutional neural network for traffic forecasting and outperformed the baselines. Peng et al. [39] employed the reinforcement learning to generate dynamic graphs and combined the graphs with the LSTM model for long-term traffic flow prediction. They further proved that dynamic graphs reduced the effects of data defects with extensive experiments.
## Appendix D Related Work about Adaptive Graph Learning Models
To overcome the limitations of prior information, Wu et al. [32] developed an adaptive dependency matrix through node embedding to capture the hidden spatial dependency in the data, and the Multivariate Time Series Forecasting with Graph Neural Networks (MTGNN) model [40] also used this method to extract the uni-directed relations among variables. However, an adjacency matrix that changes with time in this manner brings a lot of interference information to the training of the model, thereby affecting the prediction accuracy. This is why Graph WaveNet does not perform well on real air quality prediction datasets.
\[C_{D}\left(N_{i}\right)=\sum_{j=1}^{g}x_{ij}\left(i\neq j\right) \tag{E.1}\]
\[C^{\prime}_{D}\left(N_{i}\right)=\frac{C_{D}\left(N_{i}\right)}{g-1} \tag{E.2}\]
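A short sketch of these appendix quantities, the weighted degree of Equation E.1 and the degree centrality of Equation E.2, computed on a toy adjacency matrix (our illustration; the example matrix is arbitrary):

```python
import numpy as np

def degree(A):
    """Equation E.1: weighted degree, excluding self-loops (i != j)."""
    A = A.copy()
    np.fill_diagonal(A, 0.0)
    return A.sum(axis=1)

def degree_centrality(A):
    """Equation E.2: degree normalized by the g-1 other nodes."""
    return degree(A) / (A.shape[0] - 1)

A = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.0, 0.0]])
print(degree(A), degree_centrality(A))
```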
|
2310.06841 | Malware Classification using Deep Neural Networks: Performance
Evaluation and Applications in Edge Devices | With the increasing extent of malware attacks in the present day along with
the difficulty in detecting modern malware, it is necessary to evaluate the
effectiveness and performance of Deep Neural Networks (DNNs) for malware
classification. Multiple DNN architectures can be designed and trained to
detect and classify malware binaries. Results demonstrate the potential of DNNs
in accurately classifying malware with high accuracy rates observed across
different malware types. Additionally, the feasibility of deploying these DNN
models on edge devices to enable real-time classification, particularly in
resource-constrained scenarios proves to be integral to large IoT systems. By
optimizing model architectures and leveraging edge computing capabilities, the
proposed methodologies achieve efficient performance even with limited
resources. This study contributes to advancing malware detection techniques and
emphasizes the significance of integrating cybersecurity measures for the early
detection of malware and further preventing the adverse effects caused by such
attacks. Optimal considerations regarding the distribution of security tasks to
edge devices are addressed to ensure that the integrity and availability of
large scale IoT systems are not compromised due to malware attacks, advocating
for a more resilient and secure digital ecosystem. | Akhil M R, Adithya Krishna V Sharma, Harivardhan Swamy, Pavan A, Ashray Shetty, Anirudh B Sathyanarayana | 2023-08-21T16:34:46Z | http://arxiv.org/abs/2310.06841v1 | Malware Classification using Deep Neural Networks: Performance Evaluation and Applications in Edge Devices
###### Abstract
With the increasing extent of malware attacks in the present day along with the difficulty in detecting modern malware, it is necessary to evaluate the effectiveness and performance of Deep Neural Networks (DNNs) for malware classification. Multiple DNN architectures can be designed and trained to detect and classify malware binaries. Results demonstrate the potential of DNNs in accurately classifying malware with high accuracy rates observed across different malware types. Additionally, the feasibility of deploying these DNN models on edge devices to enable real-time classification, particularly in resource-constrained scenarios proves to be integral to large IoT systems. By optimizing model architectures and leveraging edge computing capabilities, the proposed methodologies achieve efficient performance even with limited resources. This study contributes to advancing malware detection techniques and emphasizes the significance of integrating cybersecurity measures for the early detection of malware and further preventing the adverse effects caused by such attacks. Optimal considerations regarding the distribution of security tasks to edge devices are addressed to ensure that the integrity and availability of large-scale IoT systems are not compromised due to malware attacks, advocating for a more resilient and secure digital ecosystem.
Cybersecurity, Data Protection, Deep Neural Networks, IoT Security, Malware Classification, Performance Evaluation
## 1 Introduction
Combating the relentless spread of malware continues to be a major concern in the constantly changing world of cybersecurity. Malicious software, or malware, seriously threatens the integrity, confidentiality, and availability of digital assets, making accurate malware classification a critical component of contemporary security systems. Traditional signature-based methods, long used for malware detection, are ineffective against fast-evolving and zero-day malware, which calls for the adoption of more advanced methodologies. Deep neural networks (DNNs), a particularly noteworthy development in deep learning in recent years, have shown significant promise in several areas, including image recognition, natural language processing, and autonomous systems. In the field of malware classification, their capacity to automatically learn sophisticated patterns and features from raw data has attracted interest. DNNs can dramatically improve the precision and effectiveness of malware detection by sifting through malware samples and deriving useful representations.
IoT devices are complex in nature and are subject to a wide variety of cyber-attacks, with malware attacks being among the most prominent. Additionally, with the increasing adoption of IoT devices in industry, these IoT systems will experience a rise in cyber-attacks. Therefore, it is necessary to deploy efficient methodologies to detect and mitigate the adverse effects that would otherwise be caused by such malware attacks. According to Quoc-Dung Ngo et al., IoT malware detection techniques can be broadly classified into two domains, namely static analysis and dynamic analysis [5].
Dynamic analysis involves executing the binaries and monitoring for any malicious activity that could potentially infect the real-time execution environment. In contrast, static analysis examines the binaries without executing them [5]. The methodologies explored in this paper leverage deep learning techniques to identify patterns and classify malware binaries without having to execute them.
Additionally, we cover the crucial topic of implementing advanced malware classification algorithms in contexts with limited resources. The computational and memory resources of edge devices, such as Internet of Things (IoT) gadgets and low-powered computer systems, are constrained. For effective and real-time malware detection at the network edge, it is critical to assess the applicability of our deep neural network approach in such
Figure 1: ML based Malware Classification Flow
devices. Therefore, the computation time, i.e., the latency to classify malware binaries, is measured once the trained model is obtained. This research intends to advance cybersecurity practice by examining the functionality and applicability of our DNN-based malware classification methodology. By providing security experts with a cutting-edge tool for early and accurate malware detection, our research has the potential to increase the resilience of digital ecosystems against ever-increasing cyberthreats.
## 2 Related Work
Previous research on malware classification can be broadly categorized into two main approaches: they are,
### Non-Machine Learning Models
Traditionally, malware detection relied on non-machine learning techniques, such as static or dynamic signature-based methods. Static analysis involves examining the syntax or structural properties of the program to identify malware before its execution. However, malware developers employ various encryption, polymorphism, and obfuscation techniques to evade these detection algorithms. In the dynamic approach, malware is executed in a controlled virtual environment, and its behavior is analyzed to detect harmful actions during or after execution. While dynamic analysis shows promise, it remains complex and time-consuming. The major drawback of classical signature-based detection is its lack of scalability, and its effectiveness can be compromised with the emergence of new variants of malware. As a result, researchers have turned to intelligent machine learning algorithms as an alternative approach.
Dynamic Analysis:Researchers have made significant efforts to propose behavior-based malware detection methods that capture program behavior at runtime. One approach is to monitor the program's interactions with the operating system through the analysis of API calls. To develop effective and robust systems, some studies consider additional semantic information, such as the sequence of API calls and the use of graph representations. These approaches analyze the temporal order of API calls, the effect of API calls on registers, or extract behavioral graphs based on dependencies between API call parameters. In contrast to program-centric approaches, global, system-wide methods have been proposed, such as an access activity model by Lanzi et al. [8], which captures generalized interactions of benign applications with operating system resources, resulting in a low false positive rate. However, dynamic analysis techniques face challenges in handling execution-driven datasets, security precautions during experimentation, and dynamic anti-analysis defenses used by modern malware to evade detection.
Static Analysis:On the other hand, static approaches perform analysis without executing the program. The research literature demonstrates a wide variety of static analysis methods, with SAFE [11] and SAVE [10] being influential heuristic static malware detection approaches. These works proposed using different patterns to detect malicious content in executable files. Since then, numerous techniques have emerged based on different malware attributes, such as the header or body of the Portable Executable (PE) file, with analysis conducted on bytecode or by disassembling the code to extract opcodes and other relevant information. The main challenge in static analysis is coping with packing and obfuscation. Recently, generic approaches for the automatic de-obfuscation of obfuscated programs have been proposed. Additionally, static techniques have been employed to assess if a detected malware is like a previously seen variant without performing costly unpacking.
### Machine Learning Models
To address the limitations of non-machine learning methods and capitalize on the shared behavior patterns among malware variants, anti-malware organizations have developed sophisticated classification methods based on data mining and machine learning techniques. These methods employ various feature extraction methods to build intelligent malware detection systems, often using SVM-based classifiers, Naive Bayes classifiers, or multiple classifiers [9].
For example, Nataraj et al. [7] propose a strategy to represent malware as grayscale images and use GIST to compute texture features, which are then classified using a k-nearest neighbor algorithm. However, these shallow learning techniques suffer from scalability issues with the growing number of malware samples and require manual feature engineering. To overcome these challenges, the current research focuses on developing deep learning architectures that are more robust and applicable to various malware samples.
While some techniques target superior performance on specific datasets, like the Microsoft Malware Dataset [12], we aim to construct a more versatile framework applicable to any type of malware sample. For instance, Drew et al. [13], [14] employed a modern gene sequence classification tool for malware classification on the Microsoft Malware Dataset. Ahmadi et al. [15] trained a classifier based on the XGBoost technique, while the winning team of the Microsoft Malware Classification Challenge (BIG 2015) utilized a complex combination of features with the XGBoost classifier.
Another related work proposed in [16] involves the application of a CNN for malware classification. The author experimented with three different architectures, each time adding an extra block, consisting of a convolutional layer followed by a max-pooling layer, to the base model. However, their model remains relatively shallow. In contrast, our research delves into exploring deeper CNN architectures for improved malware classification.

Figure 2: Types of Malware
## 3 Data Availability and Preparation
For the purpose of demonstrating the effectiveness of DNNs on malware binaries, the MaleVis [6] dataset was chosen. The MaleVis dataset contains 14,226 malware images spanning 26 classes, including 1 cleanware class. From the dataset, 10 malware classes were sampled, and a total of 1,400 images were drawn from these classes for training.

For testing and validation purposes, a total of 550 images were sampled across the 10 classes. The images in the MaleVis [6] dataset were obtained by extracting the binary images from the malware files in 3-channel RGB format. The images were then resized to square resolutions of 224x224 and 300x300.
## 4 Implementation Methodology
The experimental setup involved training the models for 10 epochs on a system with an RTX 3050 GPU. To increase the effectiveness of the models, pre-trained ImageNet weights were imported and applied before initiating the training process. A learning rate of \(10^{-4}\) was used, configured to adapt during the training process with a minimum allowed learning rate of \(10^{-7}\).
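A minimal sketch of this setup in PyTorch is given below. The torchvision backbone, the Adam optimizer, and the `ReduceLROnPlateau` schedule (with its `factor` and `patience` values) are illustrative assumptions, since the text specifies only the initial and minimum learning rates.

```python
import torch
import torchvision

# Backbone with pre-trained ImageNet weights; 10-class head for the
# sampled MaleVis classes. (DenseNet201 chosen here for illustration.)
model = torchvision.models.densenet201(weights="IMAGENET1K_V1")
model.classifier = torch.nn.Linear(model.classifier.in_features, 10)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer
# Adaptive learning rate floored at 1e-7; factor/patience are assumptions.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.1, patience=2, min_lr=1e-7
)
# After each of the 10 epochs: scheduler.step(validation_loss)
```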
We ran extensive tests to gauge the precision and effectiveness of our method. We do this by comparing the deep neural network's classification accuracy on unseen samples after training it on a broad array of malware samples. One of the key metrics used in the evaluation of resource efficiency is computational latency, measured as the time taken to classify the set of 550 test images. Other metrics such as accuracy, recall, and F1 score were also taken into consideration while testing the models and are covered below. The details of the models utilized are as follows.
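A sketch of how the latency metric could be measured is given below, assuming `model` and a `test_loader` over the 550 test images are defined as above; this is one plausible measurement procedure, not the paper's exact code.

```python
import time
import torch

model.eval()
start = time.perf_counter()
with torch.no_grad():
    for images, _ in test_loader:        # 550 test images in total
        _ = model(images.to("cuda"))
torch.cuda.synchronize()                 # wait for queued GPU work to finish
latency = time.perf_counter() - start
print(f"Compute latency: {latency:.2f} s")
```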
### ResNetV2
ResNetV2 is an extension of ResNet created to address challenges that arise in deep convolutional neural networks (CNNs). By introducing "bottleneck" blocks that compress feature maps, it retains accuracy while lowering computational complexity. To reduce degradation and hasten convergence during training, "pre-activation" modules place batch normalization and ReLU activation before convolutions. ResNetV2 performs better than its predecessor, especially in deeper network topologies, displaying increased training effectiveness and precision. A pioneering architecture in computer vision research, ResNetV2 has been widely used for image classification, object recognition, and semantic segmentation applications. Its breakthroughs advance the state of the art in image recognition by mitigating gradient problems and improving optimization in deep CNNs.
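As a concrete illustration, a minimal sketch of a pre-activation bottleneck block follows; the channel sizes are placeholders, and this is a simplified form of the block rather than the exact ResNetV2 definition.

```python
import torch.nn as nn

class PreActBottleneck(nn.Module):
    """ResNetV2-style block: BN and ReLU come *before* each convolution."""
    def __init__(self, channels, bottleneck):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, bottleneck, kernel_size=1),   # compress
            nn.BatchNorm2d(bottleneck), nn.ReLU(),
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1),
            nn.BatchNorm2d(bottleneck), nn.ReLU(),
            nn.Conv2d(bottleneck, channels, kernel_size=1),   # expand
        )

    def forward(self, x):
        return x + self.block(x)  # identity shortcut
```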
### DenseNet201
DenseNet201 is a deep convolutional neural network architecture that extends the DenseNet concept by employing 201 layers. It utilizes dense blocks, where each layer receives feature maps from preceding layers, facilitating feature reuse, and mitigating the vanishing gradient problem. This densely connected structure fosters efficient information flow and parameter sharing, resulting in improved memory utilization and better gradient propagation during training. With its substantial depth, DenseNet201 excels in learning complex patterns and representations from data, making it highly effective for various computer vision tasks such as image classification, object detection, and semantic segmentation. Its exceptional performance on benchmark datasets has solidified DenseNet201 as a leading architecture in the field of deep learning for visual recognition tasks.
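A minimal sketch of this dense connectivity pattern is shown below; the growth rate and layer count are placeholders, and transition layers are omitted for brevity.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate), nn.ReLU(),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # feature reuse
        return x
```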
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Class ID & Family & Malware & Sample Size \\ & & Category & \\ \hline
1 & Adposhel & Adware & 140 \\ \hline
2 & Agent & Trojan & 140 \\ \hline
3 & Allaple & Worm & 140 \\ \hline
4 & Amonetize & Adware & 140 \\ \hline
5 & Androm & Backdoor & 140 \\ \hline
6 & Autorun & Worm & 140 \\ \hline
7 & BrowseFox & Adware & 140 \\ \hline \end{tabular}
\end{table}
Table 1: Classes Sampled for the Purpose of Training

Figure 3: Images pertaining to the classes of the MaleVis dataset [6]
### InceptionNetV3
An improved convolutional neural network architecture called InceptionNetV3, sometimes known as Inception V3, was created for image identification applications. It provides numerous parallel convolutional layers of various filter sizes to effectively capture features at various scales and resolutions, building on the strengths of its forerunners, InceptionNet and Inception V2. In order to capture both fine-grained and global characteristics, the "Inception module" concurrently uses 1x1, 3x3, and 5x5 convolutions. Meanwhile, "Factorized 7x7" convolutions lessen computational complexity without sacrificing the receptive field.
With the use of batch normalization and auxiliary classifiers, it also improves convergence and addresses the vanishing gradient issue. Global average pooling minimizes the number of parameters and avoids overfitting. InceptionNetV3 has been widely used in research and practical applications because of its exceptional performance in image classification, object identification, and visual recognition tasks.
### Xception
The deep convolutional neural network architecture known as Xception, short for "Extreme Inception," was unveiled by Google and was motivated by the Inception idea. It replaces conventional standard convolutions with "depthwise separable convolutions," which combine a depthwise and a pointwise convolution, while maintaining accuracy.
Xception speeds up training and inference times by improving feature learning and parameter efficiency, making it well suited for computer vision workloads, especially in resource-constrained contexts such as mobile devices and edge computing. Xception has established itself as a leading deep learning model and a popular option for image recognition applications thanks to its outstanding performance.
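A minimal sketch of a depthwise separable convolution follows; the kernel size and channel counts are placeholders.

```python
import torch.nn as nn

def separable_conv(in_ch, out_ch, kernel_size=3):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    return nn.Sequential(
        # depthwise: one filter per input channel (groups=in_ch)
        nn.Conv2d(in_ch, in_ch, kernel_size, padding=kernel_size // 2,
                  groups=in_ch, bias=False),
        # pointwise: 1x1 convolution mixes information across channels
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
    )
```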
### MobileNet Small
A variation of the MobileNet architecture called MobileNet Small is designed for fast and efficient deep learning on devices with limited resources. It significantly decreases model size and computational complexity by using depthwise separable convolutions, ensuring excellent performance on mobile devices and embedded systems.
In spite of its effectiveness, MobileNet Small retains respectable accuracy in jobs like object detection and image categorization. It is an ideal option for on-device AI applications because of this design decision, which enables real-time processing and reduces computational and energy expenses. MobileNet Small, which is widely used in edge computing applications, demonstrates its usefulness in enhancing deep learning for mobile and embedded devices.
### MobileNet Large
MobileNet Large is a lightweight deep learning architecture designed for efficient image classification on mobile devices. It also utilizes depthwise separable convolutions, a width multiplier, and a resolution multiplier to reduce computational complexity and model size. Despite its efficiency-focused design, MobileNet Large maintains competitive accuracy and is well suited for real-time applications on resource-constrained devices, making it a significant advancement in the field of computer vision.
In summary, MobileNet Small sacrifices some accuracy for even greater efficiency and compactness, making it ideal for scenarios where minimizing model size and computational requirements are critical, while MobileNet Large strikes a balance between efficiency and accuracy, making it more suitable for general-purpose mobile vision applications on devices with moderate resources.
## 5 Results and Discussions
Table 2 summarizes the results obtained from testing the various DNNs on the MaleVis [6] dataset. The compute latency is the time taken, in seconds, to classify the 550 test images sampled from the dataset.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Model & Compute Latency (s) & Accuracy & Recall & F1 Score \\ \hline ResNetV2 & 8.062 & 86.54 & 86.27 & 86.78 \\ \hline DenseNet201 & 10.87 & 94.54 & 94.43 & 94.42 \\ \hline InceptionNetV3 & 8.33 & 91.81 & 91.53 & 91.64 \\ \hline Xception & 8.11 & 93.63 & 93.68 & 93.64 \\ \hline MobileNet Small & 3.51 & 85.63 & 84.52 & 81.77 \\ \hline MobileNet Large & 6.17 & 88.01 & 87.92 & 87.87 \\ \hline \end{tabular}
\end{table}
Table 2: Results Obtained from Training Various DNNs

Figure 5: MobileNet Architecture
It was observed that DenseNet201 achieved the highest accuracy of the models in the test run, although a clear tradeoff between computational latency and accuracy can be noticed: DenseNet201 showed the highest compute latency along with the highest accuracy. MobileNet Small, on the other hand, showed accuracy on par with that of ResNetV2 with an exceptional compute latency of just **3.51 seconds**.

This indicates that, with effective fine-tuning, the model could be viably deployed in real-world scenarios as well. MobileNet Large showed strong results, achieving an accuracy higher than that of its smaller counterpart, but with a slight tradeoff in compute latency.
Furthermore, these results can be used to choose the right model for deployment in resource-constrained scenarios, according to the requirements and the computational power available on edge devices.
## 6 Conclusion
In this survey article, we have explored the application of deep neural networks (DNNs) for malware classification. Malware detection and classification are critical tasks in today's cybersecurity landscape due to the ever-evolving nature of malicious threats. Traditional non-machine learning methods such as static and dynamic analysis have been widely used but are facing challenges in coping with the increasing complexity and diversity of malware.
The machine learning methods section focused on DNN architectures, namely ResNet, DenseNet, InceptionNet, Xception, MobileNet Small, and MobileNet Large. These DNNs have demonstrated promising results in various computer vision tasks and have shown potential for tackling malware classification as well.
From the performance evaluation, it is evident that DNN architectures can effectively detect and classify malware binaries with high accuracy and improved generalization. DenseNet201 showed the best performance among the models evaluated, with an accuracy of 94.54%. The ability to handle large-scale datasets and learn intricate patterns allows DNNs to discern even the most sophisticated malware variants. Moreover, transfer learning techniques can be leveraged to adapt models pre-trained on related tasks, reducing data requirements and training time.
Regarding the applicability in edge devices, the compact nature of some DNNs like MobileNet Small and MobileNet Large allows for efficient deployment on resource-constrained devices, such as IoT devices and smartphones. The ability to perform classification on the edge can enhance real-time threat detection and response, mitigating the need for constant cloud communication and reducing latency.
However, societal concerns also need to be addressed when using DNNs for malware classification. There are ethical and privacy considerations related to data collection, model fairness, and potential misuse of these technologies. It is crucial to adhere to robust privacy policies and ensure the transparency and accountability of the deployed models.
|
2305.07731 | Predicting COVID-19 pandemic by spatio-temporal graph neural networks: A
New Zealand's study | Modeling and simulations of pandemic dynamics play an essential role in
understanding and addressing the spreading of highly infectious diseases such
as COVID-19. In this work, we propose a novel deep learning architecture named
Attention-based Multiresolution Graph Neural Networks (ATMGNN) that learns to
combine the spatial graph information, i.e. geographical data, with the
temporal information, i.e. timeseries data of number of COVID-19 cases, to
predict the future dynamics of the pandemic. The key innovation is that our
method can capture the multiscale structures of the spatial graph via a
learning to cluster algorithm in a data-driven manner. This allows our
architecture to learn to pick up either local or global signals of a pandemic,
and model both the long-range spatial and temporal dependencies. Importantly,
we collected and assembled a new dataset for New Zealand. We established a
comprehensive benchmark of statistical methods, temporal architectures, graph
neural networks along with our spatio-temporal model. We also incorporated
socioeconomic cross-sectional data to further enhance our prediction. Our
proposed model have shown highly robust predictions and outperformed all other
baselines in various metrics for our new dataset of New Zealand along with
existing datasets of England, France, Italy and Spain. For a future work, we
plan to extend our work for real-time prediction and global scale. Our data and
source code are publicly available at https://github.com/HySonLab/pandemic_tgnn | Viet Bach Nguyen, Truong Son Hy, Long Tran-Thanh, Nhung Nghiem | 2023-05-12T19:00:17Z | http://arxiv.org/abs/2305.07731v1 | # Predicting COVID-19 pandemic by spatio-temporal graph neural networks: A New Zealand's study
###### Abstract
Modeling and simulations of pandemic dynamics play an essential role in understanding and addressing the spreading of highly infectious diseases such as COVID-19. In this work, we propose a novel deep learning architecture named Attention-based Multiresolution Graph Neural Networks (ATMGNN) that learns to combine the spatial graph information, i.e. geographical data, with the temporal information, i.e. timeseries data of number of COVID-19 cases, to predict the future dynamics of the pandemic. The key innovation is that our method can capture the multiscale structures of the spatial graph via a learning to cluster algorithm in a data-driven manner. This allows our architecture to learn to pick up either local or global signals of a pandemic, and model both the long-range spatial and temporal dependencies. Importantly, we collected and assembled a new dataset for New Zealand. We established a comprehensive benchmark of statistical methods, temporal architectures, graph neural networks along with our spatio-temporal model. We also incorporated socioeconomic cross-sectional data to further enhance our prediction. Our proposed model has shown highly robust predictions and outperformed all other baselines in various metrics for our new dataset of New Zealand along with existing datasets of England, France, Italy and Spain. In future work, we plan to extend our work to real-time prediction and global scale. Our data and source code are publicly available at [https://github.com/HySonLab/pandemic_tgnn](https://github.com/HySonLab/pandemic_tgnn).
## 1 Introduction
The Coronavirus Disease 2019 (COVID-19) has been, and currently is, a major global pandemic, challenging every country's population and public health systems. As a geographically isolated island country, New Zealand mostly contained the spread of COVID-19 until early 2022, when infection cases surged to more than 2 million confirmed cases by the end of the year (WHO data, [https://covid19.who.int/region/wpro/country/nz](https://covid19.who.int/region/wpro/country/nz)). While New Zealand responded promptly, containing the spread and effectively vaccinating the population to keep case numbers low, the sudden rise in infections posed certain challenges to the healthcare system.
In the wake of the spread of COVID-19, many epidemiological modeling and prediction models emerged, seeking to project the progression of the pandemic and inform public health authorities to take measures when appropriate. Traditional dynamics case prediction models, including the family of Susceptible-Exposed-Infectious-Recovered (SEIR) models and their variants, have been widely applied to simulate the trajectories of the pandemic [1, 2, 3]. SEIR models are hard to use due to the difficulty of estimating their parameters and of solving the underlying nonlinear systems of ordinary differential equations, alongside their restrictive built-in assumptions [4]. Other statistical prediction methods have been used, among which are the autoregressive integrated moving average (ARIMA) models [5, 6] and the time-series prediction Prophet model [7]. However, linear statistical models cannot capture the non-linear nature of disease infection progression, especially in the case of New Zealand's infection growth patterns. To model non-linear disease growth functions, artificial neural networks and deep learning models have been developed and trained to predict the infection case time series of each health area. The most common types of deep learning models for epidemic modeling are Long Short-Term Memory (LSTM)-based models, in which the architecture is specially designed to learn and represent historical or temporal information [8]. LSTM-based forecasting models can more accurately predict the number of cases and capture non-linear, non-monotonous patterns in case data [9], but otherwise cannot exploit crucial spatial or geographical information for predicting the spread of COVID-19 over multiple areas.
It is shown that incorporating geospatial information, including but not limited to movement and connectedness information, helps with the forecasting performance of LSTM-based and deep learning model [10]. One of the classes of deep learning models that can seamlessly embed geospatial information is Graph Neural Networks (GNNs), neural network deep learning models
that can capture topological information in graph- and network-based data [11]. Following in the footsteps of previous efforts at COVID-19 forecasting with GNNs and spatial disease features [12], we propose improved spatiotemporal graph neural network models that can accurately learn and forecast COVID-19 case progression in New Zealand. To this end, we gathered and reformatted New Zealand COVID-19 case data, and constructed day-to-day disease graphs based on geographical information; graph disease representations are then fed to a hierarchical, multi-resolution temporal graph neural network model that can automatically group multiple disease areas to learn large-scale disease properties [13, 14]. The _Results_ section 2 contains a demonstration of the graph-based New Zealand COVID-19 data format, various disease spread progression information, a performance comparison between different types of forecasting models, and an evaluation of the models' predictive capabilities. The _Discussion_ section 3 discusses the implications of the results on practical applications, as well as potential improvements and precautions. The _Related works_ section 4 presents a literature summary of the most relevant and up-to-date prior works related to our current work. The _Methods_ section 5 provides details on the construction and mechanisms of the experimental epidemiological forecasting models.
## 2 Results
### Descriptive data analysis
The primary component of the dataset is the number of new cases in each region of the 20 district health boards in New Zealand (confirmed localized cases or general quarantine/unknown cases). Preliminary data analysis shows that there are significant variations between the different district health boards in terms of the volume of new cases, quantified by mean and standard deviation, but the progression patterns are similar between boards (Figure 3). Spikes in new case counts are typically 5-7 days apart, indicative of either the disease incubation period [16] or a periodic reporting artifact in data collection. Moreover, spikes between different regions generally coincide (Figure 3), with the highest spikes and surges across the entire sampled time range occurring in March 2022. We confirmed that the case surges demonstrated by the dataset are consistent with existing pandemic reports during the same time frames [17].
### Experiment task description
We comprehensively evaluate the forecasting effectiveness of the models in short-, mid-, and long-term prediction windows. The models are trained and assessed on their predictions 3, 7, 14, and 21 days from the input data. Data from day 1 to day \(T\) is used to train one model at a time, and then predictions are obtained from the model from day \(T+1\) to day \(T+d\), where \(d\) is the prediction window size and \(0<d<22\).
Figure 1: New Zealand’s District Health Board map [15].
Figure 3: The number of daily new cases of all district health boards by the number of days since January 1st, 2022.
Figure 2: Left: The total number of COVID-19 cases by each district health board within the examination period. Right: GDP concentration across district health boards of New Zealand.
Note that each model within a single class of models is trained separately and specifically for a single fixed time window. In other words, two different models are trained to predict days \(T+a\) and \(T+b\), where \(a,b>0\) and \(a\neq b\). The size of the training set gradually increases as time progresses, and for each value of \(T\) the best model is identified via a validation set with no overlapping days with the test set. We trained and validated the models on the time-series data from March 4th, 2022 to September 4th, 2022, and performed further model evaluations to examine generalization performance on an out-of-distribution set from September 4th, 2022 to November 4th, 2022. Additionally, we ran experiments on all previous European datasets, one-to-one as in a previous study [12], to further test the effectiveness of the baseline models and to more succinctly compare graph spatiotemporal models to alternatives in a variety of disease settings.
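This evaluation protocol amounts to an expanding-window scheme; a minimal sketch is given below, where `fit_and_predict` is a hypothetical stand-in for training any one of the models described next and `first_T` is the first cut-off day.

```python
def expanding_window_eval(cases, d, first_T, fit_and_predict):
    """cases: array of shape (num_days, num_regions); d: prediction window."""
    errors = []
    for T in range(first_T, cases.shape[0] - d):
        train = cases[: T + 1]              # training set grows with T
        pred = fit_and_predict(train, d)    # forecast for day T + d
        errors.append(abs(pred - cases[T + d]).mean())
    return sum(errors) / len(errors)
```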
### Baselines and Comparisons
Several state-of-the-art COVID-19 forecasting models can be implemented as baselines to measure the relative performance of our proposed models. Therefore, we compare the different spatio-temporal models with traditional statistical prediction and neural network-based regression models that have recently been applied to the problem of COVID-19 forecasting. Note that models that require recovery, death, and policy data, such as SEIR, are omitted since the dataset only provides the number of confirmed cases.
Simple statistical models: The class of most rudimentary statistical models for forecasting. The models examined include (1) AVG: The average number of cases for one region up to the time of the test day (e.g., the prediction for day 13 is based on the average number of cases of the last 12 days);
\begin{table}
\begin{tabular}{|l|c c c|c c c|c c c|c c|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Next 3 Days} & \multicolumn{3}{c|}{Next 7 Days} & \multicolumn{3}{c|}{Next 14 Days} & \multicolumn{3}{c|}{Next 21 Days} \\ \cline{2-13} & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) \\ \hline AVG & 247.20 & 325.92 & -3.22 & 258.95 & 340.95 & -3.58 & 277.16 & 362.98 & -4.22 & 292.23 & 379.98 & -4.87 \\ AVG\_WINDOW & 80.88 & 111.15 & 0.76 & 104.09 & 142.37 & 0.55 & 144.88 & 196.63 & -0.02 & 176.82 & 238.39 & -0.79 \\ LAST\_DAY & 118.81 & 158.56 & 0.64 & 73.65 & 102.09 & 0.84 & 120.99 & 164.78 & 0.47 & 156.17 & 211.44 & -0.08 \\ \hline LIN\_REG & 182.46 & 284.61 & 0.31 & 213.53 & 336.56 & -0.01 & 272.77 & 440.60 & -0.77 & 335.95 & 551.23 & -1.81 \\ GP\_REG & 331.43 & 471.17 & -0.89 & 332.08 & 472.45 & -0.98 & 325.55 & 464.20 & -0.97 & 322.08 & 460.23 & -0.96 \\ RAND\_FOREST & 98.97 & 152.96 & 0.80 & 72.85 & 111.69 & 0.89 & 112.02 & 168.81 & 0.74 & 140.77 & 210.73 & 0.59 \\ XGBOOST & 109.68 & 165.36 & 0.77 & **68.51** & **105.91** & **0.90** & 108.45 & 165.17 & 0.75 & 137.45 & 208.13 & 0.60 \\ \hline PROPHET & 119.32 & 642.78 & -0.24 & 148.58 & 770.55 & -1.50 & 222.01 & 526.58 & -0.59 & 292.54 & 407.66 & -0.17 \\ ARIMA & 132.49 & 534.26 & 0.14 & 155.44 & 523.57 & -0.15 & 204.51 & 472.95 & -0.28 & 239.06 & 423.17 & -0.26 \\ LSTM & 186.86 & 242.62 & -0.97 & 168.43 & 222.65 & -0.39 & 140.69 & 192.39 & 0.38 & 128.04 & 182.35 & 0.59 \\ \hline
**MPNN** & 80.33 & 110.75 & 0.84 & 87.45 & 121.23 & 0.79 & 121.41 & 168.34 & 0.53 & 153.62 & 210.69 & 0.15 \\
**MGNN** & 80.87 & 111.67 & 0.83 & 89.77 & 124.56 & 0.74 & 125.30 & 172.46 & 0.46 & 156.25 & 213.55 & 0.06 \\
**MPNN+LSTM** & **75.25** & **104.64** & **0.86** & 85.14 & 117.92 & 0.84 & **88.28** & **121.71** & **0.85** & **99.85** & **137.74** & **0.83** \\
**ATMGNN** & 77.49 & 106.96 & **0.86** & 86.85 & 119.68 & 0.84 & 90.43 & 124.89 & 0.84 & 101.87 & 140.33 & 0.82 \\ \hline \end{tabular}
\end{table}
Table 1: New Zealand: Performance of all experimental models evaluated based on the metrics specified in Section 2.5
Figure 4: The number of total weekly new cases of all district health boards by the number of weeks since January 1st, 2022.
(2) AVG_WINDOW: The average number of cases in the past \(d\)-day window for one region (e.g., for \(d=7\), the prediction for day 13 is based on the average number of cases of the last 7 days, from day 6 onwards); and (3) LAST_DAY: The prediction for the next day in one region is the same as the number of cases on the previous day.
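A minimal sketch of these three baselines, for a single region's daily case array `y` and window size `d`:

```python
import numpy as np

def avg(y):            return np.mean(y)        # (1) AVG: all history
def avg_window(y, d):  return np.mean(y[-d:])   # (2) AVG_WINDOW: last d days
def last_day(y):       return y[-1]             # (3) LAST_DAY: previous day
```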
Traditional machine learning models: The input format for all models in this class is the case history up to the prediction date of each district health board. The models examined include (4) LIN_REG [18]: Ordinary least squares linear regression, which fits a line onto seen training samples to predict the number of future cases (linear approximation); (5) GP_REG [19]: Gaussian Process Regressor, a non-parametric regression model commonly used in predictive analysis that implements Gaussian processes; (6) RAND_FOREST [20]: A random forest regression model that produces case predictions using decision trees, with multiple trees built from training case data and the final results averaged; and (7) XGBOOST [21]: An improved version of the random forest regression model using gradient boosting.
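A sketch of these baselines using scikit-learn and xgboost follows; the fixed look-back windows `X_train`/`X_test` and targets `y_train` are assumed to be prepared from each board's case history.

```python
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

# X: (num_samples, look_back) windows of case history; y: cases d days ahead.
models = {
    "LIN_REG": LinearRegression(),
    "GP_REG": GaussianProcessRegressor(),
    "RAND_FOREST": RandomForestRegressor(),
    "XGBOOST": XGBRegressor(),
}
predictions = {name: m.fit(X_train, y_train).predict(X_test)
               for name, m in models.items()}
```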
Parameterized regression time-series forecasting models: The class of regression models with specific components represented as parameters. The models examined include (8) PROPHET [22]: A forecasting model for various types of time series that has also seen extensive use in forecasting COVID-19, where the input is the entire historical time series of case numbers of one region up to before the testing day; (9) ARIMA [5]: A simple autoregressive integrated moving average model, whose input is similar to PROPHET's; and (10) LSTM [9]: A two-layer bidirectional LSTM model that takes as input the sequence of new cases in a region for the last 7 days, a popular model for the forecasting task capable of state-of-the-art performance.
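A short usage sketch for the two parameterized baselines is given below, assuming `series` is a 1-D array of one region's daily case counts and `dates` the matching dates; the ARIMA order is an illustrative assumption, not the paper's setting.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from prophet import Prophet

# ARIMA: (p, d, q) = (2, 1, 2) chosen purely for illustration.
arima_forecast = ARIMA(series, order=(2, 1, 2)).fit().forecast(steps=7)

# Prophet expects a dataframe with columns "ds" (dates) and "y" (values).
m = Prophet().fit(pd.DataFrame({"ds": dates, "y": series}))
future = m.make_future_dataframe(periods=7)
prophet_forecast = m.predict(future)["yhat"].tail(7)
```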
Graph neural network-based models: The proposed graph models to be compared with the previous baseline models, with and without temporal components. The models examined include (11) MPNN [23]: A message-passing neural network model with separate layers for each day in the case time series; (12) MGNN [14, 24]: A message-passing neural network model similar to MPNN, but with multiple graph resolutions and learned clustering of different regions; (13) MPNN+LSTM: A message-passing neural network model combined with a long short-term memory neural time-series model; and (14) ATMGNN: A multiresolution graph model based on the MGNN model combined with Transformers for modeling time series. All models in this category are described in detail in Section 5.
### Experimental setup
We detail the hyperparameter setup of the deep learning prediction models in our experiments. For all graph-based models (MPNN, MPNN+LSTM, ATMGNN), training lasts for a maximum of 300 epochs with an early stopping patience of 50 epochs, and early stopping is only enabled from the 100th epoch onward. Models are trained with the Adam optimizer (\(lr=10^{-3}\)) and batch size 128. For the neighborhood aggregation layers of the graph models, batch normalization is applied to the output of all layers, with dropout applied to half of the nodes. The LSTM component of the MPNN+LSTM model is implemented with a hidden state size of 64.
\begin{table}
\begin{tabular}{|l|r r r|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Linear decay slope} \\ \cline{2-4} & MAE & RMSE & R\({}^{2}\) \\ \hline AVG & 2.551 & 3.069 & -0.090 \\ AVG\_WINDOW & 5.451 & 7.265 & -0.081 \\ LAST\_DAY & 4.194 & 5.743 & -0.067 \\ \hline LIN\_REG & 8.457 & 14.739 & -0.112 \\ GP\_REG & 2.491 & 3.539 & -0.028 \\ RAND\_FOREST & 3.761 & 5.504 & -0.018 \\ XGBOOST & 3.539 & 5.344 & -0.018 \\ \hline PROPHET & 9.882 & _-17.263_ & _0.025_ \\ ARIMA & 6.474 & -8.078 & -0.015 \\ LSTM & _-3.591_ & _-3.690_ & _0.107_ \\ \hline
**MPNN** & 4.651 & 6.339 & -0.044 \\
**MGNN** & 4.579 & 6.223 & -0.042 \\
**MPNN+LSTM** & **1.134** & **1.511** & **-0.001** \\
**ATMGNN** & 1.212 & 1.655 & -0.002 \\ \hline \end{tabular}
\end{table}
Table 2: New Zealand: The slope of the linear fit to the performance decay graph of each model for all three metrics. For MAE and RMSE, lower slope values indicate better decay performance; higher R\({}^{2}\) slope values indicate better decay performance. The best-performing model is highlighted in **bold**, special exceptions are in _italics_.
The multiresolution component of the ATMGNN is configured for two additional coarsening layers with 10- and 5-node clusters, respectively; self-attention is configured with a single head for all regions. The models with the lowest validation loss at each prediction day shift are saved as parameter checkpoints for further evaluation, and validation information is output for further examination. All models are implemented with PyTorch [25] and PyTorch Geometric [26].
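A minimal sketch of the training loop implied by this setup follows; `model`, `train_one_epoch`, and `validation_loss` are hypothetical stand-ins for the actual implementation.

```python
import torch

# Adam at lr 1e-3, up to 300 epochs, early stopping patience 50
# enabled only from epoch 100 onward, as described above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
best_val, patience, wait = float("inf"), 50, 0
for epoch in range(300):
    train_one_epoch(model, optimizer)       # one pass over batches of 128
    val = validation_loss(model)
    if val < best_val:
        best_val, wait = val, 0
        torch.save(model.state_dict(), "best.pt")   # checkpoint best model
    elif epoch >= 100:                      # early stopping enabled late
        wait += 1
        if wait > patience:
            break
```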
### Evaluation metrics
We measured the performance of all models with three evaluation metrics: Mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R\({}^{2}\)-score).
Mean Absolute Error (MAE): The metric measures the average of the absolute differences between the predicted case count and the actual case count. MAE cannot indicate the degree of under-/over-prediction since all deviations have equal weight. The formula to calculate MAE is as follows:
\[\text{MAE}=\frac{1}{N}\sum_{i=1}^{N}|\hat{y}_{i}-y_{i}|\]
where \(\hat{y}_{i}\) represents the forecast case count, \(y_{i}\) represents the ground truth case count, \(N\) represents the total number of days in the case count time series, and \(i\) represents the case count statistic of a single day in the time series.
Root Mean Squared Error (RMSE): The metric measures the square root of the average squared deviation between the forecast and the actual case count. RMSE is a good measure of prediction accuracy, mostly used when the error is highly nonlinear. The formula to calculate RMSE is as follows:
\[\text{RMSE}=\sqrt{\sum_{i=1}^{N}\frac{(\hat{y}_{i}-y_{i})^{2}}{N}}\]
Coefficient of Determination (R\({}^{2}\)): The metric represents the proportion of variance of the case count that has been explained by the independent variables in the model. R\({}^{2}\) indicates goodness of fit and measures how well unseen samples are likely to be predicted by the model (through the proportion of explained variance). The formula to calculate the R\({}^{2}\)-score is as follows:
\[\text{R}^{2}=1-\frac{\sum_{i=1}^{N}(\hat{y}_{i}-y_{i})^{2}}{\sum_{i=1}^{N}(\bar{y}-y_{i})^{2}}\]

where \(\bar{y}=\frac{1}{N}\sum_{i=1}^{N}y_{i}\), or the average of the ground truth time series. The best possible score is 1.0, and the score can be negative (i.e., the model can fit the case count time series arbitrarily badly). A constant model that always predicts the average number of cases over the entire period, with no regard to the inputs, would get an R\({}^{2}\) score of 0.0.
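A short sketch computing these three metrics, matching the formulas above:

```python
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_pred - y_true))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def r2(y_true, y_pred):
    ss_res = np.sum((y_pred - y_true) ** 2)          # residual sum of squares
    ss_tot = np.sum((np.mean(y_true) - y_true) ** 2) # total variance
    return 1.0 - ss_res / ss_tot
```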
\begin{table}
\begin{tabular}{|l|r r r|r r r|r r r|r r r|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Next 3 Days} & \multicolumn{3}{c|}{Next 7 Days} & \multicolumn{3}{c|}{Next 14 Days} & \multicolumn{3}{c|}{Next 21 Days} \\ \cline{2-13} & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) \\ \hline AVG & 8.15 & 11.39 & -0.14 & 8.50 & 11.77 & -0.35 & 8.97 & 12.14 & -0.80 & 9.32 & 12.60 & -1.39 \\ AVG\_WINDOW & 6.33 & 8.79 & 0.40 & 7.94 & 10.87 & -0.07 & 11.04 & 14.91 & -1.52 & 14.17 & 18.77 & -4.06 \\ LAST\_DAY & 7.12 & 10.45 & 0.19 & 7.33 & 10.49 & 0.19 & 9.83 & 14.13 & -0.90 & 12.76 & 17.85 & -3.01 \\ \hline LIN\_REG & 13.13 & 17.77 & -0.32 & 17.19 & 22.95 & -1.32 & 26.28 & 34.53 & -6.34 & 37.11 & 47.94 & -18.85 \\ GP\_REG & 14.95 & 21.48 & -0.93 & 14.13 & 20.65 & -0.88 & 12.00 & 17.50 & -0.89 & 10.04 & 14.72 & -0.87 \\ RAND\_FOREST & 6.64 & 9.87 & 0.59 & 7.16 & 10.17 & 0.54 & 10.01 & 14.08 & -0.22 & 13.03 & 17.88 & -1.76 \\ XGBoOST & 7.32 & 10.93 & 0.50 & 7.46 & 10.82 & 0.48 & 10.00 & 14.50 & -0.30 & 12.82 & 18.06 & -1.82 \\ \hline PROPHET & 10.79 & 20.78 & -0.06 & 14.45 & 29.28 & -1.29 & 23.43 & 34.29 & -2.99 & 33.59 & 31.72 & -3.66 \\ ARIMA & 8.95 & 20.28 & -0.01 & 9.51 & 13.77 & 0.49 & 9.63 & 13.02 & 0.43 & 9.77 & 11.62 & 0.37 \\ LSTM & 8.61 & 11.88 & -0.36 & 8.20 & 11.24 & 0.12 & 7.86 & 10.66 & 0.47 & 7.09 & 9.95 & 0.65 \\ \hline
**MPNN** & 6.51 & 9.41 & 0.55 & 7.54 & 10.63 & 0.39 & 10.12 & 14.14 & -0.42 & 12.84 & 17.77 & -1.82 \\
**MGNN** & 6.87 & 9.48 & 0.55 & 8.18 & 10.93 & 0.35 & 11.07 & 14.67 & -0.60 & 14.14 & 18.67 & -2.16 \\
**MPNN+LSTM** & 6.73 & 9.55 & 0.57 & 7.08 & 10.13 & 0.57 & 7.68 & 10.89 & 0.57 & 7.95 & 11.36 & 0.58 \\
**ATMGNN** & **6.24** & **8.82** & **0.63** & **6.44** & **9.04** & **0.66** & **6.80** & **9.57** & **0.68** & **6.70** & **9.53** & **0.73** \\ \hline \end{tabular}
\end{table}
Table 3: England: Performance of all experimental models evaluated based on the metrics specified in Section 2.5
### Observations
Performance measurement: Table 1 details the performance of all experimental models in the benchmark study. Across the board, MPNN+LSTM is the highest-performing model, with relatively low mean error and root mean square error, alongside accurate trend prediction with an \(R^{2}\)-score consistently over 0.8. Other baseline methods performed inconsistently across different time ranges, with massive fluctuations in heuristic statistical methods (AVG, AVG_WINDOW, and especially LAST_DAY), owing to these baselines simply forecasting based on rudimentary statistics of the data. However, in near-future prediction windows (1-7 days), simple statistical methods can be competitive compared to more complex models; nevertheless, our goal is to eliminate or mitigate performance decay during long-term predictions. The class of traditional machine learning models performed reasonably well, with tree-based methods RAND_FOREST and XGBOOST outperforming simple statistical methods and parameterized models aside from LSTM. In New Zealand, from the 14-day prediction length onwards, graph-based temporal models on average see 20.31% and 25.37% relative reductions in MAE and RMSE, respectively, while the correlation metric R\({}^{2}\) improves by 9.43% in relative terms. Similar results are obtained from cross-examining the Italy and England COVID datasets, with graph-based temporal models (e.g., MPNN+LSTM, ATMGNN) generally outperforming other baseline models and LSTM models coming second on all metrics. The exception to these performance patterns across countries is the overall performance of traditional machine learning models, which perform inconsistently across different countries. France's and Spain's tables are included in the Appendix.
Performance decay over long forecasting windows: Across all models, the AVG model performed the worst when it comes to performance decay relative to the length of forecasting windows, concerning both absolute error and correlation metrics. On the other hand, the two other heuristic statistical methods, AVG_WINDOW and LAST_DAY, outperformed regression-based methods ARIMA and PROPHET with respect to the rate of decay and error increase over longer forecasting windows (Figure 5 and Table 2). We observed anomalies in the results of our LSTM implementation: both the model's absolute error and correlation score \(R^{2}\) do not decay over time, but rather improve over the long run (Figure 5); this phenomenon may be explained by LSTM-based models performing better in certain long time ranges, or the models having certain "sweet spots". Traditional machine learning models that implement tree-based learning, including RAND_FOREST and XGBOOST, performed reasonably well in terms of performance decay, with lower error increase rates and R\({}^{2}\) decrease rates than most other models aside from graph-based temporal models and ARIMA. Graph- and temporal-hybrid models MPNN+LSTM and ATMGNN maintained a stable performance decay profile with a low decay rate on both error and correlation compared to every other model (aside from the LSTM exception), alongside lower values in both metrics across the board. Both temporal graph models started with relatively high performance in terms of all metrics compared to other baselines and mostly maintained the same performance when predicting longer time ranges with minimal decay, resulting in them outperforming all other baseline models. We specifically demonstrate the relative metric and decay stability of the two graph-based temporal models by averaging over several runs and computing the deviation, as in Figure 6, showing the performance and decay similarities between these models over time.
\begin{table}
\begin{tabular}{|l|c c c|c c c|c c c|c c c|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Next 3 Days} & \multicolumn{3}{c|}{Next 7 Days} & \multicolumn{3}{c|}{Next 14 Days} & \multicolumn{3}{c|}{Next 21 Days} \\ \cline{2-13} & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) \\ \hline AVG & 21.13 & 42.80 & 0.53 & 20.31 & 41.88 & 0.49 & 20.28 & 43.23 & 0.43 & **19.19** & 41.35 & 0.39 \\ AVG\_WINDOW & **17.69** & **33.48** & **0.66** & 19.75 & **37.30** & 0.53 & 23.75 & 44.90 & 0.30 & 26.88 & 50.00 & -0.01 \\ LAST\_DAY & 21.21 & 41.99 & 0.45 & 21.83 & 43.36 & 0.37 & 25.45 & 49.97 & 0.13 & 27.73 & 50.85 & 0.03 \\ \hline LIN\_REG & 28.15 & 54.53 & 0.29 & 35.35 & 69.38 & -0.29 & 50.02 & 99.11 & -1.74 & 65.12 & 132.16 & -4.61 \\ GP\_REG & 37.41 & 74.72 & -0.33 & 35.72 & 70.72 & -0.34 & 33.12 & 68.38 & -0.31 & 30.13 & 63.40 & -0.29 \\ RAND\_FOREST & 19.42 & 40.80 & 0.60 & 20.41 & 42.38 & 0.52 & 24.80 & 50.01 & 0.30 & 27.41 & 52.76 & 0.11 \\ XGBoOST & 21.79 & 47.09 & 0.47 & 22.49 & 48.63 & 0.37 & 26.41 & 55.60 & 0.14 & 28.64 & 57.06 & -0.05 \\ \hline PROPHET & 23.03 & 55.65 & 0.46 & 29.18 & 71.44 & 0.15 & 40.95 & 92.95 & -0.76 & 51.92 & 100.06 & -1.64 \\ ARIMA & 22.90 & 76.56 & -0.03 & 27.01 & 68.94 & 0.21 & 28.62 & 56.27 & 0.36 & 25.57 & 45.51 & 0.45 \\ LSTM & 20.98 & 42.01 & 0.51 & 19.80 & 40.17 & 0.59 & **19.56** & 39.91 & 0.60 & 20.18 & 39.44 & 0.66 \\ \hline
**MPNN** & 18.09 & 36.67 & 0.64 & 21.45 & 43.56 & 0.49 & 26.07 & 51.72 & 0.21 & 28.94 & 59.26 & -0.16 \\
**MGNN** & 19.14 & 37.17 & 0.64 & 22.69 & 42.99 & 0.51 & 27.33 & 51.73 & 0.20 & 29.87 & 58.18 & -0.14 \\
**MPNN+LSTM** & 18.50 & 38.43 & 0.60 & **19.48** & 39.98 & 0.59 & 19.72 & 41.89 & 0.56 & 19.84 & 41.22 & 0.58 \\
**ATMGNN** & 18.05 & 36.94 & 0.65 & 19.63 & 39.06 & **0.63** & 19.80 & **39.49** & **0.63** & 18.55 & **37.07** & **0.67** \\ \hline \end{tabular}
\end{table}
Table 4: Italy: Performance of all experimental models evaluated based on the metrics specified in Section 2.5
Figure 5: Performance decay with respect to MAE and R\({}^{2}\) metrics. Models with performance worse than the defined y-axis range are excluded.
Figure 6: Average error and R\({}^{2}\)-score decay of two temporal graph network models over multiple runs.
Out-of-distribution forecasting: We examined the out-of-distribution performance of two of our best-performing models, MPNN+LSTM and ATMGNN. The evaluation is done on the number of new cases between September 4th, 2022 and November 4th, 2022, with no overlap between the evaluation set and the train/validation sets. All models are evaluated as autoregressive models, meaning that for the 30-day prediction window the models use the prediction output of the previous day as an input feature for predicting the current day. As demonstrated in Figure 7, ATMGNN outperformed MPNN+LSTM in terms of prediction error and emulating the spiking dynamics of the number of new cases. The predictions retrieved from the outputs of ATMGNN showed that the model can fairly precisely simulate the case spiking dynamics even when tested on a dataset completely separate from the training dataset, demonstrating good generalization performance. All other baseline models were excluded from this evaluation after extensive testing showed that their performance was not remotely comparable to that of the two models demonstrated above. Further testing with different information windows, with ground-truth case information fed into the models from between 3 and 9 days before the target day, showed that both models maintained similar forecasting patterns, with ATMGNN conforming better to the ground truth, and that both models maintained relatively stable predictions given different levels of case information.
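A minimal sketch of the autoregressive rollout follows; the interface of `model` and the shape of `window` (e.g., the last 7 observed days) are illustrative assumptions.

```python
import torch

preds = []
history = window.clone()                 # last observed days of case counts
with torch.no_grad():
    for _ in range(30):                  # 30-day autoregressive rollout
        next_day = model(history)        # predict one day ahead
        preds.append(next_day)
        # drop the oldest day and append the prediction as the newest input
        history = torch.cat([history[1:], next_day.unsqueeze(0)], dim=0)
```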
Economic graph features: To further analyze the role of auxiliary features in graph-based models, we integrated economic features into the two best-performing graph-based forecasting models. Details on the source and construction of these features within the dataset are discussed in Section 5.3. As shown in Figure 6 and Table 5, the two models that include economic features (ECON models) slightly underperformed the baseline graph-based temporal models, even with all economic features normalized. The decrease in performance is likely due to these economic features remaining constant throughout all prediction periods, indicating that the relative economic allocation between different district health boards does not necessarily add helpful information. Because of either low baseline performance or incompatibility (the economic zones and DHBs do not strictly overlap, as per Figure 1), these economic features were not added to the other baseline models. Our sensitivity analysis, which removed DHBs with inconsistent economic zones, suggested a small improvement in model performance, as expected (see the Appendix).
Demographic graph features: We additionally tested different modalities of the original graph models, particularly enhancing the input to the models with separate age group data from the original data source, as well as augmenting the output of the models to predict the number of new cases for each age group of each district health board in New Zealand. Through testing, however, outputting each age group separately, adding age group data to the models' inputs, and adding custom demographic weighting to each age group during training did not improve the models' performance.
\begin{table}
\begin{tabular}{|l|c c c|c c c|c c c|c c c|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Next 3 Days} & \multicolumn{3}{c|}{Next 7 Days} & \multicolumn{3}{c|}{Next 14 Days} & \multicolumn{3}{c|}{Next 21 Days} \\ \cline{2-13} & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) & MAE & RMSE & R\({}^{2}\) \\ \hline MPNN+LSTM & 75.25 & 104.64 & 0.86 & 85.14 & 117.92 & 0.84 & 88.28 & 121.71 & 0.85 & 99.85 & 137.74 & 0.83 \\ ATMGNN & 77.49 & 106.96 & 0.86 & 86.85 & 119.68 & 0.84 & 90.43 & 124.89 & 0.84 & 101.87 & 140.33 & 0.82 \\ \hline
**MPNN+LSTM (ECON)** & 78.30 & 108.11 & 0.85 & 86.63 & 119.16 & **0.85** & 93.61 & 129.13 & 0.84 & 107.37 & 147.57 & 0.80 \\
**ATMGNN (ECON)** & 78.61 & 108.54 & 0.86 & 87.30 & 120.23 & **0.85** & 94.64 & 130.53 & 0.83 & 109.36 & 150.02 & 0.80 \\ \hline \end{tabular}
\end{table}
Table 5: New Zealand: Performance measurements of graph-based models with and without economic features
Figure 7: Sample out-of-distribution new case predictions for two graph-based models.
For reference, the results of testing demographic-enhanced models are presented in the Appendix.
## 3 Discussion
Interpretation of the main results: We provided a comprehensive evaluation of four classes of COVID-19 forecasting models, with a detailed analysis of the models' performance, decay over time, out-of-distribution forecasting, and the addition of economic features. Generally, graph neural network-based models, specifically the temporal variants, outperformed every other baseline model in terms of performance metrics and performance decay over time. This trend is not shared by non-temporal graph-based models, indicating the importance of temporal mechanisms in forecasting models, whether attention-based or recurrent network-based (i.e., LSTM). Additionally, at far less computational cost, traditional machine learning models outperformed statistical and parameterized regression time-series forecasting models, even remaining competitive with the neural network-based LSTM model. The results suggest that the spatiotemporal approach to modeling the spread of COVID-19 based on the number of new cases is effective compared to other traditional modeling methods. Intuitively, graph-based models can accurately simulate the change in the number of new cases in one region when given that region's traffic connectivity with its neighbors. Since the spread of COVID-19 in every country, not only in New Zealand, is movement-based in nature, by modeling such geographical connectivity we can uncover latent information by accounting for human contacts with graph-based models. Moreover, the out-of-distribution performance of multiresolution temporal graph models also demonstrates the utility of modeling COVID-19 forecasting as a hierarchical system, with spread localized in adjacent regions that have significant traffic volume.
Strengths: Graph-based temporal models can capture the hidden correlations between different district health board regions when forecasting COVID-19. The approach is straightforward, with versatility in modeling various connected systems and structures, not only in the task of forecasting disease spread trajectories but also, for instance, in chemical molecule construction and discrimination [27]. As indicated by the performance metrics, temporal models perform well compared to other traditional models, predicting with relatively low errors and remaining accurate even when predicting further into the future. The multiresolution setting of graph-based models can also model the disease's case dynamics to a reasonable degree of accuracy when facing completely new data separate from the training set. Additionally, the models' current implementation relies on comparatively low computational resources, with training and inference running solely on an online instance of Google Colab. Our COVID-19 data were captured during the Omicron waves in New Zealand with a highly vaccinated population. Our data also reflect a unique natural experiment: the New Zealand border was reopened in May 2022 after being strictly closed for two years (since 25 March 2020).
Limitations: While certain metrics of the proposed models are satisfactory, we have identified several weaknesses of the models that were tested. Graph-based models, while powerful, still require a certain amount of computational resources and adequate time for training. Data inputs also have to be well-structured and carefully preprocessed to suit the formatting of the models, though this is less of a concern given the availability and accuracy of case datasets such as the New Zealand COVID-19 public dataset. Furthermore, the data features could be enriched with more detailed movement data between regions, traffic density information for all travel modalities (e.g., land, sea, or air travel), and local movement details within each region. Most importantly, graph-based networks, and deep learning models in general, are black boxes, offering little insight into the precise mechanisms of forecasting and modeling disease dynamics for the sake of studying the exact nature of epidemic spread.
Policy implications: The out-of-distribution performance and a prediction range of up to 30 days show that graph-based multiresolution temporal models can effectively aid public health policy. With appropriate data processing and extension, the models can be adapted to predicting new cases at coarser temporal resolutions (i.e., weeks or months), potentially becoming a useful tool for epidemic predictive modeling and for simulating local or country-level intervention measures.
Future research: Future directions of research are plentiful, from additional data and enrichment features readily incorporable into the model as node features (e.g., more fine-grained socioeconomic features) or as edge features (e.g., mobility data), to interpretation methods designed for graph neural networks [28] for the sake of understanding the inner workings of such prediction models. Another interesting direction is to comparatively examine the spatial modeling of deep learning models against other dynamical forecasting models, and the influence of each disease feature/parameter on the final prediction output of each type of model.
Conclusions: Our study suggests that graph neural network-based models outperformed every other baseline model in terms of performance metrics and performance decay over time. Furthermore, our graph neural network-based models can effectively predict the number of COVID-19 cases up to 30 days ahead, and can therefore assist with public health policy planning to control COVID-19 outbreaks. Finally, our results, in terms of model structures and frameworks, can be generalized to other countries with similar settings.
## 4 Related works
### Linear and statistical forecasting models
Various traditional statistical and linear models have been employed to forecast the spread of COVID-19 cases. Among these traditional models are the Susceptible-Exposed-Infectious-Recovered (SEIR) models, where the dynamics of disease spread are modeled as a function of various population compartments and the interactions between them [1, 2, 3]. While mathematical models such as SEIR can estimate the effect of control measures even before the start of a pandemic, these models cannot make accurate predictions due to a lack of data and their inherent assumptions restricting the class of learnable disease functions [4, 29].
Some other prevalent classes of forecasting methods are the Autoregressive Integrated Moving Average models (ARIMA) and the time-series prediction Prophet model. ARIMA models are well-known for being able to forecast future points in time series data, especially when the mean of the data is non-stationary; evidently, this family of models has been applied numerous times to forecast COVID-19 cases in several countries [5, 6, 30]. On the other hand, the Meta-developed Prophet model and its variants have also been utilized to tackle the task of predicting the progression of COVID-19 cases with some success, namely in forecasting the number of cases in India [31] and generally for any country using day level case information [7, 32]. Several combinations of the traditional and statistical linear models have been examined [33], effectively achieving better performance using compositions of successful statistical time-series models.
While statistical models are proficient at capturing certain COVID-19 case dynamics and are similarly motivated by repeated case patterns shown in pandemic data, these models are linear in nature and incapable of modeling spatial information as well as higher-level disease functions. Among the above models, ARIMA-based models have been shown to be insufficient even in their specialty of modeling time-series data, given the complexities of such data in certain dimensions, while also underperforming compared to simple empirical methods [34, 35].
### Neural networks-based time-series forecasting models
Neural networks are a type of machine learning method capable of learning arbitrary non-linear functions underlying the data, making them one of the most powerful general learning algorithms for a wide range of tasks [36]. Vanilla neural networks without any additional components have been tested as predictors of COVID-19 outbreaks across several countries, owing to their high-capacity modeling of disease patterns and functions when certain assumptions (e.g., the disease incubation period) are encoded [37].
Even among powerful Artificial Neural Networks (ANNs), there arises a need to explicitly model the temporal nature of certain types of data (i.e., the time-dependent trends of disease outbreaks) [38]. The most common methods for modeling time-series data, particularly pandemic forecasting input data, are Recurrent Neural Network (RNN) models. RNNs are widely adopted in areas with sequential data; intuitively, RNNs are capable of "memorizing" the nature of patterns within time-series data. Most popular within the class of RNNs are Long-Short Term Memory (LSTM) models, a variant of the traditional RNNs that is capable of modeling long-term dependencies, allowing the models to learn information and patterns from distant past [39]. LSTMs and their variants have been used extensively to forecast COVID-19 case progression, notably in Canada, where the model was tested and modified to accommodate disease-specific information, accurately predicting an exponential surge in the number of cases [40]. LSTM-based models were also used to simulate and forecast the COVID-19 pandemic in several other countries, either independently or in conjunction with various distinct statistical models incorporating spatial features [41, 42, 43].
While time-series models are effective at modeling the evolution of the disease over time in a single specific geographical region, LSTM/RNN-based models inherently cannot incorporate spatial features without employing heuristics that either explicitly change the architecture or interpolate spatial information into the input data itself [44]. To address the shifting topology of the natural geographical map and represent the causal relationship between pandemic regions, a more graphical and hierarchical approach is needed.
### Temporal graph neural networks forecasting models
Graph neural networks (GNNs) utilizing various ways of generalizing the concept of convolution to graphs [45, 46, 47] have been widely applied to many learning tasks, including modeling physical systems [48], finding molecular representations to estimate quantum chemical computation [49, 50, 51, 52], and protein interface prediction [53]. One of the most popular types of GNNs is message-passing neural networks (MPNNs) [23], which are constructed based on the message-passing scheme in which each node propagates and aggregates information, encoded by vectorized messages, to and from its local neighborhood. In order to capture the dynamic nature of features or connectivity evolving over time, temporal graph neural networks (TGNNs) have been proposed by [14, 54] as a generic, efficient deep learning framework that combines graph encoding (e.g., MPNNs) with time-series encoding architectures (e.g., LSTMs, Transformers, etc.). Applications of TGNNs include traffic prediction [55, 56, 57] and learning on brain networks [57].
### Temporal architectures
#### 5.1.1 Long Short-Term Memory
Long Short-Term Memory (LSTM), first proposed by [58], is a special kind of Recurrent Neural Network designed for learning from sequential and time-series data. LSTM has been widely applied in many current state-of-the-art deep learning models in various areas of machine learning, including natural language processing, speech recognition, and computer vision. One successful variant of the LSTM is the Gated Recurrent Unit (GRU), introduced by [59] as a simplification with lower computational cost in the context of sequential modeling. The forward pass of an LSTM cell with a forget gate can be described as follows:
\[\begin{split}
f_{t}&=\sigma_{\text{g}}(W_{f}x_{t}+U_{f}h_{t-1}+b_{f})\in(0,1)^{h}\qquad&&\text{(forget gate's activation vector)}\\
i_{t}&=\sigma_{\text{g}}(W_{i}x_{t}+U_{i}h_{t-1}+b_{i})\in(0,1)^{h}\qquad&&\text{(input/update gate's activation vector)}\\
o_{t}&=\sigma_{\text{g}}(W_{o}x_{t}+U_{o}h_{t-1}+b_{o})\in(0,1)^{h}\qquad&&\text{(output gate's activation vector)}\\
\tilde{c}_{t}&=\sigma_{\text{c}}(W_{c}x_{t}+U_{c}h_{t-1}+b_{c})\in(-1,1)^{h}\qquad&&\text{(cell input activation vector)}\\
c_{t}&=f_{t}\odot c_{t-1}+i_{t}\odot\tilde{c}_{t}\in\mathbb{R}^{h}\qquad&&\text{(cell state vector)}\\
h_{t}&=o_{t}\odot\sigma_{\text{h}}(c_{t})\in(-1,1)^{h}\qquad&&\text{(hidden state/output vector of the LSTM unit)}
\end{split}\]
where the initial values are \(c_{0}=0\) and \(h_{0}=0\); the subscript \(t\) indexes the time step; \(d\) and \(h\) refer to the number of input features and the number of hidden units, respectively; \(x_{t}\in\mathbb{R}^{d}\) is the input vector to the LSTM unit; \(W\in\mathbb{R}^{h\times d}\), \(U\in\mathbb{R}^{h\times h}\), and \(b\in\mathbb{R}^{h}\) are learnable weight matrices and bias vectors; the operator \(\odot\) denotes the element-wise (i.e., Hadamard) product; \(\sigma_{\text{g}}(\cdot)\) is the sigmoid function; \(\sigma_{\text{c}}(\cdot)\) and \(\sigma_{\text{h}}(\cdot)\) both denote the hyperbolic tangent function.
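For concreteness, the forward pass above can be written in a few lines of NumPy; the following is a minimal sketch with illustrative dimensions (e.g., \(d=14\) days of case counts and \(h=32\) hidden units), not a tuned implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One forward step of an LSTM cell with a forget gate."""
    W_f, U_f, b_f = params["f"]
    W_i, U_i, b_i = params["i"]
    W_o, U_o, b_o = params["o"]
    W_c, U_c, b_c = params["c"]
    f_t = sigmoid(W_f @ x_t + U_f @ h_prev + b_f)       # forget gate
    i_t = sigmoid(W_i @ x_t + U_i @ h_prev + b_i)       # input/update gate
    o_t = sigmoid(W_o @ x_t + U_o @ h_prev + b_o)       # output gate
    c_tilde = np.tanh(W_c @ x_t + U_c @ h_prev + b_c)   # cell input
    c_t = f_t * c_prev + i_t * c_tilde                  # new cell state
    h_t = o_t * np.tanh(c_t)                            # new hidden state
    return h_t, c_t

d, h = 14, 32  # e.g., 14 days of case counts, 32 hidden units
rng = np.random.default_rng(0)
params = {k: (rng.normal(size=(h, d)) * 0.1,
              rng.normal(size=(h, h)) * 0.1,
              np.zeros(h)) for k in "fioc"}
h_t, c_t = lstm_step(rng.normal(size=d), np.zeros(h), np.zeros(h), params)
```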
#### 5.1.2 Transformers
Recently, Transformers have achieved superior performance in various deep learning tasks [60, 61, 62]. Among the many advantages of Transformers, the ability to capture long-range dependencies and interactions is particularly important for time-series modeling. The backbone of Transformers is the self-attention mechanism [62], also called scaled dot-product attention or softmax attention. Self-attention transforms the input sequence \(X=[x_{1},..,x_{L}]^{T}\in\mathbb{R}^{L\times d}\) of length \(L\) into the output sequence \(H=[h_{1},..,h_{L}]^{T}\in\mathbb{R}^{L\times h}\) in the following two steps:
Figure 8: The message-passing mechanism on an example district health board graph-based representation
1. The input sequence \(X\) is projected into the query matrix \(Q=[q_{1},..,q_{L}]^{T}\), the key matrix \(K=[k_{1},..,k_{L}]^{T}\) and the value matrix \(V=[v_{1},..,v_{L}]^{T}\) via three linear transformations: \[Q=XW_{Q}^{T},\quad K=XW_{K}^{T},\quad V=XW_{V}^{T},\] where \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{h\times d}\) are learnable weight matrices.
2. The output sequence \(H\) is then computed as follows: \[H=\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{h}}\right)V=AV,\] (1) where the softmax function is applied to each row of the matrix \(QK^{T}/\sqrt{h}\), and \(A\in\mathbb{R}^{L\times L}\) is the attention matrix of attention scores \(a_{ij}\). Equation 1 can be rewritten as: \[h_{i}=\sum_{j=1}^{L}\text{softmax}\left(\frac{q_{i}^{T}k_{j}}{\sqrt{h}}\right)v_{j}=\sum_{j=1}^{L}a_{ij}v_{j}.\]
Each output sequence \(H\) forms an attention head. Let \(n\) be the number of heads and \(W_{O}\in\mathbb{R}^{nh\times nh}\) be the projection matrix for the output. In multi-head attention, multiple heads are concatenated to compute the final output defined as follows:
\[\text{Multihead}(\{Q,K,V\}_{i=1}^{n})=\text{Concat}(H_{1},..,H_{n})W_{O}.\]
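A minimal NumPy sketch of single-head self-attention (Equation 1) and the multi-head concatenation reads as follows; all weight matrices are randomly initialized purely for illustration:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_Q, W_K, W_V):
    """Scaled dot-product self-attention for one head (Equation 1)."""
    Q, K, V = X @ W_Q.T, X @ W_K.T, X @ W_V.T    # each of shape (L, h)
    h = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(h), axis=-1)   # (L, L) attention matrix
    return A @ V                                 # (L, h) output sequence H

def multi_head(X, heads, W_O):
    """Concatenate n attention heads and project (multi-head attention)."""
    H = np.concatenate([self_attention(X, *w) for w in heads], axis=-1)
    return H @ W_O

L, d, h, n = 10, 16, 8, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(L, d))
heads = [tuple(rng.normal(size=(h, d)) * 0.1 for _ in range(3)) for _ in range(n)]
W_O = rng.normal(size=(n * h, n * h)) * 0.1
out = multi_head(X, heads, W_O)                  # shape (L, n*h)
```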
### Graph Neural Networks
#### 5.2.1 Graph construction
We process the input disease data as graphs, a form of non-Euclidean irregular data that is _permutation invariant_ in nature (i.e., changing the ordering of the nodes in a graph does not change the data that the graph represents). To represent the New Zealand pandemic data as graphs, the entire country is formatted as a single graph \(G=(V,E)\), where \(n=|V|\) is the number of nodes, and each node represents a single district health board in New Zealand. We create a series of graphs \(G^{(1)},G^{(2)},...,G^{(T)}\), one for each day \(t\) within the available case data for every district health board. The topology (i.e., connecting edges and adjacency matrix) of the graphs remains constant over all time steps. The adjacency matrix \(\mathbf{A}\) represents the connections between nodes in the disease graph; we constructed the connections based on geographical adjacency between any two district health boards. For any two district health boards \(u\) and \(v\), the edge weight is \(A_{u,v}=2\) if the two boards share any part of their border, while each board is additionally connected to itself by a unit-weight self-loop (\(A_{u,u}=1\)). For each node or district health board \(u\), we denote the features, namely the number of cases in the last \(d\) days in the region, by the vector \(\mathbf{x}_{u}^{(t)}=(c_{u}^{(t-d+1)},...,c_{u}^{(t)})^{\top}\in\mathbb{R}^{d}\). The number of cases over multiple previous days is used to account for irregular case reporting and the length of the incubation period. A minimal construction of this graph representation is sketched below.
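The sketch below builds this representation for a hypothetical four-board mini-example; the board list, border pairs, and Poisson-simulated case counts are illustrative stand-ins for the real data:

```python
import numpy as np

# Hypothetical four-board mini-example; the real graph uses all NZ district
# health boards and their actual shared borders.
boards = ["Northland", "Auckland", "Waikato", "Lakes"]
borders = [("Northland", "Auckland"), ("Auckland", "Waikato"), ("Waikato", "Lakes")]
idx = {b: i for i, b in enumerate(boards)}

n = len(boards)
A = np.zeros((n, n))
np.fill_diagonal(A, 1.0)                 # unit-weight self-loops
for u, v in borders:                     # weight 2 for shared borders
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 2.0

def node_features(cases, t, d):
    """x_u^(t): case counts of the last d days for every board.
    `cases` is an (n, T) array of daily counts; returns an (n, d) matrix."""
    return cases[:, t - d + 1 : t + 1]

T, d = 60, 14
cases = np.random.default_rng(0).poisson(20.0, size=(n, T))
X_t = node_features(cases, t=30, d=d)    # feature matrix of graph G^(30)
```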
#### 5.2.2 Message-passing neural networks
We model the spatial and geographical spread of COVID-19 in New Zealand using a well-known family of GNNs known as message-passing neural networks (MPNNs) [23]. Vector messages are exchanged between nodes and updated using neural networks; intuitively, the model takes into account the interaction between neighbors in the network to model the spread of the disease. District health boards with shared borders are more connected and see more traffic between them compared to two boards in remote regions. The goal of the model is to generate a vectorized representation for each node in the disease graph.
With hidden embedding \(h_{u}^{(k)}\) representing each node/district health board \(u\in V\), we define the GNN message-passing mechanism based on the Graph Convolutional Network [63] as
\[h_{u}^{(k)}=\sigma\left(W^{(k)}\sum_{v\in\mathcal{N}(u)\cup\{u\}}\frac{h_{v}^{(k-1)}}{\sqrt{|\mathcal{N}(u)|\,|\mathcal{N}(v)|}}\right) \tag{2}\]
where \(W^{(k)}\) is the trainable parameter matrix of layer \(k\), and \(\sigma\) is an element-wise nonlinearity (e.g., the ReLU or tanh function). \(\mathcal{N}(u)\) denotes the set of nodes neighboring node \(u\); in this case, the set represents all district health boards adjacent to board \(u\). Note that the term \(\frac{h_{v}^{(k-1)}}{\sqrt{|\mathcal{N}(u)|\,|\mathcal{N}(v)|}}\) represents neighborhood normalization based on the degrees of the nodes (i.e., the number of shared borders), for the purpose of increasing computational stability. The number of message-passing layers \(k\) represents the number of aggregation operations: one layer integrates each node with the information from its adjacent neighbors, while two layers additionally add neighbors two steps away from the target node, etc.
Across multiple layers and to account for the vectorization of the node embeddings, we define the neighborhood aggregation scheme as
\[H^{(k)}=\sigma(\tilde{A}H^{(k-1)}W^{(k)}) \tag{3}\]
where \(H^{(k-1)}\) is the matrix containing the node embeddings generated by the previous layer, \(H^{(k)}=(h_{1}^{(k)},h_{2}^{(k)},...,h_{n}^{(k)})^{\top}\) denotes the matrix arrangement of the node embeddings of all nodes in the graph (with \(H^{(0)}=X\)), and \(\tilde{A}\) denotes the symmetrically degree-normalized adjacency matrix (including self-loops) corresponding to the normalization in Equation 2. Note that Equation 3 is the matrix form of the MPNN described in Equation 2, and thus the two equations are equivalent representations of the same MPNN. For the sake of brevity, the time index is omitted from both equations; the model is in fact applied to each input graph \(G^{(1)},G^{(2)},...,G^{(T)}\) in the time series separately. Since the connectivity and adjacency of the disease graphs are constant over time, the matrix \(\tilde{A}\) is shared across all temporal graphs alongside the weight matrices \(W^{(1)},...,W^{(K)}\) for \(K\) message-passing layers, while the node embeddings \(H^{(0)},...,H^{(K)}\) are unique to each disease-day graph in the time series.
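To make Equation 3 concrete, the following minimal NumPy sketch applies one symmetrically normalized message-passing layer to a toy adjacency matrix (self-loops of weight 1, border edges of weight 2, standing in for the real district health board graph):

```python
import numpy as np

def normalize_adj(A):
    """Symmetric degree normalization D^{-1/2} A D^{-1/2}; self-loops are
    assumed to be included in A already (as in the DHB graph)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(A_norm, H, W, activation=np.tanh):
    """One message-passing layer (Equation 3): H_k = sigma(A_norm @ H @ W)."""
    return activation(A_norm @ H @ W)

rng = np.random.default_rng(0)
n, d, hidden = 4, 14, 32
A = np.eye(n)                       # unit-weight self-loops
A[0, 1] = A[1, 0] = 2.0             # shared-border edges, weight 2
A[1, 2] = A[2, 1] = 2.0
A_norm = normalize_adj(A)
H0 = rng.normal(size=(n, d))        # H^(0) = X: case-count node features
W1 = rng.normal(size=(d, hidden)) * 0.1
H1 = gcn_layer(A_norm, H0, W1)      # node embeddings after one layer
```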
#### 5.2.3 Multiresolution Graph Neural Networks
In the field of graph learning, it is important to build graph neural networks that can capture the multiscale and hierarchical structures of graphs. Multiresolution Graph Neural Networks (MGNN) were originally proposed by [24] as a graph encoder in the context of graph generation via variational autoencoders, and were adopted by [14] in combination with a temporal architecture to learn and predict the dynamics of an epidemic or a pandemic. Instead of a fixed coarse-graining process, MGNN introduces a learnable clustering algorithm that iteratively constructs a hierarchy of coarsened graphs, also called a multiresolution or multiple levels of resolution (see Def. 2):
1. Based on the node embeddings, we cluster a graph into multiple partitions. Each partition is coarsened into a single node, and all the edges connecting two partitions are aggregated into a single edge (see Def. 1). This process results in a smaller coarsened graph.
2. We continue to apply message passing on the coarsened graph to produce its node embeddings, and then cluster it further. At each resolution, all the node embeddings are pooled into a single graph-level vectorized representation, i.e., a latent. The hierarchy of latents allows us to capture both local information (in the lower levels) and global information (in the higher levels) of a graph.
Figure 9: Overview of the proposed Message-Passing Neural Network architecture on the graph representation of New Zealand. Note that dotted green arrows represent the extraction of historical case counts as node features, and dotted orange arrows represent the geospatial location between two regions extracted as edge features.
**Definition 1**: _A \(k\)-cluster partition on a graph \(G=(V,E)\) partitions its set of nodes into \(k\) disjoint sets \(\{V_{1},V_{2},..,V_{k}\}\). A coarsening of \(G\) is a graph \(\tilde{G}=(\tilde{V},\tilde{E})\) of \(k\) nodes in which node \(\tilde{v}_{i}\in\tilde{V}\) corresponds to an induced subgraph of \(G\) on \(V_{i}\). The weighted adjacency matrix \(\tilde{A}\in\mathbb{N}^{k\times k}\) of \(\tilde{G}\) is defined as:_
\[\tilde{A}_{ij}=\begin{cases}\frac{1}{2}\sum_{u,v\in V_{i}}A_{uv},&\text{if }i=j,\\ \sum_{u\in V_{i},v\in V_{j}}A_{uv},&\text{if }i\neq j,\end{cases}\]
_where the diagonal of \(\tilde{A}\) denotes the number of edges inside each cluster, while the off-diagonal denotes the number of edges between two clusters._
**Definition 2**: _An L-level of resolutions, i.e. multiresolution, of a graph \(G\) is a series of L graphs \(\tilde{G}_{1},..,\tilde{G}_{L}\) in which: **(i)**\(\tilde{G}_{L}\) is \(G\) itself; and **(ii)** For \(1\leq\ell\leq L-1\), \(\tilde{G}_{\ell}\) is a coarsening graph of \(\tilde{G}_{\ell+1}\) as defined in Def. 1. The number of nodes in \(\tilde{G}_{\ell}\) is equal to the number of clusters in \(\tilde{G}_{\ell+1}\). The top level coarsening \(\tilde{G}_{1}\) is a graph consisting of a single node._
The key innovation of MGNN is how the model learns to cluster graph \(\tilde{G}_{\ell+1}\) into \(\tilde{G}_{\ell}\) in a data-driven manner. Without loss of generality, we suppose that the number of nodes in \(\tilde{G}_{\ell}\) is \(K\), i.e. \(|\tilde{V}_{\ell}|=K\), meaning that we cluster \(\tilde{G}_{\ell+1}\) into \(K\) partitions. First, we employ a GNN to produce a \(K\)-channel node embedding for each node of \(\tilde{G}_{\ell+1}\). Then, we apply a softmax over the node embedding to compute the probability of assigning each node to one of the \(K\) clusters. However, we want each node to be in a single cluster, i.e. hard clustering; thus we employ the Gumbel-max trick [64, 65, 66] to sample/select the cluster based on the assignment probability while maintaining differentiability for back-propagation. This results in an assignment matrix \(P\in\{0,1\}^{|\tilde{V}_{\ell+1}|\times K}\). The adjacency matrix of \(\tilde{G}_{\ell}\) can be computed as \(\tilde{A}_{\ell}=P^{T}\tilde{A}_{\ell+1}P\). We repeat this clustering process iteratively in order to build multiple resolutions of coarsened graphs.
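The following NumPy sketch illustrates a single coarsening step: a hard cluster assignment \(P\) sampled via the Gumbel-max trick, followed by \(\tilde{A}_{\ell}=P^{T}\tilde{A}_{\ell+1}P\); in actual training, a Gumbel-softmax/straight-through estimator would replace the plain sampling to keep the step differentiable:

```python
import numpy as np

def gumbel_max_assignment(logits, rng):
    """Hard cluster assignment via the Gumbel-max trick: a one-hot matrix P.
    (Training would use a straight-through/Gumbel-softmax estimator; this
    sketch only shows the sampling step itself.)"""
    g = rng.gumbel(size=logits.shape)
    choice = np.argmax(logits + g, axis=1)      # sampled cluster per node
    P = np.zeros_like(logits)
    P[np.arange(len(choice)), choice] = 1.0
    return P

def coarsen(A, P):
    """Adjacency of the coarsened graph: A_l = P^T A_{l+1} P."""
    return P.T @ A @ P

rng = np.random.default_rng(0)
n, K = 6, 2
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric, no self-loops
logits = rng.normal(size=(n, K))                # K-channel node embeddings
P = gumbel_max_assignment(logits, rng)
A_coarse = coarsen(A, P)                        # (K, K) coarsened adjacency
```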
#### 5.2.4 Spatio-temporal graph neural networks
In this section, we build our spatio-temporal GNNs by combining all the previously defined modules. Suppose that we are given historical data of \(T\) timesteps which can be modeled by \(T\) input graphs \(G^{(1)},G^{(2)},..,G^{(T)}\). The simplest combination is MPNN+LSTM, in which we employ an MPNN (see Section 5.2.2) to encode each \(G^{(t)}\) into a graph-level vectorized representation and then feed the resulting sequence into an LSTM backbone (see Section 5.1.1). Furthermore, we want to capture the multiscale information, i.e. local to global, that is essential in modeling long-range spatial and temporal dependencies. Thus, instead of an MPNN, we apply MGNN (see Section 5.2.3) to construct a hierarchy of latents (i.e. each latent is a graph-level representation for one resolution) for each graph \(G^{(t)}\). At the \(t\)-th timestep, a Transformer (see Section 5.1.2) is applied to encode the hierarchy of latents into a single vector that is fed further into a temporal architecture. Finally, another Transformer is used, instead of an LSTM, as the temporal backbone. We call this novel architecture Attention-based Multiresolution Graph Neural Networks or ATMGNN.
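A minimal PyTorch sketch of the simplest combination, MPNN+LSTM, could look as follows; the two-layer GCN encoder, mean pooling, single-layer LSTM, and all dimensions are illustrative choices rather than the exact configuration used in the experiments:

```python
import torch
import torch.nn as nn

class MPNNLSTM(nn.Module):
    """Minimal MPNN+LSTM sketch: a two-layer GCN encodes each daily graph,
    node embeddings are mean-pooled into a graph-level vector, and the
    sequence of vectors is fed to an LSTM that produces the forecast."""

    def __init__(self, in_dim, hid_dim, horizon=1):
        super().__init__()
        self.W1 = nn.Linear(in_dim, hid_dim)
        self.W2 = nn.Linear(hid_dim, hid_dim)
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, horizon)

    def forward(self, A_norm, X_seq):
        # A_norm: (n, n) shared normalized adjacency; X_seq: (T, n, in_dim)
        latents = []
        for X in X_seq:
            H = torch.relu(A_norm @ self.W1(X))    # Equation 3, layer 1
            H = torch.relu(A_norm @ self.W2(H))    # Equation 3, layer 2
            latents.append(H.mean(dim=0))          # graph-level representation
        Z = torch.stack(latents).unsqueeze(0)      # (1, T, hid_dim)
        out, _ = self.lstm(Z)
        return self.head(out[:, -1])               # forecast from last step

model = MPNNLSTM(in_dim=14, hid_dim=32)
A_norm = torch.eye(20)                             # placeholder adjacency
X_seq = torch.randn(7, 20, 14)                     # 7 days, 20 nodes, d=14
pred = model(A_norm, X_seq)                        # shape (1, 1)
```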
### Data preprocessing
Our dataset primarily focuses on COVID-19 in New Zealand. The number of cases in the different district health boards or regions of New Zealand was gathered from official government open data and later reprocessed into a format suitable for input into the forecasting models. The reprocessed data is available on our GitHub page for the project [https://github.com/HySonLab/pandemic_tgnn](https://github.com/HySonLab/pandemic_tgnn).
**New Zealand daily new cases with graphs.** Official data originally obtained is in tabular form with information regarding the sex, age group, district health board location, case status, and travel of each COVID-19 infected patient. All cases are filtered so that only confirmed cases from 2022 are included in the dataset. For each district health board, on each day, all confirmed cases regardless of sex or age group are aggregated and counted toward the daily new case count. From the geographical map of the district health boards (Figure 1), an adjacency matrix that represents the topology of the disease graph is generated by connecting each board to itself with a unit-weight edge, and each board to every other board that shares any part of its border with edges weighted as 2. Original data is imported and transformed using the Python packages Pandas and NumPy [67, 68], while disease graphs are built with the included code and the NetworkX [69] package. All data that was preprocessed and converted to graph form between March 4th, 2022 and November 4th, 2022 is available on GitHub.
**New Zealand economic features.** Official categorical GDP data is obtained from NZStats [70], with the original data containing GDP information in terms of NZ dollars for each predefined administrative region and for each of the 22 available economic industry categories. At the time of collection, only GDP information until the end of 2020 was finalized and available for all applicable industries. Thus, based on the assumption that categorical GDP does not change significantly between 2020 and 2022, GDP data from 2020 is incorporated as additional features into all time steps of the forecasting model. Due to differences between the administrative region map and the district health board map, all regions between the two maps are matched appropriately, with the merged GDP being the sum or the average of the matched regions accordingly. The raw GDP number of each industry/category of each region is concatenated into a common unlabeled vector for that region, which is added to the inputs as a single feature vector. All economic feature vectors are normalized (via mean and standard deviation) to allow the models to learn properly and to mitigate exploding/vanishing gradients.
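As a small illustration of this normalization step (the GDP matrix below is a random stand-in for the real NZStats data):

```python
import numpy as np

# Hypothetical (n_regions, 22) matrix of raw categorical GDP values in NZD
rng = np.random.default_rng(0)
gdp = rng.uniform(1e2, 1e5, size=(20, 22))

# Column-wise standardization via mean and standard deviation
gdp_norm = (gdp - gdp.mean(axis=0)) / gdp.std(axis=0)
```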
|
2306.16422 | Neural networks can detect model-free static arbitrage strategies | In this paper we demonstrate both theoretically as well as numerically that
neural networks can detect model-free static arbitrage opportunities whenever
the market admits some. Due to the use of neural networks, our method can be
applied to financial markets with a high number of traded securities and
ensures almost immediate execution of the corresponding trading strategies. To
demonstrate its tractability, effectiveness, and robustness we provide examples
using real financial data. From a technical point of view, we prove that a
single neural network can approximately solve a class of convex semi-infinite
programs, which is the key result in order to derive our theoretical results
that neural networks can detect model-free static arbitrage strategies whenever
the financial market admits such opportunities. | Ariel Neufeld, Julian Sester | 2023-06-19T17:42:40Z | http://arxiv.org/abs/2306.16422v2 | # Neural Networks can Detect Model-Free Static Arbitrage Strategies
###### Abstract.
In this paper we demonstrate both theoretically as well as numerically that neural networks can detect model-free static arbitrage opportunities whenever the market admits some. Due to the use of neural networks, our method can be applied to financial markets with a high number of traded securities and ensures almost immediate execution of the corresponding trading strategies. To demonstrate its tractability, effectiveness, and robustness we provide examples using real financial data. From a technical point of view, we prove that a _single_ neural network can approximately solve a _class_ of convex semi-infinite programs, which is the key result in order to derive our theoretical results that neural networks can detect model-free static arbitrage strategies whenever the financial market admits such opportunities.
**Keywords:** Static Arbitrage, Model-Free Finance, Deep Learning, Convex Optimization
## 1. Introduction
Detecting arbitrage opportunities in financial markets and efficiently implementing them numerically is an intricate and demanding task, both in theory and practice. In recent academic papers, researchers have extensively tackled this problem for various types of assets, highlighting its significance and complexity.
The authors of Cui et al. (2020) and Cui and Taylor (2020) focus their studies on the foreign exchange market, establish conditions that eliminate triangular arbitrage opportunities, and propose computational approaches to detect arbitrage opportunities.
Soon and Ye (2011) propose a binary integer programming model for the detection of arbitrage in currency exchange markets, while Papapantoleon and Sarmiento (2021) focus on arbitrage in multi-asset markets under the assumption that the risk-neutral marginal distributions are known. Also assuming knowledge of risk-neutral marginals in multi-asset markets, Tavin (2015) provides a copula-based approach to characterize the absence of arbitrage. Cohen et al. (2020) study arbitrage opportunities in markets where vanilla options are traded and propose an efficient procedure to change the option prices minimally (w.r.t. the \(l^{1}\) distance) such that the market becomes arbitrage-free. Neufeld et al. (2022) develop cutting-plane based algorithms to calculate model-free upper and lower price bounds whose sub-optimality can be chosen to be arbitrarily small, and use them to detect model-free arbitrage strategies. Furthermore, by observing call option prices Biagini et al. (2022) train neural networks to detect financial asset bubbles.
In this paper we study the detection of model-free static arbitrage in potentially high-dimensional financial markets, i.e., in markets where a large number of securities are traded. A trading strategy is called _static_ if the strategy consists of buying or selling financial derivatives as well as the corresponding underlying securities in the market only at initial time (with corresponding bid and ask prices) and then holding the positions till maturity without any readjustment. Therefore, one says that a market admits static arbitrage if there exists a static trading strategy which provides a guaranteed risk-free profit at maturity. We aim to detect static arbitrage opportunities in a _model-free_ way, i.e. purely based on observable market data without imposing any (probabilistic) model assumptions on the underlying financial market. We also refer to Acciaio et al. (2016); Burzoni et al. (2017, 2019, 2021); Cheridito et al. (2017); Davis et al. (2014); Fahim and Huang (2016);
Hobson et al. (2005a); Hobson et al. (2005b); Neufeld and Sester (2023); Riedel (2015); Wang and Ren (2021) for more details on model-free arbitrage and its characterization.
The goal of this paper is to demonstrate, both theoretically and numerically using real market data, that neural networks can detect model-free static arbitrage whenever the market admits some. The motivation for using neural networks is their well-known ability to efficiently deal with high-dimensional problems in various fields. There are several algorithms that can detect (static) arbitrage strategies in a financial market under fixed market conditions, for example in a market with a fixed set of options with given strikes and fixed corresponding bid and ask prices. However, directly applying these algorithms in real financial market scenarios to exploit arbitrage is challenging due to the well-known issue that market conditions change extremely fast, and high-frequency trading often causes these opportunities to vanish rapidly. The associated risk is commonly known as _execution risk_, as discussed, e.g., in Kozhan and Tham (2012). The speed of investment execution therefore becomes crucial in capitalizing on arbitrage opportunities.
By training neural networks according to our algorithm purely based on observed market data, we obtain _detectors_ that, given any market conditions, allow us not only to detect the existence of static arbitrage but also to determine a proper applicable arbitrage strategy. Our algorithm therefore provides financial agents with instructions on how to trade and exploit the arbitrage strategy while the opportunity persists. In contrast to other numerical methods, which need to be executed entirely each time the market is scanned for arbitrage or the market conditions change, our proposed method only needs one neural network to be trained offline. After training, the neural network is able to detect arbitrage and can be executed extremely fast, allowing one to invest in the resultant strategies in every new market situation that one faces. We refer to Section 3 for a detailed description of our algorithm as well as our numerical results evaluated on real market data.
We justify the use of neural networks by proving that they can detect model-free static arbitrage strategies whenever the market admits some. We refer to Theorem 2.5 and Theorem 2.6 for our main theoretical results regarding arbitrage detection. The main idea is to relate arbitrage to the superhedging of the zero-payoff function. We prove in Proposition 2.7 that there exists a _single_ neural network that provides a corresponding \(\varepsilon\)-optimal superhedging strategy _for any given market conditions_. In fact, we show for a certain class of convex semi-infinite programs (CSIP), which includes the superhedging problem of the zero-payoff function as a special case, that a _single_ neural network can provide _for each (CSIP)_ within this class a corresponding feasible and \(\varepsilon\)-optimal solution, see Theorem 4.5.
The remainder of this paper is organized as follows. In Section 2, we introduce the setting of the financial market as well as the corresponding (static) trading strategies, and provide our main theoretical results ensuring that model-free static arbitrage can be detected by neural networks if existent. Section 3 focuses on the presentation and numerical implementation of our neural-network-based algorithm to detect static arbitrage, featuring experiments conducted on real financial data to showcase the feasibility and robustness of our method. In Section 4, we introduce a class of convex semi-infinite programs and provide our main technical result that a single neural network can approximately solve this class of (CSIPs). Finally, all proofs are presented in Section 5.
## 2. Detection of static Arbitrage Strategies
In this section, we study a financial market in which a financial agent can trade statically in various types of options and which may admit the opportunity of static arbitrage profits. In such a setting, the natural difficulty for a trader is first to decide whether such arbitrage exists and second to identify potential strategies that exploit arbitrage profits. Our goal is to show that for each financial market in which an agent can trade statically in options, the corresponding market admits static arbitrage if and only if there exists a neural network that detects the existence of model-free static arbitrage by outputting a corresponding arbitrage strategy.
### Setting
In this paper, we consider a market in which a financial agent can trade statically in options. To introduce the market under consideration, let \(S=(S_{1},\ldots,S_{d})\) denote the underlying \(d\in\mathbb{N}\) stocks at some future time \(t=1\). We only consider values \(S\in\mathcal{S}\subseteq[0,\infty)^{d}\) for some predefined set \(\mathcal{S}\), which can be interpreted as a prediction set1 through which the financial agent may exclude values that she considers impossible for the future stock prices \(S=(S_{1},\ldots,S_{d})\) at time \(1\).
Let \(N_{\Psi}\in\mathbb{N}\) denote the number of different _types_2 of traded options \(\Psi_{i}:\mathcal{S}\times[0,\overline{K}]\to[0,\infty)\), \(i=1,\ldots,N_{\Psi}\), written on \(S\). For each option type \(i\in\{1,\ldots,N_{\Psi}\}\) let \(n_{i}\in\mathbb{N}\) denote the corresponding number of different strikes under consideration \((K_{i,j})_{j=1,\ldots,n_{i}}\subseteq[0,\overline{K}]\), where the strikes are contained in \([0,\overline{K}]\) for some \(\overline{K}<\infty\), and denote by \(N:=\sum_{i=1}^{N_{\Psi}}n_{i}\) the total number of traded options. Moreover, we denote by \(\pi=(\pi_{i,j})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}=(\pi_{i,j}^{+},\pi_{i,j}^{-})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\in[0,\overline{\pi}]^{2N}\) the bid and ask prices of the traded options, respectively, where we assume that all the bid and ask prices are bounded by some \(\overline{\pi}>0\).
Footnote 2: We say two options are of the same _type_ if the payoffs only differ with respect to the specification of a strike. Also note that trading in the underlying securities itself can be considered as an option, e.g., a call option with strike \(0\).
The financial agent then can trade in the market by buying and selling the options described above. More precisely, we first fix the minimal initial cash position of a trading strategy to be given by \(\underline{a}\in\mathbb{R}\), and we assume that the maximal amount of shares of options one can buy or sell is capped by some constant \(0<\overline{H}<\infty\). This allows to consider the payoff of a static trading strategy by the function
\[\begin{split}\mathcal{I}_{S}:[0,\overline{K}]^{N}\times[ \underline{a},\infty)\times[0,\overline{H}]^{2N}&\to\mathbb{R}\\ (K,a,h)&\mapsto a+\sum_{i=1}^{N_{\Psi}}\sum_{j=1}^{n_ {i}}\left(h_{i,j}^{+}-h_{i,j}^{-}\right)\cdot\Psi_{i}(S,K_{i,j}),\end{split} \tag{2.1}\]
where we use the notation \(h=\left(h_{i,j}^{+},h_{i,j}^{-}\right)_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\in[0,\overline{H}]^{2N}\) to denote long and short positions in the traded options, respectively, as well as \(K=(K_{i,j})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\) to denote all strikes. The corresponding pricing functional is then defined by
\[\begin{split} f:[0,\overline{\pi}]^{2N}\times[\underline{a}, \infty)\times[0,\overline{H}]^{2N}&\to\mathbb{R}\\ (\pi,a,h)&\mapsto a+\sum_{i=1}^{N_{\Psi}}\sum_{j=1}^{ n_{i}}\left(h_{i,j}^{+}\pi_{i,j}^{+}-h_{i,j}^{-}\pi_{i,j}^{-}\right)\end{split} \tag{2.2}\]
determining the price of a corresponding trading strategy with respect to the corresponding bid and ask prices of the options.
Moreover, we define the set-valued map which maps a set of strikes \(K=(K_{i,j})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\) to the corresponding strategies whose payoff is nonnegative for each possible value \(S\in\mathcal{S}\) by
\[\Gamma:[0,\overline{K}]^{N}\ni K\twoheadrightarrow\Gamma(K):=\left\{(a,h)\in[ \underline{a},\infty)\times[0,\overline{H}]^{2N}\ \big{|}\ \mathcal{I}_{S}(K,a,h)\geq 0\text{ for all }S\in\mathcal{S}\right\}. \tag{2.3}\]
Then, the minimal price of a trading strategy whose payoff is nonnegative for each possible value \(S\in\mathcal{S}\), in dependence of the strikes \(K=(K_{i,j})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\) and option prices \(\pi=(\pi_{i,j}^{+},\pi_{i,j}^{-})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\), is given by
\[\begin{split} V:[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}& \to\mathbb{R}\\ (K,\pi)&\mapsto\inf_{(a,h)\in\Gamma(K)}f(\pi,a,h). \end{split} \tag{2.4}\]
In this paper, we consider the following type of model-free3 static arbitrage. We refer to Burzoni et al. (2019) for several notions of model-free arbitrage.
Footnote 3: It is called _model-free_ since no probabilistic assumptions on the financial market have been imposed.
**Definition 2.1** (Model-free static arbitrage).: _Let \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\). Then, we call a static trading strategy \((a,h)\in[\underline{a},\infty)\times[0,\overline{H}]^{2N}\) a model-free static arbitrage strategy if the following two conditions hold._
1. \((a,h)\in\Gamma(K)\)_,_
2. \(f(\pi,a,h)<0\)_._
_Moreover, for any \(\varepsilon>0\) we call a model-free static arbitrage strategy to be of magnitude \(\varepsilon\) if \(f(\pi,a,h)\leq-\varepsilon\)._
This means according to Definition 2.1 that the market with parameters \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\) admits no model-free static arbitrage strategy if and only if \(V(K,\pi)\geq 0\).
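As an illustration of (2.1), (2.2), and Definition 2.1, the following minimal NumPy sketch spot-checks a candidate strategy for static arbitrage on uniformly sampled scenarios; the call payoff `psi`, the toy strikes, and the prices are hypothetical, and a finite sample can only falsify, not certify, feasibility on all of \(\mathcal{S}\):

```python
import numpy as np

def payoff(S, K, a, h_plus, h_minus, psi):
    """I_S(K, a, h) as in (2.1); K, h_plus, h_minus have shape (N,)."""
    return a + np.sum((h_plus - h_minus) * psi(S, K))

def price(pi_plus, pi_minus, a, h_plus, h_minus):
    """f(pi, a, h) as in (2.2)."""
    return a + np.sum(h_plus * pi_plus - h_minus * pi_minus)

def is_static_arbitrage(S_samples, K, pi_plus, pi_minus, a, h_plus, h_minus,
                        psi, eps=0.0):
    """Definition 2.1 on sampled scenarios: payoff nonnegative on every
    sampled S and price at most -eps (eps > 0 gives magnitude-eps arbitrage)."""
    feasible = all(payoff(S, K, a, h_plus, h_minus, psi) >= 0.0
                   for S in S_samples)
    return feasible and price(pi_plus, pi_minus, a, h_plus, h_minus) <= -eps

# Hypothetical toy market: one asset, two calls with psi(S, K) = (S - K)^+
psi = lambda S, K: np.maximum(S - K, 0.0)
K = np.array([0.9, 1.1])
pi_plus, pi_minus = np.array([0.20, 0.02]), np.array([0.25, 0.01])
S_samples = np.random.default_rng(0).uniform(0.0, 2.0, size=(1000, 1))
a, h_plus, h_minus = -0.1, np.array([0.0, 1.0]), np.array([0.0, 0.0])
print(is_static_arbitrage(S_samples, K, pi_plus, pi_minus, a, h_plus, h_minus, psi))
```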
**Neural Networks.** By _neural networks_ with input dimension \(d_{\text{in}}\in\mathbb{N}\), output dimension \(d_{\text{out}}\in\mathbb{N}\), and number of layers \(l\in\mathbb{N}\) we refer to functions of the form
\[\begin{split}\mathbb{R}^{d_{\text{in}}}&\to \mathbb{R}^{d_{\text{out}}}\\ x&\mapsto A_{l}\circ\varphi_{l}\circ A_{l-1}\circ \cdots\circ\varphi_{1}\circ A_{0}(x),\end{split} \tag{2.5}\]
where \((A_{i})_{i=0,\dots,l}\) are affine4 functions of the form
Footnote 4: This means for all \(i=0,\dots,l\), the function \(A_{i}\) is assumed to have an affine structure of the form \(A_{i}(x)=M_{i}x+b_{i}\) for some matrix \(M_{i}\in\mathbb{R}^{h_{i+1}\times h_{i}}\) and some vector \(b_{i}\in\mathbb{R}^{h_{i+1}}\), where \(h_{0}:=d_{\text{in}}\) and \(h_{l+1}:=d_{\text{out}}\).
\[A_{0}:\mathbb{R}^{d_{\text{in}}}\to\mathbb{R}^{h_{1}},\qquad A_{i}:\mathbb{R} ^{h_{i}}\to\mathbb{R}^{h_{i+1}}\text{ for }i=1,\dots,l-1,(\text{if }l>1),\text{ and }\qquad A_{l}:\mathbb{R}^{h_{l}}\to \mathbb{R}^{d_{\text{out}}}, \tag{2.6}\]
and where the function \(\varphi_{i}\) is applied componentwise, i.e., for \(i=1,\dots,l\) we have \(\varphi_{i}(x_{1},\dots,x_{h_{i}})=(\varphi(x_{1}),\dots,\varphi(x_{h_{i}}))\). The function \(\varphi:\mathbb{R}\to\mathbb{R}\) is called _activation function_ and assumed to be continuous and non-polynomial. We say a neural network is _deep_ if \(l\geq 2\). Here \(h=(h_{1},\dots,h_{l})\in\mathbb{N}^{l}\) denotes the dimensions (the number of neurons) of the hidden layers, also called _hidden dimension_.
Then, we denote by \(\mathfrak{N}^{l,h}_{d_{\text{in}},d_{\text{out}}}\) the set of all neural networks with input dimension \(d_{\text{in}}\), output dimension \(d_{\text{out}}\), \(l\) hidden layers, and hidden dimension \(h\), whereas the set of all neural networks from \(\mathbb{R}^{d_{\text{in}}}\) to \(\mathbb{R}^{d_{\text{out}}}\) (i.e. without specifying the number of hidden layers and hidden dimension) is denoted by
\[\mathfrak{N}_{d_{\text{in}},d_{\text{out}}}:=\bigcup_{l\in\mathbb{N}}\bigcup_{h\in\mathbb{N}^{l}}\mathfrak{N}^{l,h}_{d_{\text{in}},d_{\text{out}}}.\]
It is well-known that the set of neural networks possess the so-called _universal approximation property_, see, e.g., Pinkus (1999).
**Proposition 2.2** (Universal approximation theorem).: _For any compact set \(\mathbb{K}\subset\mathbb{R}^{d_{\text{in}}}\) the set \(\mathfrak{N}_{d_{\text{in}},d_{\text{out}}}|_{\mathbb{K}}\) is dense in \(C(\mathbb{K},\mathbb{R}^{d_{\text{out}}})\) with respect to the topology of uniform convergence on \(C(\mathbb{K},\mathbb{R}^{d_{\text{out}}})\)._
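For concreteness, evaluating a network of the form (2.5) is a short loop over the affine maps; the following NumPy sketch uses ReLU as one admissible continuous, non-polynomial activation and illustrative layer sizes matching \(\mathfrak{N}_{3N,1+2N}\):

```python
import numpy as np

def neural_network(x, weights, biases, phi=lambda z: np.maximum(z, 0.0)):
    """Evaluate x -> A_l ∘ phi_l ∘ ... ∘ phi_1 ∘ A_0(x) as in (2.5), where
    A_i(x) = M_i x + b_i and the activation phi is applied componentwise in
    the hidden layers; no activation follows the output map A_l."""
    z = x
    for M, b in zip(weights[:-1], biases[:-1]):
        z = phi(M @ z + b)
    M_l, b_l = weights[-1], biases[-1]
    return M_l @ z + b_l

# A network in N_{3N, 1+2N} for N = 55 with l = 2 hidden layers of 64 neurons
N = 55
rng = np.random.default_rng(0)
dims = [3 * N, 64, 64, 1 + 2 * N]
weights = [rng.normal(size=(dims[i + 1], dims[i])) * 0.1
           for i in range(len(dims) - 1)]
biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]
out = neural_network(rng.random(3 * N), weights, biases)  # output in R^{1+2N}
```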
### Main results
To formulate our main result we first impose the following mild assumptions.
**Assumption 2.3**.:
* _There exists some_ \(L_{\Psi}>0\) _such that for all_ \(S\in\mathcal{S}\) _and for all_ \(i=1,\dots,N_{\Psi}\) _the map_ \([0,\overline{K}]\ni K\mapsto\Psi_{i}(S,K)\) _is_ \(L_{\Psi}\)_-Lipschitz._
* _There exists some_ \(C_{\Psi}>0\) _such that the map_ \(\Psi_{i}\) _is bounded by_ \(C_{\Psi}\) _on_ \(\mathcal{S}\times[0,\overline{K}]\) _for all_ \(i=1,\dots,N_{\Psi}\)_._
**Remark 2.4**.: _First, note that we do not impose any topological or geometric conditions on the prediction set \(\mathcal{S}\subset[0,\infty)^{d}\). However, a sufficient criterion for Assumption 2.3 (ii) to hold would be that, e.g., \(\mathcal{S}\subset[0,\infty)^{d}\) is bounded and that \([0,\infty)^{d}\times[0,\overline{K}]\ni(S,K)\mapsto\Psi_{i}(S,K)\) is continuous for each \(i=1,\dots,N_{\Psi}\). Moreover, note that Assumption 2.3 (i) is satisfied for example for any payoff function which is continuous and piece-wise affine (CPWA), which includes most relevant payoff functions in finance. We refer to Neufeld et al. (2022a); Li and Neufeld (2023) for a detailed list of examples of (CPWA) payoff functions._
In our first result, we conclude that the financial market described in Section 2.1 admits model-free static arbitrage if and only if there exists a neural network that detects the existence of model-free static arbitrage by outputting a corresponding arbitrage strategy.
**Theorem 2.5** (Neural networks can detect static arbitrage).: _Let Assumption 2.3 hold true, and let \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\). Then, there exists model-free static arbitrage if and only if there exists a neural network \(\mathcal{NN}\in\mathfrak{N}_{3N,1+2N}\) with_
* \(\mathcal{NN}(K,\pi):=(\mathcal{NN}_{a}(K,\pi),\mathcal{NN}_{h}(K,\pi))\in \Gamma(K)\)_,_
* \(f\left(\pi,\mathcal{NN}_{a}(K,\pi),\mathcal{NN}_{h}(K,\pi)\right)<0\)_._
In our second result, we show that for any given \(\varepsilon>0\) and \(0<\delta<\varepsilon\) there exists a _single_ neural network such that for any given strikes \(K=(K_{i,j})_{i=1,\dots,N_{\Psi},\,j=1,\dots,n_{i}}\) and option prices \(\pi=(\pi_{i,j}^{+},\pi_{i,j}^{-})_{i=1,\dots,N_{\Psi},\,j=1,\dots,n_{i}}\) the neural network can detect model-free static arbitrage of magnitude \(\delta\) if the financial market with
corresponding market conditions \((K,\pi)\) admits static arbitrage of magnitude \(\varepsilon\). From a practical point of view, this is crucial, since it allows the financial trader to _only train one single neural network_ which can then, once trained, instantaneously detect corresponding static arbitrage opportunities if the current market conditions \((K,\pi)\) admit such opportunities. On the other hand, a trader applying the trained neural network to a financial market which admits no static arbitrage opportunities pays at most \(\varepsilon-\delta\) for the trading strategy, i.e., if \(\varepsilon\approx\delta\), the risk of paying for trading strategies which are no static arbitrage strategies can be reduced to an arbitrarily small amount.
**Theorem 2.6** (A single neural network can detect static arbitrage of magnitude \(\varepsilon\)).: _Let \(\varepsilon>0\) and \(0<\delta<\varepsilon\). Then, there exists a neural network \(\mathcal{NN}\in\mathfrak{N}_{3N,1+2N}\) such that for every \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\) the following holds._
1. _If the financial market with respect to_ \((K,\pi)\) _admits model-free static arbitrage of magnitude_ \(\varepsilon\)_, then the neural network outputs a trading strategy_ \((\mathcal{NN}_{a}(K,\pi),\mathcal{NN}_{h}(K,\pi))\) _which is a model-free static arbitrage of magnitude_ \(\delta\)_._
2. _If the financial market with respect to_ \((K,\pi)\) _admits no model-free static arbitrage, then the neural network outputs a trading strategy_ \((\mathcal{NN}_{a}(K,\pi),\mathcal{NN}_{h}(K,\pi))\in\Gamma(K)\) _which has a price of at most_ \(\varepsilon-\delta\)_._
The main idea to derive Theorem 2.5 and Theorem 2.6 relies on the relation between arbitrage and superhedging of the \(0\)-payoff function. The following result establishes that for any prescribed \(\varepsilon>0\) there exists a _single_ neural network such that for any given strikes \(K=(K_{i,j})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\) and option prices \(\pi=(\pi_{i,j}^{+},\pi_{i,j}^{-})_{i=1,\ldots,N_{\Psi},\,j=1,\ldots,n_{i}}\) defining the market, the neural network produces a static trading strategy which superhedges the \(0\)-payoff for all possible values \(S\in\mathcal{S}\) and whose price is \(\varepsilon\)-optimal.
**Proposition 2.7** (Approximating \(V\) with neural networks).: _Let Assumption 2.3 hold true. Then for all \(\varepsilon>0\) there exists a neural network \(\mathcal{NN}\in\mathfrak{N}_{3N,1+2N}\) such that_
1. \(\mathcal{NN}(K,\pi):=(\mathcal{NN}_{a}(K,\pi),\mathcal{NN}_{h}(K,\pi))\in \Gamma(K)\) _for all_ \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\)_,_
2. \(f\left(\pi,\mathcal{NN}_{a}(K,\pi),\mathcal{NN}_{h}(K,\pi)\right)-V(K,\pi) \leq\varepsilon\) _for all_ \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\)_._
In fact, we will use Proposition 2.7 to prove our main results Theorem 2.5 and Theorem 2.6 on detecting static arbitrage strategies. To prove Proposition 2.7, we interpret (2.4) as a class of linear semi-infinite optimization problems (LSIP), where each \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\) determines a single (LSIP). In Section 4, we introduce a (much more) general class of convex semi-infinite optimization problems (CSIP) which covers (2.4) as a special case. Then we show that a single neural network can approximately solve all (CSIP) of this class simultaneously. We refer to Theorem 4.5 for the precise statement.
The proofs of all our main results are provided in Section 5.
## 3. The Numerics of Static Arbitrage Detection in Financial Markets
The results from Section 2 prove, with non-constructive arguments, the existence of neural networks that can detect model-free arbitrage strategies. These results therefore immediately raise the question of how to construct neural networks that are capable of learning these strategies. To this end, we present with Algorithm 1 an approach that combines a supervised learning approach in the spirit of Neufeld and Sester (2023) with an unsupervised learning approach as presented, for example, in Auslender et al. (2009), Eckstein and Kupper (2021), and Neufeld et al. (2022b).
```
Algorithm 1: Training of the arbitrage-detection network NN = (NN_a, NN_h). Over N_iter Adam iterations, sample batches of market parameters (K_{i,b}, pi_{i,b}) with pre-computed prices Y~_{i,b} and payoff scenarios S_{i,b,j}, and minimize the price f(pi_{i,b}, NN_a(K_{i,b}, pi_{i,b}), NN_h(K_{i,b}, pi_{i,b})) augmented by the two penalization terms described below.
```
Algorithm 1 is designed to minimize the price function \(f\left(\pi_{i,b},\mathcal{N}\mathcal{N}_{a}(K_{i,b},\pi_{i,b}),\mathcal{N} \mathcal{N}_{h}(K_{i,b},\pi_{i,b})\right)\) by incorporating two specific penalization terms. These terms are carefully crafted to facilitate the learning of the key characteristics associated with model-free arbitrage strategies.
The first penalization term6 \(\gamma\cdot\frac{1}{S_{B}}\sum_{j=1}^{S_{B}}\left(\left(-\mathcal{I}_{S_{i,b,j}}\left(K_{i,b},\mathcal{NN}_{a}(K_{i,b},\pi_{i,b}),\mathcal{NN}_{h}(K_{i,b},\pi_{i,b})\right)\right)^{+}\right)^{2}\) incentivizes the feasibility (see (2.3)) of the learned strategies by penalizing negative payoffs in proportion to the degree of violation of the positivity constraint, thereby encouraging the strategies to have nonnegative payoffs.
Footnote 6: This can be realized, e.g., by \(\tanh\) and sigmoid activation functions multiplied with the corresponding bounds.
The second penalization term
\[\gamma\cdot\bigg(-(\widetilde{Y}_{i,b}+0.5)\cdot f\left(\pi_{i,b},\mathcal{NN}_{a}(K_{i,b},\pi_{i,b}),\mathcal{NN}_{h}(K_{i,b},\pi_{i,b})\right)\bigg)^{+} \tag{3.1}\]
vanishes if and only if the price \(f\left(\pi_{i,b},\mathcal{NN}_{a}(K_{i,b},\pi_{i,b}),\mathcal{NN}_{h}(K_{i,b},\pi_{i,b})\right)\) of the strategy expressed by the neural network and the pre-computed price \(\widetilde{Y}_{i,b}\) are either both non-negative or both negative. Since the pre-computed price \(\widetilde{Y}_{i,b}\) is negative if and only if the market (under the current market parameters \((K_{i,b},\pi_{i,b})\)) admits some static arbitrage, the second penalization term vanishes if and only if the trading strategy expressed by the neural network correctly identifies whether the market admits static arbitrage or not.
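Concretely, the per-sample training objective combining the price with the two penalization terms can be sketched as follows in PyTorch; `net`, `I_S`, and `f` are hypothetical stand-ins for the trained network and for implementations of the payoff (2.1) and pricing functional (2.2), and the squashing of the outputs into the admissible bounds is assumed to happen inside `net`:

```python
import torch

def training_loss(net, K, pi, S_samples, Y_tilde, I_S, f, gamma=1e4):
    """Loss for one market sample (K, pi), in the spirit of Algorithm 1.

    `net` maps (K, pi) to a strategy (a, h) whose components are assumed
    to lie in [a_low, inf) x [0, H_bar]^{2N} already (e.g., via tanh/sigmoid
    output layers); `I_S` and `f` evaluate (2.1) and (2.2)."""
    a, h = net(K, pi)
    cost = f(pi, a, h)                      # price of the learned strategy
    payoffs = I_S(S_samples, K, a, h)       # payoffs on S_B sampled scenarios
    pen_feasibility = ((-payoffs).clamp(min=0.0) ** 2).mean()  # first term
    pen_sign = (-(Y_tilde + 0.5) * cost).clamp(min=0.0)        # second term
    return cost + gamma * (pen_feasibility + pen_sign)

# Hypothetical training step, averaging the loss over a batch of B markets:
# opt = torch.optim.Adam(net.parameters(), lr=1e-4)
# loss = torch.stack([training_loss(net, K_b, pi_b, S_b, Y_b, I_S, f)
#                     for (K_b, pi_b, S_b, Y_b) in batch]).mean()
# opt.zero_grad(); loss.backward(); opt.step()
```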
It is worth mentioning that the design of the penalization terms does not guarantee the feasibility of strategies in the sense of (2.3) or the correct sign of prices. However, due to the penalty imposed on constraint violations, as demonstrated in Section 3.1.1, violations do happen in practice but are typically only marginal in magnitude.
### Application to real financial data
In the following we apply Algorithm 1 to real financial data in order to detect model-free static arbitrage in the trading of financial derivatives. For the convenience of the reader, we provide the Python code used under [https://github.com/juliansester/Deep-Arbitrage](https://github.com/juliansester/Deep-Arbitrage).
#### 3.1.1. Training with data of the S&P 500
We consider trading in a financial market that consists of \(d=5\) assets and 10 corresponding vanilla call options (i.e., 10 different strikes) written on each of the assets. This means we consider \(N_{\Psi}=5\) different types of options with \(n_{i}=11\) for \(i=1,\ldots,5\), referring to the number of call options plus the underlying asset (which can be considered a call option with strike 0), so that in total \(N=\sum_{i=1}^{N_{\Psi}}n_{i}=55\) different securities are considered.
To create a training set we consider for each of the 500 constituents of the \(S\&P\) 500 the 10 most liquidly traded7 call options with maturity \(T=19\) May 2023. The data was downloaded on 25 April 2023 via _Yahoo Finance_.
Footnote 7: “Most liquidly traded” refers to the strikes with the highest trading volume.
We then use this data to create 50 000 samples by combining the call options of 5 randomly chosen constituents in each sample. The spot values of the underlying assets are scaled to 1; therefore, the strikes and corresponding prices are included as percentage values w.r.t. the spot value of the underlying asset. We assume \(\mathcal{S}=[0,2]^{5}\), i.e., we assume that the underlying assets at maturity only attain values between 0% and 200% of their current spot values. This assumption can be regarded as a restriction imposed on the space of possible outcomes to a prediction set, as mentioned in the beginning of Section 2.1.
Relying on these samples, we compute, using the LSIP algorithm from Neufeld et al. (2022a), minimal super-replication strategies of the 0-payoff for each of the 50 000 samples.
Of these 50 000 samples, we regard 5000 samples as a test set on which the neural network is not trained.
To demonstrate the performance of our approach, we apply Algorithm 1 with \(N_{\text{iter}}=20\) 000 iterations, a penalization parameter8 \(\gamma=10\) 000, and batch sizes \(S_{B}=32\) and \(B=512\) to train a neural network \(\mathcal{NN}\) with 1024 neurons and 5 hidden layers and with a _ReLU_ activation function in each of the hidden layers. The learning rate used for training with the _Adam_ optimizer (Kingma and Ba (2014)) is 0.0001.
Footnote 8: Following the empirical experiments from Eckstein et al. (2021) and Eckstein and Kupper (2021), in the implementation we let \(\gamma\) increase with the number of iterations so that \(\gamma\) equals 1 in the first iteration and 10 000 after 20 000 iterations.
To train the neural network, we assume \(\underline{a}=-1\), \(\overline{H}=1\), i.e., the maximal investment is 1 in each position9.
Footnote 9: Note that in practice these bounds do not impose a severe restriction, as the resultant strategies can be scaled arbitrarily large if desired.
The training set of 45 000 samples contains 34 146 cases in which the market admits model-free arbitrage, while the test set contains 3787 such cases. After training on the 45 000 samples, the neural network assigns the correct sign to the price of the learned strategy in 39 570 out of 45 000 training samples, i.e., in 87.93% of the cases on the training set the network correctly decides whether the market admits arbitrage or not. On the test set we have 4164 out of 5000 correct identifications, which corresponds to 83.28%. However, it is important to emphasize that a wrong identification of the sign of the resultant strategy does not mean that the strategy incurs huge losses, as the magnitude of the predicted prices turns out to be small for the majority of strategies with a wrongly predicted sign. To showcase this, we evaluate the net profit \(\mathcal{I}_{S_{i,j}}(K_{i},a_{i},h_{i})-f(\pi_{i},a_{i},h_{i})\) for \(i=1,\ldots,5000\), \(j=1,\ldots,200\), i.e., each of the 5000 samples of the test set is evaluated on 200 realizations of \(S\in\mathcal{S}\), denoted by \(S_{i,j}\) (sampled uniformly from \([0,2]^{5}\)), and we report the results in Table 1. The results verify that on the test set the net profit is positive in the vast majority of the 1 000 000 evaluated cases; compare also the histogram provided in Figure 1.
#### 3.1.2. Backtesting with historical option prices
We backtest the strategy trained in Section 3.1.1 on the stocks of _Apple_, _Alphabet_, _Microsoft_, _Google_, and _Meta_. To this end, we consider for each of the companies call options with maturity 24 March 2023 for ten different strikes.
The bid and ask prices of these call options and the underlying securities were observed on 33 trading days ranging from 2 February 2023 until 22 March 2023.
We apply the strategy trained in Section 3.1.1 to the prices observed on each of the 33 trading days and evaluate it on the realized values of the 5 underlying securities at maturity. In Table 2 and Figure 2 we summarize the net profits of the 33 strategies. Note that to apply the trained neural network from Section 3.1.1, we first scale all the financial instruments such that the spot values of the underlying securities equal 1, as described in Section 3.1.1. Then, after applying the strategies to the scaled inputs, we rescale the values of the involved quantities back to unnormalized values, and we report in Table 2 and Figure 2 the net profits for both cases: after rescaling the values of the underlying securities, options, and strikes to unnormalized values, as well as without scaling back. The results of the backtesting study reveal that even though the neural network from Section 3.1.1 was trained on data extracted on a different day (25 April 2023) involving call options with a different maturity written on other assets, the resultant strategy still allows one to trade profitably in the majority of cases, showcasing the robustness of our algorithm.
\begin{table}
\begin{tabular}{l r} \hline
count & 1 000 000 \\
mean & 0.511423 \\
std & 0.341690 \\
min & -0.199701 \\
25\% & 0.246262 \\
50\% & 0.442689 \\
75\% & 0.737924 \\
max & 3.939372 \\ \hline
\end{tabular}
\end{table}
Table 1. The table shows the summary statistics of the net profit \(\mathcal{I}_{S_{i,j}}(K_{i},a_{i},h_{i})-f(\pi_{i},a_{i},h_{i})\) for \(i=1,\ldots,5000\), \(j=1,\ldots,200\), i.e., each of the 5000 samples of the test set is evaluated on 200 realizations of \(S\in\mathcal{S}\) leading to a total number of 1 000 000 profits of the strategy trained as described in Section 3.1.1.
Figure 1. The histogram shows the distribution of the net profit \(\mathcal{I}_{S_{i,j}}(K_{i},a_{i},h_{i})-f(\pi_{i},a_{i},h_{i})\) for \(i=1,\ldots,5000\), \(j=1,\ldots,200\) of the strategy trained as described in Section 3.1.1.
## 4. Approximation of optimal solutions of general convex semi-infinite programs by neural networks
In this section we show, for a certain class of convex semi-infinite optimization problems (CSIP), that each of them can be approximately solved by a _single_ neural network. More precisely, for every prescribed accuracy \(\varepsilon>0\) we show that there exists a _single_ neural network which outputs a _feasible_ solution which is \(\varepsilon\)-optimal. This class of convex semi-infinite problems covers the setting of static arbitrage detection introduced in Section 2 as a special case. We leave further applications for future research.
### Setting
Let \(\underline{a}\in\mathbb{R}\), let \(\mathbb{K}_{x}\subset\mathbb{R}^{n_{x}}\) be compact for some \(n_{x}\in\mathbb{N}\), and let \(\mathbb{K}_{y}\subset\mathbb{R}^{n_{y}}\) be compact and convex for some \(n_{y}\in\mathbb{N}\).
We consider some function
\[f:\mathbb{K}_{x}\times[\underline{a},\infty)\times\mathbb{K}_{y}\ni(x,a,y) \mapsto f(x,a,y)\in\mathbb{R},\]
which we aim to minimize under suitable constraints. To define these constraints we consider some (possibly uncountable infinite) index set \(\mathcal{S}\) as well as for all \(s\in\mathcal{S}\) a function
\[\mathbb{K}_{x}\times[\underline{a},\infty)\times\mathbb{K}_{y}\ni(x,a,y) \mapsto\mathcal{I}_{s}(x,a,y)\in\mathbb{R}.\]
Further, let \(\mathbb{K}_{x}\ni x\twoheadrightarrow\Gamma(x)\subseteq[\underline{a},\infty)\times\mathbb{K}_{y}\) be the correspondence defined by
\[\Gamma(x):=\left\{(a,y)\in[\underline{a},\infty)\times\mathbb{K}_{y}\ |\ - \mathcal{I}_{s}(x,a,y)\leq 0\text{ for all }s\in\mathcal{S}\right\},\]
\begin{table}
\begin{tabular}{l r r} & **unscaled** & **scaled** \\ \hline
count & 33 & 33 \\
mean & 5.855233 & 0.063460 \\
std & 9.706270 & 0.089347 \\
min & -4.225227 & -0.017063 \\
25\% & -1.475227 & -0.007025 \\
50\% & 1.535172 & 0.021668 \\
75\% & 13.032318 & 0.093558 \\
max & 32.687988 & 0.318201 \\ \hline
\end{tabular}
\end{table}
Table 2. In the setting of Section 3.1.2, the table shows the summary statistics of the net profit \(\mathcal{I}_{S_{T}}(K_{i},a_{i},h_{i})-f(\pi_{i},a_{i},h_{i})\) for \(i=1,\ldots,33\), where here \(S_{T}\in\mathbb{R}^{5}\) refers to the observed realization of the \(5\) underlying securities at maturity \(T=24\) March 2023. To apply the trained neural network we first scale the values such that the spot prices of the underlying assets equal \(1\). The left column shows the values after scaling the values back, whereas the right column shows the statistics directly after applying the neural network to the scaled data.
Figure 2. In the setting of Section 3.1.2, the histogram depicts the net profits \(\mathcal{I}_{S_{T}}(K_{i},a_{i},h_{i})-f(\pi_{i},a_{i},h_{i})\) for \(i=1,\ldots,33\), where \(S_{T}\in\mathbb{R}^{5}\) refers to the observed realization of the \(5\) underlying securities at maturity \(T=24\) March 2023.
that defines the set of _feasible_ elements from \([\underline{a},\infty)\times\mathbb{K}_{y}\). To define our optimization problem, we now consider the function \(\mathbb{K}_{x}\ni x\mapsto V(x)\in\mathbb{R}\) defined by
\[V(x):=\inf_{(a,y)\in\Gamma(x)}f(x,a,y). \tag{4.1}\]
We impose the following assumptions on the above defined quantities.
**Assumption 4.1** (Assumptions on \(f\)).:
1. _There exists some_ \(L_{f}\geq 1\) _such that the function_ \([\underline{a},\infty)\times\mathbb{K}_{y}\ni(a,y)\mapsto f(x,a,y)\) _is_ \(L_{f}\)_-Lipschitz continuous for all_ \(x\in\mathbb{K}_{x}\)_._
2. _The function_ \(\mathbb{K}_{x}\times[\underline{a},\infty)\times\mathbb{K}_{y}\ni(x,a,y) \mapsto f(x,a,y)\) _is continuous._
3. _The function_ \([\underline{a},\infty)\times\mathbb{K}_{y}\ni(a,y)\mapsto f(x,a,y)\) _is convex for all_ \(x\in\mathbb{K}_{x}\)_._
4. _The function_ \([\underline{a},\infty)\ni a\mapsto f(x,a,y)\) _is increasing for all_ \(x\in\mathbb{K}_{x}\)_,_ \(y\in\mathbb{K}_{y}\)_._
5. _We have that_ \[\mathcal{L}_{a,f}:=\inf_{\begin{subarray}{c}x\in\mathbb{K}_{x},y\in\mathbb{K} _{y}\\ a_{1},a_{2}\in[\underline{a},\infty),a_{1}\neq a_{2}\end{subarray}}\frac{|f(x,a _{1},y)-f(x,a_{2},y)|}{|a_{1}-a_{2}|}>0.\]
**Assumption 4.2** (Assumptions on \(\mathcal{I}_{s}\)).:
1. _There exists some_ \(L_{\mathcal{I}}\geq 1\) _such that_ \(\mathbb{K}_{x}\times[\underline{a},\infty)\times\mathbb{K}_{y}\ni(x,a,y) \mapsto\mathcal{I}_{s}(x,a,y)\) _is_ \(L_{\mathcal{I}}\)_-Lipschitz continuous for all_ \(s\in\mathcal{S}\)_._
2. _The function_ \([\underline{a},\infty)\times\mathbb{K}_{y}\ni(a,y)\mapsto\mathcal{I}_{s}(x,a,y)\) _is concave for all_ \(x\in\mathbb{K}_{x},s\in\mathcal{S}\)_._
3. _The function_ \([\underline{a},\infty)\ni a\mapsto\mathcal{I}_{s}(x,a,y)\) _is increasing for all_ \(x\in\mathbb{K}_{x},y\in\mathbb{K}_{y},s\in\mathcal{S}\)_._
4. _We have that_ \[\mathcal{L}_{a,\mathcal{I}}:=\inf_{s\in\mathcal{S}}\ \inf_{\begin{subarray}{c}x\in\mathbb{K}_{x},\\ y\in\mathbb{K}_{y}\end{subarray}}\ \inf_{\begin{subarray}{c}a_{1},a_{2}\in[\underline{a},\infty),\\ a_{1}\neq a_{2}\end{subarray}}\frac{|\mathcal{I}_{s}(x,a_{1},y)-\mathcal{I}_{s}(x,a_{2},y)|}{|a_{1}-a_{2}|}>0.\]
5. _We have that_ \[\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x, \underline{a},y)>-\infty.\]
**Assumption 4.3** (Assumptions on \(\mathbb{K}_{y}\)).: _There exists \(0<r<1\) and \(L_{r}\geq 1\) such that for all \(0<\delta<r\) there exists some closed and convex set \(C_{y,\delta}\subset\mathbb{K}_{y}\) such that for all \(y^{\prime}\in C_{y,\delta}\), \(y\in\mathbb{R}^{n_{y}}\) we have_
\[\|y^{\prime}-y\|\leq\delta\Rightarrow y\in\mathbb{K}_{y},\]
_and_
\[\max_{y\in\mathbb{K}_{y}}\min_{y^{\prime}\in C_{y,\delta}}\left\{\|y-y^{ \prime}\|\right\}\leq L_{r}\delta.\]
**Remark 4.4** (On the assumptions).:
1. _Let_ (4.2) \[\overline{a}^{\mathrm{UB}}:=\underline{a}+\frac{1}{\mathcal{L}_{a,\mathcal{I} }}\left|\inf_{\begin{subarray}{c}s\in\mathcal{S}\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x, \underline{a},y)\right|\in[\underline{a},\infty)\] _Then, we have_ \(\mathcal{I}_{s}(x,\overline{a}^{\mathrm{UB}},y)\geq 0\) _for all_ \(x\in\mathbb{K}_{x}\)_,_ \(y\in\mathbb{K}_{y}\)_,_ \(s\in\mathcal{S}\)_. In particular,_ \(\Gamma(x)\neq\emptyset\) _for all_ \(x\in\mathbb{K}_{x}\)_. Indeed, by using the definition of_ \(\overline{a}^{\mathrm{UB}}\) _and_ \(\mathcal{L}_{a,\mathcal{I}}\) _together with Assumption_ 4.2 _(v) we have for all_ \(x\in\mathbb{K}_{x}\)_,_ \(y\in\mathbb{K}_{y}\)_,_ \(s\in\mathcal{S}\) _that_ \[\mathcal{I}_{s}(x,\overline{a}^{\mathrm{UB}},y) =\mathcal{I}_{s}(x,\overline{a}^{\mathrm{UB}},y)-\mathcal{I}_{s}(x, \underline{a},y)+\mathcal{I}_{s}(x,\underline{a},y)\] \[\geq\mathcal{L}_{a,\mathcal{I}}\cdot\overline{a}^{\mathrm{UB}}- \mathcal{L}_{a,\mathcal{I}}\cdot\underline{a}+\inf_{\begin{subarray}{c}s\in \mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x, \underline{a},y)\] \[=\mathcal{L}_{a,\mathcal{I}}\cdot\left(\underline{a}+\frac{1}{ \mathcal{L}_{a,\mathcal{I}}}\cdot\left|\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x, \underline{a},y)\right|\right)-\mathcal{L}_{a,\mathcal{I}}\cdot\underline{a}+ \inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x, \underline{a},y)\geq 0.\]
2. _Assumption_ 4.1 _(ii) and (iv), and the assumption that_ \(\mathbb{K}_{x}\) _and_ \(\mathbb{K}_{y}\) _are compact, ensure together with Remark_ 4.4 _(i) that_ \(V(x)\in\mathbb{R}\) _for all_ \(x\in\mathbb{K}_{x}\)_. Indeed, for any_ \(x\in\mathbb{K}_{x}\) _and_ \((a,y)\in\Gamma(x)\)_, we have_ \(f(x,a,y)\geq f(x,\underline{a},y)\geq\inf_{x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}}f(x,\underline{a},y)>-\infty\)_._
3. _Assumption_ 4.1 _(iv) and (v) ensure that the function_ \(f\) _is strictly increasing in_ \(a\in[\underline{a},\infty)\) _uniformly in_ \(x\in\mathbb{K}_{x}\)_,_ \(y\in\mathbb{K}_{y}\)_. Analogously, Assumption_ 4.2 _(iii) and (iv) ensure that the function_ \(\mathcal{I}\) _is strictly increasing in_ \(a\in[\underline{a},\infty)\) _uniformly in_ \(x\in\mathbb{K}_{x}\)_,_ \(y\in\mathbb{K}_{y}\)_,_ \(s\in\mathcal{S}\)_._
4. _Note that Assumption_ 4.3 _roughly speaking means that the geometry of_ \(\mathbb{K}_{y}\subseteq\mathbb{R}^{n_{y}}\) _is similar to a box. Indeed, if_ \(\mathbb{K}_{y}=\times_{i=1}^{n_{y}}[l_{i},u_{i}]\) _for some_ \(-\infty<l_{i}<u_{i}<\infty\)_,_ \(i=1,\ldots,n_{y}\)_, then one can choose_ \(0<r<\min_{i}\left(\frac{u_{i}-l_{i}}{2}\right)\)_,_ \(C_{y,\delta}:=\times_{i=1}^{n_{y}}[l_{i}+\delta,u_{i}-\delta]\subseteq\mathbb{K}_{y}\)_, and_ \(L_{r}:=\sqrt{n_{y}}\)_._
Our main result of this section establishes the existence of a _single_ neural network such that for any input \(x\in\mathbb{K}_{x}\) defining the (CSIP) in (4.1) the neural network outputs a _feasible_ solution which is \(\varepsilon\)_-optimal_.
**Theorem 4.5** (Single neural network provides corresponding feasible \(\varepsilon\)-optimizer for class of (CSIP)).: _Let Assumptions 4.1, 4.2, and 4.3 hold true. Then, for all \(\varepsilon>0\) there exists a neural network \(\mathcal{N}\mathcal{N}\in\mathfrak{N}_{n_{x},1+n_{y}}\) such that_
1. \(\mathcal{N}\mathcal{N}(x):=(\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{ N}_{y}(x))\in\Gamma(x)\) _for all_ \(x\in\mathbb{K}_{x}\)_,_
2. \(f\left(x,\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{N}_{y}(x)\right)-V( x)\leq\varepsilon\) _for all_ \(x\in\mathbb{K}_{x}\)_._
The proof of Theorem 4.5 is provided in the next section.
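In practice, a network with the properties of Theorem 4.5 can be searched for numerically: parametrize \(\mathcal{N}\mathcal{N}\), minimize the objective \(f\) over sampled inputs \(x\in\mathbb{K}_{x}\), and enforce the semi-infinite constraints \(\mathcal{I}_{s}(x,a,y)\geq 0\) on a finite sample of \(s\in\mathcal{S}\) via a penalty term. The following PyTorch sketch illustrates this idea only; it is not the implementation used for the experiments, and `f`, `payoff_I`, the samplers, the dimensions, and the penalty weight `lam` are all placeholder assumptions.

```python
import torch
import torch.nn as nn

# Problem sizes mirroring Section 2 with N = 5 options: x = (K, pi) has
# dimension n_x = 3N and the output (a, y) has dimension 1 + 2N.
N = 5
n_x, n_y = 3 * N, 2 * N

net = nn.Sequential(
    nn.Linear(n_x, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1 + n_y),          # outputs (a, y)
)

def f(x, a, y):
    # Placeholder objective (e.g. price of the strategy), increasing in a.
    return a + (x[:, :n_y] * y).sum(dim=1)

def payoff_I(s, x, a, y):
    # Placeholder for I_s(x, a, y); returns shape (batch, n_s).
    return a.unsqueeze(1) + y @ s.T - x[:, :1]

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 100.0                          # penalty weight, a tuning assumption
for step in range(5000):
    x = torch.rand(256, n_x)         # sampled inputs x in K_x
    out = net(x)
    a, y = out[:, 0], out[:, 1:]
    s = torch.rand(64, n_y)          # finite sample standing in for S
    violation = torch.relu(-payoff_I(s, x, a, y)).mean()  # hinge penalty
    loss = f(x, a, y).mean() + lam * violation
    opt.zero_grad(); loss.backward(); opt.step()
```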
## 5. Proofs and Auxiliary Results
In this section, we present the proofs of the main results from Section 2 and 4.
### Proofs of Section 2
The proof of Proposition 2.7 consists of verifying that the optimization problem (2.4) is included in the general (CSIP) introduced in Section 4. Then, applying Proposition 2.7 together with the universal approximation property of neural networks allows us to conclude Theorem 2.5 and Theorem 2.6.
Proof of Proposition 2.7.: We verify that the conditions imposed in Theorem 4.5 are satisfied under Assumption 2.3 with \(x\leftarrow(K,\pi)\), \(a\gets a\), \(y\gets h\), \(V\gets V\) in the notation of Theorem 4.5. To that end, note that Assumption 4.1 holds with \(\mathcal{L}_{a,f}=1\), and \(L_{f}=\max\{1,\;\overline{\pi}\}\sqrt{1+2N}\). Moreover, note that for all \(x\in\mathbb{K}_{x}\), \((a,0)\in([\underline{a},\infty)\cap[0,\infty))\times[0,\overline{H}]^{2N}\) satisfies \(\mathcal{I}_{S}(x,a,0)\geq 0\) for all \(S\in\mathcal{S}\). Hence, Assumption 4.2 holds with \(\mathcal{L}_{a,\mathcal{I}}=1\) and \(L_{\mathcal{I}}=\max\{1,\;2\overline{H}L_{\Psi},\;C_{\Psi}\}\sqrt{3N+1}\). Furthermore, for any \(0<r<1\) and any \(0<\delta<1\), Assumption 4.3 is satisfied with \(C_{y,\delta}=[\delta,\overline{H}-\delta]^{2N}\) and \(L_{r}=\sqrt{2N}\). Therefore, the result follows by Theorem 4.5.
Proof of Theorem 2.5.: Let \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\). Assume first there exists _model-free arbitrage_, i.e., we have \(V(K,\pi)<0\). Then, we choose \(\varepsilon\in\mathbb{R}\) with \(0<\varepsilon<-V(K,\pi)\) and obtain with Proposition 2.7 the existence of a neural network \(\mathcal{N}\mathcal{N}=(\mathcal{N}\mathcal{N}_{a},\mathcal{N}\mathcal{N}_{h})\in\mathfrak{N}_{3N,1+2N}\) with \(\mathcal{N}\mathcal{N}(K,\pi)\in\Gamma(K)\) and with \(f\left(\pi,\mathcal{N}\mathcal{N}_{a}(K,\pi),\mathcal{N}\mathcal{N}_{h}(K,\pi)\right)-V(K,\pi)\leq\varepsilon\) which implies
\[f\left(\pi,\mathcal{N}\mathcal{N}_{a}(K,\pi),\mathcal{N}\mathcal{N}_{h}(K,\pi )\right)\leq\varepsilon+V(K,\pi)<-V(K,\pi)+V(K,\pi)=0.\]
Conversely, if conditions (i) and (ii) hold, then the output of the neural network constitutes a _model-free arbitrage_ opportunity.
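To illustrate how the characterization of Theorem 2.5 can be used operationally, a trained network may serve as a one-shot detector: feed in the observed \((K,\pi)\) and flag the market as admitting model-free static arbitrage exactly when the price of the output strategy is negative. A hypothetical sketch, reusing the placeholder `net` and `f` from the training sketch above:

```python
import torch

def detects_arbitrage(net, f, K, pi):
    # K: tensor of the N strikes, pi: tensor of the 2N bid/ask prices,
    # so that x = (K, pi) plays the role of the input in K_x.
    x = torch.cat([K, pi]).unsqueeze(0)
    out = net(x)
    a, h = out[:, 0], out[:, 1:]
    # Theorem 2.5: the output constitutes model-free arbitrage iff f(...) < 0.
    return bool((f(x, a, h) < 0).item())
```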
Proof of Theorem 2.6.: Let \(\varepsilon>0\). By Proposition 2.7, there exists a neural network \(\mathcal{N}\mathcal{N}\in\mathfrak{N}_{3N,1+2N}\) such that for every \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\)
\[\mathcal{N}\mathcal{N}(K,\pi):=(\mathcal{N}\mathcal{N}_{a}(K,\pi),\mathcal{N} \mathcal{N}_{h}(K,\pi))\in\Gamma(K)\] \[\text{and}\;\;f\left(\pi,\mathcal{N}\mathcal{N}_{a}(K,\pi), \mathcal{N}\mathcal{N}_{h}(K,\pi)\right)-V(K,\pi)\leq\varepsilon-\delta.\]
Moreover, for every \((K,\pi)\in[0,\overline{K}]^{N}\times[0,\overline{\pi}]^{2N}\), if the market with respect to \((K,\pi)\) admits model-free static arbitrage of magnitude \(\varepsilon\), then by definition \(V(K,\pi)\leq-\varepsilon\). This implies that \(\mathcal{N}\mathcal{N}(K,\pi):=(\mathcal{N}\mathcal{N}_{a}(K,\pi),\mathcal{N} \mathcal{N}_{h}(K,\pi))\) provides a model-free static arbitrage strategy of magnitude \(\delta\).
If the market with respect to \((K,\pi)\) admits no model-free static arbitrage, then \(V(K,\pi)=0\) and hence
\[f\left(\pi,\mathcal{N}\mathcal{N}_{a}(K,\pi),\mathcal{N}\mathcal{N}_{h}(K,\pi) \right)\leq V(K,\pi)+\varepsilon-\delta=\varepsilon-\delta.\]
It remains to prove Theorem 4.5, which is our main _technical_ result. Its proof is provided in the next subsection.
### Proofs of Section 4
The main idea of the proof of Theorem 4.5 is to show that the correspondence of feasible \(\varepsilon\)-optimizers of the convex semi-infinite program (CSIP) defined in (4.1), as a function of the input \(x\in\mathbb{K}_{x}\) of the (CSIP), is non-empty, convex, closed, and lower hemicontinuous10, where the major difficulty lies in the establishment of the lower hemicontinuity. This then allows us to apply Michael's continuous selection theorem (Michael (1956)), which together with the universal approximation property of neural networks leads to the existence of a _single_ neural network which for any input \(x\in\mathbb{K}_{x}\) defining the (CSIP) in (4.1) outputs a _feasible_ solution which is \(\varepsilon\)-optimal. We highlight that no strict-convexity of the map \((a,y)\mapsto f(x,a,y)\) for any fixed \(x\) is assumed in (4.1), hence one cannot expect uniqueness of optimizers for the (CSIP), which in turn means that one cannot expect to have lower hemicontinuity of the correspondence of feasible true optimizers of the (CSIP) in (4.1).
Footnote 10: We refer to, e.g., (Aliprantis and Border, 2006, Chapter 17) as reference for the standard notions of lower/upper (hemi)continuity of correspondences.
#### 5.2.1. Auxiliary Results
Before reporting the proof of Theorem 4.5, we establish several auxiliary results which are necessary for the proof of the main result from Theorem 4.5.
For all of the auxiliary results from Section 5.2.1 we assume the validity of Assumption 4.1, Assumption 4.2 and Assumption 4.3. Moreover, from now on, we define the following quantity
\[\overline{a}^{\mathrm{UB}}:=\underline{a}+\frac{1}{\mathcal{L}_{a,\mathcal{I}}}\left|\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x,\underline{a},y)\right|\in[\underline{a},\infty). \tag{5.1}\]
**Lemma 5.1**.:
1. _Let_ \(a\in[\underline{a},\infty)\) _such that_ \(a\geq\overline{a}^{\mathrm{UB}}\)_. Then, we have that_ \(\mathcal{I}_{s}(x,a,y)\geq 0\) _for all_ \(x\in\mathbb{K}_{x}\)_,_ \(y\in\mathbb{K}_{y}\)_,_ \(s\in\mathcal{S}\)_._
2. _Let_ \(a\in[\underline{a},\infty)\) _such that_ \(a\geq\overline{a}^{\mathrm{UB}}+\frac{1}{\mathcal{L}_{a,f}}\)_. Then, for all_ \(x\in\mathbb{K}_{x}\) _and for all_ \(y\in\mathbb{K}_{y}\) _we have_ \(f(x,a,y)-V(x)\geq 1\)_._
Proof.:
1. Let \(a\in[\underline{a},\infty)\) such that \(a\geq\overline{a}^{\mathrm{UB}}\). Further, let \(x\in\mathbb{K}_{x}\), \(y\in\mathbb{K}_{y}\), \(s\in\mathcal{S}\). Then, we have by the monotonicity of \(\mathcal{I}_{s}\) in \(a\) (stated in Assumption 4.2 (iii)) that (5.2) \[\mathcal{I}_{s}(x,a,y)\geq\mathcal{I}_{s}(x,\overline{a}^{\mathrm{UB}},y)=\mathcal{I}_{s}(x,\overline{a}^{\mathrm{UB}},y)-\mathcal{I}_{s}(x,\underline{a},y)+\mathcal{I}_{s}(x,\underline{a},y).\] By using the above inequality (5.2), Assumption 4.2 (iv), and the definition of \(\overline{a}^{\mathrm{UB}}\) we then have \[\begin{split}\mathcal{I}_{s}(x,a,y)&\geq\mathcal{L}_{a,\mathcal{I}}\cdot\left(\overline{a}^{\mathrm{UB}}-\underline{a}\right)+\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x,\underline{a},y)\\ &=\mathcal{L}_{a,\mathcal{I}}\cdot\left(\underline{a}+\frac{1}{\mathcal{L}_{a,\mathcal{I}}}\left|\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x,\underline{a},y)\right|\right)-\mathcal{L}_{a,\mathcal{I}}\cdot\underline{a}+\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x,\underline{a},y)\\ &=\left|\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x,\underline{a},y)\right|+\inf_{\begin{subarray}{c}s\in\mathcal{S},\\ x\in\mathbb{K}_{x},y\in\mathbb{K}_{y}\end{subarray}}\mathcal{I}_{s}(x,\underline{a},y)\geq 0.\end{split}\]
2. First note that, by the assertion from (i), we have \((\overline{a}^{\mathrm{UB}},y)\in\Gamma(x)\) for all \(y\in\mathbb{K}_{y}\) and hence (5.3) \[f(x,\overline{a}^{\mathrm{UB}},y)\geq V(x)\text{ for all }x\in\mathbb{K}_{x},\ y\in \mathbb{K}_{y}.\] Then, as by assumption \(a\geq\overline{a}^{\mathrm{UB}}+\frac{1}{\mathcal{L}_{a,f}}\), we have for all \(x\in\mathbb{K}_{x}\) and for all \(y\in\mathbb{K}_{y}\) by Assumption 4.1 (iv), by Assumption 4.1 (v), and by (5.3), that \[f(x,a,y)-V(x) =f(x,a,y)-f(x,\overline{a}^{\mathrm{UB}},y)+f(x,\overline{a}^{ \mathrm{UB}},y)-V(x)\] \[\geq\mathcal{L}_{a,f}\cdot(a-\overline{a}^{\mathrm{UB}})+f(x, \overline{a}^{\mathrm{UB}},y)-V(x)\] \[\geq\mathcal{L}_{a,f}\cdot(a-\overline{a}^{\mathrm{UB}})\geq 1.\]
From now on, let
\[\overline{a}:=\overline{a}^{\mathrm{UB}}+\frac{1}{\mathcal{L}_{a,f}}+2, \tag{5.4}\]
where \(\overline{a}^{\mathrm{UB}}\) is defined in (5.1). Moreover, we define the correspondence
\[\mathbb{K}_{x}\ni x\twoheadrightarrow\Gamma_{\overline{a}}(x):=\left\{(a,y)\in\Gamma(x)\ |\ a\leq\overline{a}\right\}=\left\{(a,y)\in[\underline{a},\overline{a}]\times\mathbb{K}_{y}\ |\ \mathcal{I}_{s}(x,a,y)\geq 0\text{ for all }s\in\mathcal{S}\right\}. \tag{5.5}\]
**Lemma 5.2**.: _Let \(\overline{a}\) be defined in (5.4). Moreover, let \(\mathbb{K}_{x}\ni x\mapsto\Gamma_{\overline{a}}(x)\) be defined in (5.5). Then, for all \(x\in\mathbb{K}_{x}\), \(\Gamma_{\overline{a}}(x)\) is nonempty, and for all \(x\in\mathbb{K}_{x}\)_
\[V_{\overline{a}}(x):=\inf_{(a,y)\in\Gamma_{\overline{a}}(x)}f(x,a,y)=\inf_{(a,y)\in\Gamma(x)}f(x,a,y)=V(x).\]
Proof.: By Remark 4.4 (i) we see that \(\Gamma_{\overline{a}}(x)\neq\emptyset\) for all \(x\in\mathbb{K}_{x}\). Moreover, as \(\Gamma_{\overline{a}}(x)\subseteq\Gamma(x)\), we have \(V_{\overline{a}}(x)\geq V(x)\) for every \(x\in\mathbb{K}_{x}\). To see that \(V_{\overline{a}}(x)\leq V(x)\) for every \(x\in\mathbb{K}_{x}\), fix any \(x\in\mathbb{K}_{x}\) and let \((a,y)\in\Gamma(x)\). By Remark 4.4 (i), we have \((\min\{a,\overline{a}^{\mathrm{UB}}\},y)\in\Gamma_{\overline{a}}(x)\). Hence, \(f(x,a,y)\geq f(x,\min\{a,\overline{a}^{\mathrm{UB}}\},y)\geq\inf_{(\widetilde{a},\widetilde{y})\in\Gamma_{\overline{a}}(x)}f(x,\widetilde{a},\widetilde{y})\). Since \((a,y)\in\Gamma(x)\) was arbitrary we obtain the desired result.
**Lemma 5.3**.: _The map \(\mathbb{K}_{x}\ni x\twoheadrightarrow\Gamma_{\overline{a}}(x)\) defined in (5.5) is a non-empty, compact-valued, convex-valued, and continuous correspondence._
Proof.: The non-emptiness follows from Remark 4.4.
Let \(x\in\mathbb{K}_{x}\). Consider a sequence \((a^{(n)},y^{(n)})_{n\in\mathbb{N}}\subseteq\Gamma_{\overline{a}}(x)\). Then, by the compactness of \([\underline{a},\overline{a}]\times\mathbb{K}_{y}\), there exists a subsequence \((a^{(n_{k})},y^{(n_{k})})_{k\in\mathbb{N}}\subseteq\Gamma_{\overline{a}}(x)\) such that \((a^{(n_{k})},y^{(n_{k})})\to(a,y)\) as \(k\to\infty\) for some \((a,y)\in[\underline{a},\overline{a}]\times\mathbb{K}_{y}\). The continuity of \([\underline{a},\overline{a}]\times\mathbb{K}_{y}\ni(a,y)\mapsto\mathcal{I}_{s }(x,a,y)\), which is ensured by Assumption 4.2 (i), then implies that \(0\leq\lim_{k\to\infty}\mathcal{I}_{s}(x,a^{(n_{k})},y^{(n_{k})})=\mathcal{I}_ {s}(x,a,y)\). Hence, \(\Gamma_{\overline{a}}(x)\) is compact.
Let \(x\in\mathbb{K}_{x}\), and let \((a,y),\ (a^{\prime},y^{\prime})\in\Gamma_{\overline{a}}(x)\). Then, it follows for all \(t\in[0,1]\) by Assumption 4.2 (ii) that
\[\mathcal{I}_{s}\left(x,\ t\cdot a+(1-t)a^{\prime},\ ty+(1-t)\cdot y^{\prime} \right)\geq t\cdot\mathcal{I}_{s}(x,a,y)+(1-t)\cdot\mathcal{I}_{s}(x,a^{ \prime},y^{\prime})\geq 0\text{ for all }s\in\mathcal{S}.\]
Hence, the convexity of (5.5) follows.
It remains to show the continuity, i.e., that the map from (5.5) is lower hemicontinuous and upper hemicontinuous.
Let \((x^{(n)})_{n\in\mathbb{N}}\subseteq\mathbb{K}_{x}\) with \(\lim_{n\to\infty}x^{(n)}=x\in\mathbb{K}_{x}\) and let \((a,y)\in\Gamma_{\overline{a}}(x)\subseteq[\underline{a},\overline{a}]\times\mathbb{K}_{y}\). To show the lower-hemicontinuity, according to the characterization provided, e.g., in (Aliprantis and Border, 2006, Theorem 17.21), we need to prove the existence of a subsequence \((x^{(n_{k})})_{k\in\mathbb{N}}\) and elements \((a^{(k)},y^{(k)})\in\Gamma_{\overline{a}}(x^{(n_{k})})\) for each \(k\in\mathbb{N}\) with \(\lim_{k\to\infty}(a^{(k)},y^{(k)})=(a,y)\).
First assume that \(a\leq\overline{a}^{\mathrm{UB}}\). Since \(\lim_{n\to\infty}x^{(n)}=x\), there exists, by definition of \(\overline{a}\), some \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\) we have
\[a^{(n)}:=a+\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\cdot\left\|x^{(n )}-x\right\|\leq\overline{a}. \tag{5.6}\]
Since by Assumption 4.2 (iii) the map \([\underline{a},\overline{a}]\ni a\mapsto\mathcal{I}_{s}(x,a,y)\) is monotone for all \(x\in\mathbb{K}_{x},y\in\mathbb{K}_{y},s\in\mathcal{S}\), with Assumption 4.2 (iv), and with the Lipschitz-property of \(\mathcal{I}_{s}\) from Assumption 4.2 (i), we have for all \(s\in\mathcal{S}\) and for all \(n\in\mathbb{N}\) that
\[\begin{split}\mathcal{I}_{s}\left(x^{(n)},\ a^{(n)},\ y\right)&=\mathcal{I}_{s}\left(x^{(n)},\ a+\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\cdot\left\|x^{(n)}-x\right\|,\ y\right)-\mathcal{I}_{s}\left(x^{(n)},\ a,\ y\right)+\mathcal{I}_{s}\left(x^{(n)},\ a,\ y\right)\\ &\geq\mathcal{L}_{a,\mathcal{I}}\cdot\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\cdot\left\|x^{(n)}-x\right\|+\mathcal{I}_{s}\left(x^{(n)},\ a,\ y\right)\\ &=L_{\mathcal{I}}\left\|x^{(n)}-x\right\|+\mathcal{I}_{s}\left(x^{(n)},\ a,\ y\right)-\mathcal{I}_{s}\left(x,\ a,\ y\right)+\mathcal{I}_{s}\left(x,\ a,\ y\right)\\ &\geq L_{\mathcal{I}}\left\|x^{(n)}-x\right\|-L_{\mathcal{I}}\left\|x^{(n)}-x\right\|+\mathcal{I}_{s}\left(x,\ a,\ y\right)\geq 0,\end{split}\]
where the last inequality follows since \((a,y)\in\Gamma_{\overline{a}}(x)\). Thus, we have \((a^{(n)},y)\in\Gamma_{\overline{a}}(x^{(n)})\) for all \(n\geq n_{0}\) as well as by (5.6) that \(\lim_{n\to\infty}(a^{(n)},y)=(a,y)\). Hence lower-hemicontinuity follows for the case \(a\leq\overline{a}^{\mathrm{UB}}\).
Now we consider the case that \(a>\overline{a}^{\mathrm{UB}}\). Note that in this case \(\mathcal{I}_{s}(x,a,y)>0\) for all \(s\in\mathcal{S}\) due to
the strict monotonicity of \(\mathcal{I}_{s}\) in \(a\) (see Remark 4.4 (iii)) and by Remark 4.4 (i). Hence, by the continuity of \(\mathcal{I}_{s}\), there exists some \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\) we have \(\mathcal{I}_{s}(x^{(n)},a,y)>0\) implying that \((a,y)\in\Gamma_{\overline{a}}(x^{(n)})\) for all \(n\geq n_{0}\). Thus, we conclude with (Aliprantis and Border, 2006, Theorem 17.21) the lower hemicontinuity of the map from (5.5) also for the case \(a>\overline{a}^{\mathrm{UB}}\).
It remains to show the upper hemicontinuity. To this end, let \((x^{(n)},a^{(n)},y^{(n)})\in\operatorname{Gr}\Gamma_{\overline{a}}\) with \(\lim_{n\to\infty}x^{(n)}=x\). We apply the characterization of upper hemicontinuity provided, e.g., in (Aliprantis and Border, 2006, Theorem 17.20), and therefore we need to show the existence of a subsequence \((a^{(n_{k})},y^{(n_{k})})_{k\in\mathbb{N}}\) with \(\lim_{k\to\infty}(a^{(n_{k})},y^{(n_{k})})=(a,y)\in\Gamma_{\overline{a}}(x)\).
As \((a^{(n)},y^{(n)})_{n\in\mathbb{N}}\subseteq[\underline{a},\overline{a}]\times \mathbb{K}_{y}\) is a sequence defined on a compact space, there exists a subsequence \((a^{(n_{k})},y^{(n_{k})})_{k\in\mathbb{N}}\) with \(\lim_{k\to\infty}(a^{(n_{k})},y^{(n_{k})})=(a,y)\in[\underline{a},\overline{a}] \times\mathbb{K}_{y}\). Since \(\mathcal{I}_{s}(x^{(n_{k})},a^{(n_{k})},y^{(n_{k})})\geq 0\) for all \(k\in\mathbb{N}\) as \(\left(x^{(n_{k})},a^{(n_{k})},y^{(n_{k})}\right)\in\operatorname{Gr}\Gamma_{ \overline{a}}\), we obtain by the continuity of \(\mathcal{I}_{s}\) that \(\mathcal{I}_{s}(x,a,y)\geq 0\). This means \((a,y)\in\Gamma_{\overline{a}}(x)\).
**Lemma 5.4**.: _For all \(\varepsilon\in(0,1)\) the correspondence_
\[\mathbb{K}_{x}\ni x\twoheadrightarrow\mathcal{M}_{\varepsilon}(x):=\{(a,y) \in\Gamma_{\overline{a}}(x)\ |\ f(x,a,y)-V_{\overline{a}}(x)<\varepsilon\} \tag{5.7}\]
_is non-empty, convex-valued, and lower hemicontinuous._
Proof.: Let \(\varepsilon\in(0,1)\). The non-emptiness of \(\mathcal{M}_{\varepsilon}(x)\) for each \(x\in\mathbb{K}_{x}\) follows by definition and by Remark 4.4. To show the convexity of \(\mathcal{M}_{\varepsilon}(x)\) for each \(x\in\mathbb{K}_{x}\), fix any \(x\in\mathbb{K}_{x}\) and let \((a,y),\ (\widetilde{a},\widetilde{y})\in\mathcal{M}_{\varepsilon}(x)\) and \(t\in[0,1]\). Then by Lemma 5.3 implying that \(\Gamma_{\overline{a}}(x)\) is convex, we have \(t\cdot(a,y)+(1-t)\cdot(\widetilde{a},\widetilde{y})\in\Gamma_{\overline{a}}(x)\). Moreover, by Assumption 4.1 (iii) ensuring that \([\underline{a},\overline{a}]\times\mathbb{K}_{y}\ni(a,y)\mapsto f(x,a,y)\) is convex, we have
\[\begin{split}& f\left(x,t\cdot a+(1-t)\cdot\widetilde{a},t\cdot y+(1-t)\cdot\widetilde{y}\right)-V_{\overline{a}}(x)\\ &\leq t\cdot\left(f\left(x,a,y\right)-V_{\overline{a}}(x)\right)+(1-t)\cdot\left(f\left(x,\widetilde{a},\widetilde{y}\right)-V_{\overline{a}}(x)\right)<t\cdot\varepsilon+(1-t)\cdot\varepsilon=\varepsilon,\end{split}\]
from which we conclude the convexity of \(\mathcal{M}_{\varepsilon}(x)\). To show the lower hemicontinuity of (5.7) let \((x^{(n)})_{n\in\mathbb{N}}\subseteq\mathbb{K}_{x}\) with \(\lim_{n\to\infty}x^{(n)}=x\in\mathbb{K}_{x}\), and let \((a,y)\in\mathcal{M}_{\varepsilon}(x)\). We apply the characterization of lower hemicontinuity from (Aliprantis and Border, 2006, Theorem 17.21) and therefore aim at showing that there exists a subsequence \((x^{(n_{k})})_{k\in\mathbb{N}}\) and elements \((a^{(k)},y^{(k)})\in\mathcal{M}_{\varepsilon}(x^{(n_{k})})\) for each \(k\in\mathbb{N}\) such that \(\lim_{k\to\infty}(a^{(k)},y^{(k)})=(a,y)\).
By Lemma 5.3 the correspondence \(\mathbb{K}_{x}\ni x\twoheadrightarrow\Gamma_{\overline{a}}(x)\) is non-empty, compact-valued, continuous, and by Assumption 4.1 (ii), the map \(\mathbb{K}_{x}\times[\underline{a},\overline{a}]\times\mathbb{K}_{y}\ni(x,a, y)\mapsto f(x,a,y)\) is continuous. Hence, Berge's maximum theorem (see Berge (1959) or (Aliprantis and Border, 2006, Theorem 17.31)) is applicable.
We then obtain by Berge's maximum theorem that the map
\[\mathbb{K}_{x}\ni x\mapsto V_{\overline{a}}(x):=\inf_{(a,y)\in\Gamma_{ \overline{a}}(x)}f(x,a,y)\]
is continuous. Therefore, as \((a,y)\in\mathcal{M}_{\varepsilon}(x)\), and since both \(f\) and \(V_{\overline{a}}\) are continuous, there exists some \(\gamma\in(0,1)\) such that for all \((x^{\prime},a^{\prime},y^{\prime})\) with \((x^{\prime},a^{\prime},y^{\prime})\in\mathcal{B}_{\gamma}(x,a,y)\subseteq\mathbb{K}_{x}\times[\underline{a},\overline{a}]\times\mathbb{K}_{y}\), it holds
\[f(x^{\prime},a^{\prime},y^{\prime})-V_{\overline{a}}(x^{\prime})<\varepsilon. \tag{5.8}\]
Moreover, as \(\lim_{n\to\infty}x^{(n)}=x\), there exist some \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\) we have
\[\sqrt{\left\|x^{(n)}-x\right\|^{2}+\left(\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\left\|x^{(n)}-x\right\|\right)^{2}}\leq\gamma. \tag{5.9}\]
Moreover, since \((a,y)\in\mathcal{M}_{\varepsilon}(x)\) and \(\varepsilon\in(0,1)\), we have by Lemma 5.1 (ii) that \(a<\overline{a}^{\mathrm{UB}}+\frac{1}{\mathcal{L}_{a,f}}\). Hence, by (5.9) and by definition of \(\overline{a}\) we have for all \(n\geq n_{0}\) also that
\[a+\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\left\|x^{(n)}-x\right\| \leq a+\gamma\leq\overline{a}. \tag{5.10}\]
Note also that for \(n\geq n_{0}\) we have by Assumption 4.2 (iv) and Assumption 4.2 (i) for all \(s\in\mathcal{S}\) the following inequality
\[\mathcal{I}_{s}\left(x^{(n)},\ a+\frac{L_{\mathcal{I}}}{\mathcal{L }_{a,\mathcal{I}}}\left\|x^{(n)}-x\right\|,\ y\right) =\mathcal{I}_{s}\left(x^{(n)},\ a+\frac{L_{\mathcal{I}}}{\mathcal{ L}_{a,\mathcal{I}}}\left\|x^{(n)}-x\right\|,\ y\right)-\mathcal{I}_{s}\left(x^{(n)},\ a,\ y\right)\] \[\quad+\mathcal{I}_{s}\left(x^{(n)},\ a,\ y\right)\] \[\geq\mathcal{L}_{a,\mathcal{I}}\frac{L_{\mathcal{I}}}{\mathcal{ L}_{a,\mathcal{I}}}\left\|x^{(n)}-x\right\|+\mathcal{I}_{s}\left(x^{(n)},\ a,\ y\right)\] \[=L_{\mathcal{I}}\left\|x^{(n)}-x\right\|+\mathcal{I}_{s}\left(x ^{(n)},\ a,\ y\right)-\mathcal{I}_{s}\left(x,\ a,\ y\right)+\mathcal{I}_{s} \left(x,\ a,\ y\right)\] \[\geq L_{\mathcal{I}}\left\|x^{(n)}-x\right\|-L_{\mathcal{I}} \left\|x^{(n)}-x\right\|+\mathcal{I}_{s}\left(x,\ a,\ y\right)\geq 0, \tag{5.11}\]
since \((a,y)\in\Gamma_{\overline{a}}(x)\). Hence, (5.10) and (5.11) together show that
\[\left(a+\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\left\|x^{(n)}-x \right\|,\ y\right)\in\Gamma_{\overline{a}}(x^{(n)})\ \text{for all}\ n\geq n_{0}. \tag{5.12}\]
By (5.9) we have \(\left(x^{(n)},a+\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\left\|x^{ (n)}-x\right\|,\ y\right)\in\mathcal{B}_{\gamma}(x,a,y)\) for all \(n\geq n_{0}\). Thus, it follows with (5.8) and (5.12) that
\[\left(a+\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\left\|x^{(n)}-x \right\|,\ y\right)\in\mathcal{M}_{\varepsilon}\left(x^{(n)}\right)\ \text{for all}\ n\geq n_{0},\]
proving the lower hemicontinuity of (5.7), by applying the characterization of lower hemicontinuity from (Aliprantis and Border, 2006, Theorem 17.21) to the subsequences \(\left(x^{(n)}\right)_{n\in\mathbb{N},\,n\geq n_{0}}\) and \(\left(a+\frac{L_{\mathcal{I}}}{\mathcal{L}_{a,\mathcal{I}}}\left\|x^{(n)}-x\right\|,\ y\right)_{n\in\mathbb{N},\,n\geq n_{0}}\).
**Corollary 5.5**.: _For all \(\varepsilon\in(0,1)\) the correspondence_
\[\mathbb{K}_{x}\ni x\twoheadrightarrow\overline{\mathcal{M}_{\varepsilon}(x)}: =\mathrm{cl}\left(\mathcal{M}_{\varepsilon}(x)\right) \tag{5.13}\]
_is nonempty, convex, closed, lower hemicontinuous, and satisfies_
\[\overline{\mathcal{M}_{\varepsilon}(x)}\subseteq\left\{(a,y)\in\Gamma_{ \overline{a}}(x)\ |\ f(x,a,y)-V_{\overline{a}}(x)\leq\varepsilon\right\}. \tag{5.14}\]
Proof.: The non-emptiness and convexity of the map defined in (5.13) both follow from Lemma 5.4. That the map is closed is a consequence of the definition of a closure of a set. The lower-hemicontinuity also follows from Lemma 5.4 and from (Aliprantis and Border, 2006, Theorem 17.22 (1), p. 566) which ensures that the closure of a lower hemicontinuous map is again lower hemicontinuous. The relation (5.14) follows as the map \(\mathbb{K}_{x}\times[\underline{a},\overline{a}]\times\mathbb{K}_{y}\ni(x,a, y)\mapsto f(x,a,y)\) is continuous by Assumption 4.1 (ii).
**Corollary 5.6**.: _For all \(\varepsilon\in(0,1)\) there exists a continuous map \(\mathbb{K}_{x}\ni x\mapsto(a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x))\in \Gamma_{\overline{a}}(x)\) satisfying both_
* \(a^{*,\varepsilon}(x)\leq\overline{a}^{\mathrm{UB}}+\frac{1}{\mathcal{L}_{a,f}}\) _for all_ \(x\in\mathbb{K}_{x}\)_,_
* \(f\left(x,\ a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x)\right)-V_{\overline{a}}(x)\leq\varepsilon.\)__
Proof.: Corollary 5.5 ensures that the requirements for an application of the Michael selection theorem (see Michael (1956) or (Aliprantis and Border, 2006, Theorem 17.66)) are fulfilled. By the Michael selection theorem we then obtain a continuous selector \(\mathbb{K}_{x}\ni x\mapsto(a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x))\in \mathrm{cl}\left(\mathcal{M}_{\varepsilon}(x)\right)\subseteq\Gamma_{ \overline{a}}(x)\) implying, by definition of \(\mathcal{M}_{\varepsilon}(x)\), that (ii) is fulfilled.
Assume now that (i) does not hold, i.e., that we have \(a^{*,\varepsilon}(x)>\overline{a}^{\mathrm{UB}}+\frac{1}{\mathcal{L}_{a,f}}\). This, however, by Lemma 5.1 (ii), contradicts (ii), which concludes the proof.
Now, for any \(0<\delta<r\) recall the definition of the set \(C_{y,\delta}\subseteq\mathbb{K}_{y}\) from Assumption 4.3.
**Lemma 5.7**.: _For all \(\delta\in(0,r)\), the map_
\[\mathbb{K}_{x}\ni x\mapsto\left(a_{\delta}^{*,\varepsilon}(x),\ y_{\delta}^{*,\varepsilon}(x)\right):=\operatorname*{argmin}_{(a,y)\in[\underline{a}+\delta,\overline{a}-\delta]\times C_{y,\delta}}\left\|(a,y)-(a^{*,\varepsilon}(x),y^{*,\varepsilon}(x))\right\|^{2}\]
_is continuous._
Proof.: Note that \((x,a,y)\mapsto\left\|(a,y)-(a^{*,\varepsilon}(x),y^{*,\varepsilon}(x))\right\|^{2}\) is continuous by Corollary 5.6. Moreover, the single-valued map is well-defined as the projection of the point \(\left(a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x)\right)\) onto the compact, convex set \([\underline{a}+\delta,\overline{a}-\delta]\times C_{y,\delta}\). The continuity now follows by, e.g., Berge's maximum theorem (Aliprantis and Border, 2006, Theorem 17.31) together with (Aliprantis and Border, 2006, Lemma 17.6).
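When \(\mathbb{K}_{y}\) is a box as in Remark 4.4 (iv), the projection in Lemma 5.7 is simply componentwise clipping, since the Euclidean projection onto a product of intervals decomposes coordinatewise. A minimal NumPy illustration (not part of the proof; the function name and interface are ours):

```python
import numpy as np

def project_box(a, y, a_lo, a_hi, y_lo, y_hi, delta):
    # Euclidean projection of (a, y) onto [a_lo+delta, a_hi-delta] x C_{y,delta};
    # for a product of intervals the projection is componentwise clipping.
    a_proj = float(np.clip(a, a_lo + delta, a_hi - delta))
    y_proj = np.clip(y, y_lo + delta, y_hi - delta)
    return a_proj, y_proj

# Example: a point slightly outside the shrunk box is clipped back in.
print(project_box(2.7, np.array([1.2, -0.3]),
                  a_lo=0.0, a_hi=2.5, y_lo=0.0, y_hi=1.0, delta=0.1))
# (2.4, array([0.9, 0.1]))
```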
#### 5.2.2. Proof of Theorem 4.5
In Section 5.2.1 we have established all auxiliary results that allow us now to report the proof of Theorem 4.5.
Proof of Theorem 4.5.: Without loss of generality let \(\varepsilon\in(0,1)\), else we substitute \(\varepsilon\) by \(\overline{\varepsilon}:=\frac{\varepsilon}{1+\varepsilon}\). By Corollary 5.6 (applied, by abuse of notation, with \(\varepsilon\leftarrow\varepsilon/2\) in the notation of Corollary 5.6) there exists some continuous map \(\mathbb{K}_{x}\ni x\mapsto\left(a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x)\right)\in\Gamma_{\overline{a}}(x)\) satisfying for all \(x\in\mathbb{K}_{x}\) that
\[a^{*,\varepsilon}(x)\leq\overline{a}^{\mathrm{UB}}+\frac{1}{\mathcal{L}_{a,f}} \tag{5.15}\]
and such that
\[f\left(x,\ a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x)\right)-V_{\overline{a}}(x)\leq\varepsilon/2. \tag{5.16}\]
We recall \(r\in(0,1)\) from Assumption 4.3 and define
\[\delta_{0}:=\frac{\varepsilon\min\{\mathcal{L}_{a,\mathcal{I}},1\}}{8\max\left\{L_{\mathcal{I}},\mathcal{L}_{a,f}\right\}\sqrt{1+L_{r}^{2}}\,L_{f}}\cdot r\in(0,r). \tag{5.17}\]
Note that by definition of the projection from Lemma 5.7 with respect to \([\underline{a}+\delta_{0},\overline{a}-\delta_{0}]\times C_{y,\delta_{0}}\), and by Assumption 4.3 we have
\[\left\|(x,a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x))-\left(x,a^{*, \varepsilon}_{\delta_{0}}(x),\ y^{*,\varepsilon}_{\delta_{0}}(x)\right) \right\|\leq\sqrt{\delta_{0}^{2}+L_{r}^{2}\delta_{0}^{2}}=\delta_{0}\sqrt{1+L_ {r}^{2}}\text{ for all }x\in\mathbb{K}_{x}. \tag{5.18}\]
Hence, for all \(x\in\mathbb{K}_{x}\), by using the Lipschitz-continuity of \(f\) from Assumption 4.1 (i), by (5.18), and by the definition of \(\delta_{0}\) in (5.17), we have
\[\begin{split}\left|f\left(x,a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x)\right)-f\left(x,a^{*,\varepsilon}_{\delta_{0}}(x),\ y^{*,\varepsilon}_{\delta_{0}}(x)\right)\right|&\leq L_{f}\left\|(x,a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x))-\left(x,a^{*,\varepsilon}_{\delta_{0}}(x),\ y^{*,\varepsilon}_{\delta_{0}}(x)\right)\right\|\\ &\leq L_{f}\cdot\delta_{0}\sqrt{1+L_{r}^{2}}=\frac{\varepsilon\min\{\mathcal{L}_{a,\mathcal{I}},1\}}{8\max\left\{L_{\mathcal{I}},\mathcal{L}_{a,f}\right\}}\cdot r\leq\frac{\varepsilon}{8}.\end{split}\tag{5.19}\]
By Corollary 5.5, we have \((x,a^{*,\varepsilon}(x),y^{*,\varepsilon}(x))\in\Gamma_{\overline{a}}(x)\), and in particular, \(\mathcal{I}_{s}(x,a^{*,\varepsilon}(x),y^{*,\varepsilon}(x))\geq 0\) for all \(s\in\mathcal{S}\) and all \(x\in\mathbb{K}_{x}\). This implies by the Lipschitz-continuity of \(\mathcal{I}_{s}\) (Assumption 4.2 (i)), by using (5.18), and the definition of \(\delta_{0}\), that
\[\begin{split}\mathcal{I}_{s}(x,\ a^{*,\varepsilon}_{\delta_{0}}(x),\ y^{*,\varepsilon}_{\delta_{0}}(x))&=\mathcal{I}_{s}(x,\ a^{*,\varepsilon}_{\delta_{0}}(x),\ y^{*,\varepsilon}_{\delta_{0}}(x))-\mathcal{I}_{s}(x,\ a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x))+\mathcal{I}_{s}(x,\ a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x))\\ &\geq-L_{\mathcal{I}}\,\delta_{0}\sqrt{1+L_{r}^{2}}\\ &=-r\cdot\frac{\min\{\mathcal{L}_{a,\mathcal{I}},1\}}{L_{f}}\cdot\frac{L_{\mathcal{I}}}{\max\left\{L_{\mathcal{I}},\mathcal{L}_{a,f}\right\}}\cdot\frac{\varepsilon}{8}\geq-\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{8}.\end{split}\tag{5.20}\]
By the universal approximation theorem (Proposition 2.2) and Lemma 5.7 there exists a neural network \(\widetilde{\mathcal{NN}}:=\left(\widetilde{\mathcal{NN}}_{a},\widetilde{\mathcal{NN}}_{y}\right)\in\mathfrak{N}_{n_{x},1+n_{y}}\) such that
\[\sup_{x\in\mathbb{K}_{x}}\left\|\left(a^{*,\varepsilon}_{\delta_{0}}(x),\ y^{*,\varepsilon}_{\delta_{0}}(x)\right)-\left(\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x)\right)\right\|<\delta_{0}. \tag{5.21}\]
Moreover, we have by (5.20) and (5.21) for all \(x\in\mathbb{K}_{x}\), \(s\in\mathcal{S}\) that
\[\begin{split}\mathcal{I}_{s}(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x))&=\mathcal{I}_{s}(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x))-\mathcal{I}_{s}(x,a_{\delta_{0}}^{*,\varepsilon}(x),\ y_{\delta_{0}}^{*,\varepsilon}(x))+\mathcal{I}_{s}(x,a_{\delta_{0}}^{*,\varepsilon}(x),\ y_{\delta_{0}}^{*,\varepsilon}(x))\\ &\geq-L_{\mathcal{I}}\delta_{0}+\mathcal{I}_{s}(x,a_{\delta_{0}}^{*,\varepsilon}(x),\ y_{\delta_{0}}^{*,\varepsilon}(x))\\ &\geq-L_{\mathcal{I}}\,\frac{\varepsilon\min\{\mathcal{L}_{a,\mathcal{I}},1\}}{8\max\left\{L_{\mathcal{I}},\mathcal{L}_{a,f}\right\}\sqrt{1+L_{r}^{2}}\,L_{f}}\cdot r-\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{8}\\ &\geq-\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{8}-\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{8}=-\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{4}.\end{split}\tag{5.22}\]
In addition, we have by (5.21) and Assumption 4.3 that \(\left(\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x)\right)\in[\underline{a},\overline{a}]\times\mathbb{K}_{y}\) for all \(x\in\mathbb{K}_{x}\). Furthermore, for all \(x\in\mathbb{K}_{x}\)
\[\left|f\left(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x)\right)-f\left(x,a_{\delta_{0}}^{*,\varepsilon}(x),\ y_{\delta_{0}}^{*,\varepsilon}(x)\right)\right|\leq L_{f}\delta_{0}=\frac{\varepsilon\min\{\mathcal{L}_{a,\mathcal{I}},1\}}{8\max\left\{L_{\mathcal{I}},\mathcal{L}_{a,f}\right\}\sqrt{1+L_{r}^{2}}}\cdot r\leq\frac{\varepsilon}{8}. \tag{5.23}\]
Next, define a neural network \(\mathcal{N}\mathcal{N}:=(\mathcal{N}\mathcal{N}_{a},\mathcal{N}\mathcal{N}_{y}) \in\mathfrak{N}_{n_{x},1+n_{y}}\) by
\[(\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{N}_{y}(x)):=\left(\widetilde{\mathcal{NN}}_{a}(x)+\frac{1}{L_{f}}\frac{\varepsilon}{4},\ \widetilde{\mathcal{NN}}_{y}(x)\right),\qquad x\in\mathbb{R}^{n_{x}}. \tag{5.24}\]
Then, for all \(x\in\mathbb{K}_{x}\), by using (5.21), (5.18), (5.17), Corollary 5.6, and the definition of \(\overline{a}\) in (5.4), we obtain
\[\begin{split}\mathcal{N}\mathcal{N}_{a}(x)&=\widetilde{\mathcal{NN}}_{a}(x)+\frac{1}{L_{f}}\frac{\varepsilon}{4}\leq a_{\delta_{0}}^{*,\varepsilon}(x)+\delta_{0}+\frac{1}{L_{f}}\frac{\varepsilon}{4}\leq a^{*,\varepsilon}(x)+\delta_{0}\sqrt{1+L_{r}^{2}}+\delta_{0}+\frac{1}{L_{f}}\frac{\varepsilon}{4}\\ &\leq\overline{a}^{\rm UB}+\frac{1}{\mathcal{L}_{a,f}}+\delta_{0}\sqrt{1+L_{r}^{2}}+\delta_{0}+\frac{1}{L_{f}}\frac{\varepsilon}{4}\\ &=\overline{a}^{\rm UB}+\frac{1}{\mathcal{L}_{a,f}}+\frac{\varepsilon\min\{\mathcal{L}_{a,\mathcal{I}},1\}}{8\max\left\{L_{\mathcal{I}},\mathcal{L}_{a,f}\right\}L_{f}}\cdot r+\delta_{0}+\frac{1}{L_{f}}\frac{\varepsilon}{4}\\ &\leq\overline{a}^{\rm UB}+\frac{1}{\mathcal{L}_{a,f}}+\frac{\varepsilon}{8}+\delta_{0}+\frac{\varepsilon}{4}\leq\overline{a}^{\rm UB}+\frac{1}{\mathcal{L}_{a,f}}+2\leq\overline{a}.\end{split}\tag{5.25}\]
Hence, we conclude by (5.24) and (5.25) that
\[(\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{N}_{y}(x))\in[\underline{a},\overline{a}]\times\mathbb{K}_{y}\text{ for all }x\in\mathbb{K}_{x}. \tag{5.29}\]
Moreover, by (5.24) and (5.22) we have for all \(x\in\mathbb{K}_{x}\) and \(s\in\mathcal{S}\) that
\[\begin{split}\mathcal{I}_{s}(x,\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{N}_{y}(x))&=\mathcal{I}_{s}(x,\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{N}_{y}(x))-\mathcal{I}_{s}(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x))\\ &\qquad+\mathcal{I}_{s}(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x))\\ &\geq\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{4}+\mathcal{I}_{s}(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x))\\ &\geq\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{4}-\frac{\mathcal{L}_{a,\mathcal{I}}}{L_{f}}\frac{\varepsilon}{4}=0.\end{split}\tag{5.30}\]
Hence, we see that
\[(\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{N}_{y}(x))\in\Gamma_{ \overline{a}}(x)\subseteq\Gamma(x)\text{ for all }x\in\mathbb{K}_{x}. \tag{5.31}\]
Furthermore, by (5.24), we have for all \(x\in\mathbb{K}_{x}\) that
\[\left|f\left(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x)\right)-f\left(x,\mathcal{N}\mathcal{N}_{a}(x),\mathcal{N}\mathcal{N}_{y}(x)\right)\right|\leq L_{f}\cdot\frac{1}{L_{f}}\frac{\varepsilon}{4}=\frac{\varepsilon}{4}. \tag{5.32}\]
Therefore, we conclude by Lemma 5.2, (5.16), (5.19), (5.23), and (5.32) that for all \(x\in\mathbb{K}_{x}\)
\[f\left(x,\mathcal{NN}_{a}(x),\mathcal{NN}_{y}(x)\right)-V(x)= f\left(x,\mathcal{NN}_{a}(x),\mathcal{NN}_{y}(x)\right)-V_{\overline{a}}(x)\] \[= \left(f\left(x,\mathcal{NN}_{a}(x),\mathcal{NN}_{y}(x)\right)-f \left(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{\mathcal{NN}}_{y}(x) \right)\right)\] \[+\left(f\left(x,\widetilde{\mathcal{NN}}_{a}(x),\widetilde{ \mathcal{NN}}_{y}(x)\right)-f\left(x,a_{\delta_{0}}^{*,\varepsilon}(x),\ y_{ \delta_{0}}^{*,\varepsilon}(x)\right)\right)\] \[+\left(f\left(x,a_{\delta_{0}}^{*,\varepsilon}(x),\ y_{\delta_{0 }}^{*,\varepsilon}(x)\right)-f\left(x,a^{*,\varepsilon}(x),\ y^{*,\varepsilon }(x)\right)\right)\] \[+\left(f\left(x,a^{*,\varepsilon}(x),\ y^{*,\varepsilon}(x) \right)-V_{\overline{a}}(x)\right)\] \[\leq \frac{\varepsilon}{4}+\frac{\varepsilon}{8}+\frac{\varepsilon}{ 8}+\frac{\varepsilon}{2}=\varepsilon.\]
## Acknowledgments
Financial support by the Nanyang Assistant Professorship Grant (NAP Grant) _Machine Learning based Algorithms in Finance and Insurance_ is gratefully acknowledged.
 |
2304.11534 | Graph Neural Networks for Text Classification: A Survey | Text Classification is the most essential and fundamental problem in Natural Language Processing. While numerous recent text classification models applied the sequential deep learning technique, graph neural network-based models can directly deal with complex structured text data and exploit global information. Many real text classification applications can be naturally cast into a graph, which captures words, documents, and corpus global features. In this survey, we bring the coverage of methods up to 2023, including corpus-level and document-level graph neural networks. We discuss each of these methods in detail, dealing with the graph construction mechanisms and the graph-based learning process. As well as the technological survey, we look at issues behind and future directions addressed in text classification using graph neural networks. We also cover datasets, evaluation metrics, and experiment design and present a summary of published performance on the publicly available benchmarks. Note that we present a comprehensive comparison between different techniques and identify the pros and cons of various evaluation metrics in this survey. | Kunze Wang, Yihao Ding, Soyeon Caren Han | 2023-04-23T04:21:50Z | http://arxiv.org/abs/2304.11534v3 |

# Graph Neural Networks for Text Classification: A Survey
###### Abstract.
Text Classification is the most essential and fundamental problem in Natural Language Processing. While numerous recent text classification models applied the sequential deep learning technique, graph neural network-based models can directly deal with complex structured text data and exploit global information. Many real text classification applications can be naturally cast into a graph, which captures words, documents, and corpus global features. In this survey, we bring the coverage of methods up to 2023, including corpus-level and document-level graph neural networks. We discuss each of these methods in detail, dealing with the graph construction mechanisms and the graph-based learning process. As well as the technological survey, we look at issues behind and future directions addressed in text classification using graph neural networks. We also cover datasets, evaluation metrics, and experiment design and present a summary of published performance on the publicly available benchmarks. Note that we present a comprehensive comparison between different techniques and identify the pros and cons of various evaluation metrics in this survey.
Graph Neural Networks, Text Classification, Representation Learning
[MISSING_PAGE_POST]
document-level graphs and each of them represents a document. The corpus-level graph can capture the global structural information of the entire corpus, while the document-level graph can capture the word-to-word relationships within a document explicitly. Both ways of applying graph neural networks to text classification achieve good performance.
This paper mainly focuses on GNN-based text classification techniques, datasets, and their performance. The graph construction approaches for both corpus-level and document-level graphs are addressed in detail. Papers on the following aspects will be reviewed:
* GNNs-based text classification approaches. Papers that design GNN-based frameworks to enhance the feature representation or directly apply GNNs to conduct sequence text classification tasks will be summarized, described, and discussed. GNNs applied to token-level classification (Natural Language Understanding) tasks, including NER, slot filling, etc., will not be discussed in this work.
* Text classification benchmark datasets and the performance of GNN-based models on them. The text classification datasets and the commonly used metrics adopted by GNN-based text classification models will be summarized and categorized based on task types, together with the model performance on these datasets.
### Related Surveys and Our Contribution
Before 2019, text classification survey papers [2; 35; 116; 129; 46] focused on covering traditional machine learning-based text classification models. Recently, with the rapid development of deep learning techniques, [55; 82; 147; 149] review the various deep learning-based approaches. In addition, some papers not only review the SoTA model architectures but also summarize the overall workflow [10; 39; 42; 49; 83] or specific techniques for text classification, including word embedding [102], feature selection [22; 96; 103], term weighting [3; 91], etc. Meanwhile, some text classification architectures of growing potential have been surveyed, such as CNNs [132] and attention mechanisms [78]. Owing to their powerful ability to represent non-Euclidean relations, GNNs have been used and reviewed in multiple practical fields, e.g., financial applications [117], traffic prediction [72], bio-informatics [142], power systems [60], and recommendation systems [27; 59; 131]. Moreover, [14; 126; 139; 8; 144] comprehensively review the general algorithms and applications of GNNs, while certain surveys focus on specific perspectives, including graph construction [105; 112], graph representation [34], training [128], pooling [65], and more. However, only [55; 82] briefly introduce certain SoTA GNN-based text classification models, and a recent short review paper [75] covers several SoTA models without providing a comprehensive overview of this area. The contributions of this survey include:
* This is the first survey focused only on graph neural networks for text classification with a comprehensive description and critical discussion on more than twenty GNN text classification models.
* We categorize the existing GNN text classification models into two main categories with multiple sub-categories; the tree structure of all the models is shown in Figure 1.
* We compare these models in terms of graph construction, node embedding initialization, and graph learning methods. We also compare the performance of these models on the benchmark datasets and discuss the key findings.
* We discuss the existing challenges and some potential future work for GNN text classification models.
### Text Classification Tasks
Text classification involves assigning a pre-defined label to a given text sequence. The process typically involves encoding pre-processed raw text into numerical representations and using classifiers to predict the corresponding categories. Typical sub-tasks include sentiment analysis, topic labelling, news categorization, and hate speech detection. Certain frameworks can be extended to advanced applications such as information retrieval, summarising, question answering, and natural language inference. This paper focuses specifically on GNN-based models used for typical text classification.
* **Sentiment Analysis** is a task that aims to identify the emotional states and subjective opinions expressed in the input text, such as reviews, micro-blogs, etc. This can be achieved through binary or multi-class classification. Effective sentiment analysis can aid in making informed business decisions based on user feedback.
* **Topic Classification** is a supervised deep learning task that automatically understands the text content and classifies it into multiple domain-specific categories, typically more than two. The data sources may be gathered from different domains, including Wikipedia pages, newspapers, scientific papers, etc.
* **Junk Information Detection** involves detecting inappropriate social media content. Social media providers commonly use approaches like hate speech, abusive language, advertising or spam detection to remove such content efficiently.
### Text Classification Development
Many traditional machine learning methods and deep learning models are selected as baselines for comparison with GNN-based text classifiers. We summarize those baselines into three types:
_Traditional Machine Learning_: In earlier years, traditional methods such as Support Vector Machines (SVM) (Krizhevsky et al., 2014) and Logistic Regression (Krizhevsky et al., 2014) utilized sparse representations like Bag of Words (BoW) and TF-IDF. However, recent advancements (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) have focused on dense representations, such as Word2vec, GloVe, and Fasttext, to mitigate the limitations of sparse representations. These dense representations are also used as inputs for sophisticated methods, such as Deep Averaging Networks (DAN) (Krizhevsky et al., 2014) and Paragraph Vector (Doc2Vec) (Krizhevsky et al., 2014), to achieve new state-of-the-art results.
_Sequential Models_: RNNs and CNNs have been utilized to capture local-level semantic and syntactic information of consecutive words from input text bodies. The upgraded models, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Hochreiter and Schmidhuber, 1997), have been proposed to address the vanishing or exploding gradient problems caused by vanilla RNN. CNN-based structures have been applied to capture N-gram features by using one or more convolution and pooling layers, such as Dynamic CNN (Krizhevsky et al., 2014) and TextCNN (Krizhevsky et al., 2014). However, these models can only capture local dependencies of consecutive words. To capture longer-term or non-Euclidean relations, improved RNN structures, such as Tree-LSTM (Hochreiter and Schmidhuber, 1997) and MT-LSTM (Krizhevsky et al., 2014), and global semantic information, like TopicRNN (Krizhevsky et al., 2014), have been proposed. Additionally, graph (Krizhevsky et al., 2014) and tree structure (Krizhevsky et al., 2014) enhanced CNNs have been proposed to learn more about global and long-term dependencies.
_Attentions and Transformers_: attention mechanisms (Chen et al., 2016) have been widely adopted to capture long-range dependencies, such as hierarchical attention networks (Chen et al., 2016) and attention-based hybrid models (Krizhevsky et al., 2014). Self-attention-based transformer models have achieved state-of-the-art performance on many text classification benchmarks via pre-training on some tasks to generate strong contextual word representations. However, these models only focus on learning the
relation between input text bodies and ignore the global and corpus level information. Researchers have proposed combining the benefits of attention mechanisms and Graph Neural Networks (GNNs) to learn both the relation between input text bodies and the global and corpus level information, such as VGCN-BERT (Wang et al., 2019) and BERTGCN (Wang et al., 2019).
### Outline
The outline of this survey is as follows:
* Section 1 presents the research questions and provides an overview of applying Graph Neural Networks to text classification tasks, along with the scope and organization of this survey.
* Section 2 provides background information on text classification and graph neural networks and introduces the key concepts of applying GNNs to text classification from a designer's perspective.
* Section 3 and Section 4 discuss previous work on Corpus-level Graph Neural Networks and Document-level Graph Neural Networks, respectively, and provide a comparative analysis of the strengths and weaknesses of these two approaches.
* Section 5 introduces the commonly used datasets and evaluation metrics in GNN for text classification.
* Section 6 reports the performance of various GNN models on a range of benchmark datasets for text classification and discusses the key findings.
* The challenges for the existing methods and some potential future works are discussed in Section 7.
* In Section 8, we present the conclusions of our survey on GNN for text classification and discuss potential directions for future work.
## 2. Backgrounds of Gnn
### Definition of Graph
A graph in this paper is represented as \(G=(V,E)\), where \(V\) and \(E\) represent a set of nodes (vertices) and edges of \(G\), respectively. A single node in the node set is represented as \(v_{i}\in V\), and \(e_{ij}=(v_{i},v_{j})\in E\) denotes an edge between nodes \(v_{i}\) and \(v_{j}\). The adjacency matrix of graph \(G\) is represented as \(A\), where \(A\in\mathbb{R}^{n\times n}\) and \(n\) is the number of nodes in graph \(G\). If \(e_{ij}\in E\), \(A_{ij}=1\); otherwise \(A_{ij}=0\). In addition, we use \(\mathbf{X}\) and \(\mathbf{E}\) to represent the node and edge representations in graph \(G\), where \(\mathbf{X}\in\mathbb{R}^{n\times m}\) and \(\mathbf{E}\in\mathbb{R}^{n\times c}\). \(\mathbf{x}_{i}\in\mathbb{R}^{m}\) represents the \(m\)-dimensional vector of node \(v_{i}\) and \(\mathbf{e}_{ij}\in\mathbb{R}^{c}\) represents the \(c\)-dimensional vector of edge \(e_{ij}\) (most of the recent studies set \(c=1\) to represent a weighting scalar). \(\mathbf{A}\) denotes the edge-feature-weighted adjacency matrix.
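As a toy illustration of the definitions above (an illustrative snippet, not taken from any surveyed paper), the following builds the adjacency matrix \(A\) and degree matrix \(D\) for a small undirected graph:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # toy edge set E on n = 4 nodes
n = 4
A = np.zeros((n, n))
for i, j in edges:                          # undirected, unweighted: A_ij = A_ji = 1
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))                  # degree matrix with D_ii = sum_j A_ij
print(np.diag(D))                           # node degrees: [2. 2. 3. 1.]
```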
### Traditional Graph-based Algorithms
Before GNNs were broadly used for representing irregular relations, traditional graph-based algorithms had been applied to model the non-Euclidean structures in text classification, e.g., Random Walk (107; 146), Graph Matching (101; 104), and Graph Clustering (79), which have been well summarized in (124). There are three common limitations of those traditional graph-based algorithms. Firstly, most of those algorithms mainly focus on capturing graph-level structure information without considering the significance of node and edge features. For example, Random Walk based approaches (107; 146) mainly focus on using the distance or angle between node vectors to calculate transition probabilities while ignoring the information represented by the node vectors themselves. Secondly, since the traditional graph-based algorithms are only suitable for specific tasks, there is no unified learning framework for addressing various practical tasks. For example, [44] proposes a graph clustering method that requires a domain knowledge-based ontology graph. Lastly, the traditional graph-based methods are comparatively time-inefficient; for example, Graph Edit Distance-based graph matching methods have exponential time complexity [104].
### Foundations of GNN
To tackle the limitation of traditional graph-based algorithms and better represent non-Euclidean relations in practical applications, Graph Neural Networks are proposed by [100]. GNNs have a unified graph-based framework and simultaneously model the graph structure, node, and edge representations. This section will provide the general mathematical definitions of Graph Neural Networks. The general forward process of GNN can be summarised as follows:
\[\mathbf{H}^{(l)}=\mathcal{F}(\mathbf{A},\mathbf{H}^{(l-1)}) \tag{1}\]
where \(\mathbf{A}\in\mathbb{R}^{n\times n}\) represents the weighted adjacency matrix and \(\mathbf{H}^{(l)}\in\mathbb{R}^{n\times d}\) is the updated node representation at the \(l\)-th GNN layer, obtained by feeding the \((l-1)\)-th layer node features \(\mathbf{H}^{(l-1)}\in\mathbb{R}^{n\times k}\) (\(k\) is the dimension of the previous layer's node representations) into pre-defined graph filters \(\mathcal{F}\).
Figure 1: Categorizing the graph neural network text classification models.
The most commonly used graph filtering method is defined as follows:
\[\mathbf{H}^{(l)}=\phi(\tilde{\mathbf{A}}\mathbf{H}^{(l-1)}\mathbf{W}) \tag{2}\]
where \(\tilde{\mathbf{A}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\) is the normalized symmetric adjacency matrix, \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the adjacency matrix of graph \(G\), and \(\mathbf{D}\) is the degree matrix of \(\mathbf{A}\), where \(D_{ii}=\Sigma_{j}A_{ij}\). \(\mathbf{W}\in\mathbb{R}^{k\times d}\) is the weight matrix and \(\phi\) is the activation function. Stacking two GNN layers based on the above filter yields a vanilla Graph Convolutional Network (GCN) (Kunze et al., 2017) framework for text classification:
\[\mathbf{Y}=softmax(\tilde{\mathbf{A}}(ReLU(\tilde{\mathbf{A}}\mathbf{H}\mathbf{W}^{(0)}))\mathbf{W}^{ (1)}) \tag{3}\]
where \(\mathbf{W}^{(0)}\) and \(\mathbf{W}^{(1)}\) represent the weight matrices of the two GCN layers and \(\mathbf{H}\) is the input node feature matrix. The \(ReLU\) function is used for non-linearity and \(softmax\) is used to generate the predicted categories \(\mathbf{Y}\).
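The following hedged sketch illustrates Eqs. (2)-(3) in plain numpy; the toy adjacency, the feature dimensions, and the self-loops added during normalisation (to avoid zero-degree nodes) are assumptions, not part of any specific paper:

```python
import numpy as np

def normalize_adj(A):
    # D^{-1/2} (A + I) D^{-1/2}; self-loops are an assumption on top of Eq. (2)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, m, hidden, n_classes = 4, 8, 16, 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy symmetric adjacency
H = rng.standard_normal((n, m))             # input node features
W0 = rng.standard_normal((m, hidden)) * 0.1
W1 = rng.standard_normal((hidden, n_classes)) * 0.1

A_tilde = normalize_adj(A)
Y = softmax(A_tilde @ np.maximum(A_tilde @ H @ W0, 0.0) @ W1)  # Eq. (3)
```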
### GNN for Text Classification
In this paper, we mainly discuss how GNNs are applied to text classification tasks. Before presenting the specific applications in this area, we first introduce the key concepts of applying GNNs to text classification from a designer's view. We suppose that addressing a text classification task requires designing a graph \(G=(V,E)\). The general procedure includes _Graph Construction_, _Initial Node Representation_, _Edge Representations_, and _Training Setup_.
#### 2.4.1. Graph Construction
Some applications come with explicit graph structures, including constituency or dependency graphs (Kunze et al., 2017), knowledge graphs (Kunze et al., 2017; Kunze et al., 2018), and social networks (Kunze et al., 2017), so no new graph structure needs to be constructed beyond defining the corresponding nodes and edges. However, for text classification, the most common graph structures are implicit, which means we need to define a new graph structure for a specific task, such as a word-word or word-document co-occurrence graph. In addition, for text classification tasks, graph structures can generally be classified into two types:
* _Corpus-level/Document-level_: Corpus-level graphs are constructed to represent the whole corpus, as in (Kunze et al., 2017; Kunze et al., 2018; Kunze et al., 2019), while document-level graphs focus on representing the non-Euclidean relations existing in a single text body, as in (Kunze et al., 2017; Kunze et al., 2019; Kunze et al., 2019). Suppose a specific corpus \(\mathcal{C}\) contains a set of documents (text bodies) \(\mathcal{C}=\{D_{1},D_{2},..,D_{j}\}\) and each \(D_{i}\) contains a set of tokens \(D_{i}=\{t_{i_{1}},t_{i_{2}},...,t_{i_{k}}\}\). The vocabulary of \(\mathcal{C}\) can be represented as \(\mathcal{D}=\{t_{1},t_{2},...,t_{l}\}\), where \(l\) is the length of \(\mathcal{D}\). For the most commonly adopted corpus-level graph \(G_{corpus}=(V_{corpus},E_{corpus})\), a node \(v_{i}\) in \(V_{corpus}\) follows \(v_{i}\in\mathcal{C}\cup\mathcal{D}\), and an edge \(e_{ij}\in E_{corpus}\) is one kind of relation between \(v_{i}\) and \(v_{j}\). Regarding a document-level graph \(G_{doc_{i}}=(V_{doc_{i}},E_{doc_{i}})\), a node \(v_{i_{j}}\) in \(V_{doc_{i}}\) follows \(v_{i_{j}}\in D_{i}\).
After designing the graph-scale for the specific tasks, specifying the graph types is also important to determine the nodes and their relations. For text classification tasks, the commonly used graph construction ways can be summarized into:
* _Homogeneous/Heterogeneous Graphs_: homogeneous graphs have a single node and edge type, while heterogeneous graphs have various node and edge types. For a graph \(G=(V,E)\), we use \(N^{v}\) and \(N^{e}\) to represent the number of types of \(V\) and \(E\), respectively. If \(N^{v}=N^{e}=1\), \(G\) is a homogeneous graph. If \(N^{v}>1\) or \(N^{e}>1\), \(G\) is a heterogeneous graph.
* _Static/Dynamic Graphs_: Static graphs use a graph structure constructed from various external or internal information to enhance the initial node representations, such as dependency or constituency graphs [110], co-occurrence between word nodes [143], and TF-IDF between word and document nodes [53, 123, 133]. In contrast, the initial representations or topology of a dynamic graph change during training, without requiring specific domain knowledge and human effort. The feature representations or graph structure can be learned jointly with the downstream task and optimised together. For example, [120] proposed a novel topic-aware GNN text classification model with dynamically updated edges between topic nodes and others (e.g. document, word). Piao et al. [95] also designed a dynamic-edge-based graph to update the contextual dependencies between nodes. Additionally, [16] propose a dynamic GNN model that jointly updates the edge and node representations simultaneously. We provide more details about the above-mentioned models in Section 3 and Section 4.
Another widely used pair of graph categories is _directed_ vs. _undirected_ graphs, depending on whether edges are bi-directional or not. For text classification, most GNN designs follow the undirected way. In addition, these graph type pairs are orthogonal, which means the types can be combined.
#### 2.4.2. Initial Node Representation.
Based on the pre-defined graph structure and specified graph type, selecting appropriate initial node representations is the key procedure to ensure the proposed graph structure can effectively learn node representations. According to the node entity type, existing node representation approaches for text classification can generally be summarised into:
* _Word-level Representation_: non-contextual word embedding methods such as GloVe [93], Word2vec [81], and FastText [13] are widely adopted by many GNN-based text classification frameworks to numerically represent node features. However, those embedding methods are restricted to capturing only syntactic similarity and fail to represent the complex semantic relationships between words; they also cannot capture the meaning of out-of-vocabulary (OOV) words, and their representations are fixed. Therefore, some recent studies select ELMo [94], BERT [23], or GPT [97] to get contextual word-level node representations. Notably, even though one-hot encoding is the simplest word representation method, many GNN-based text classifiers use one-hot encoding and achieve state-of-the-art performance. A few frameworks use randomly initialised vectors to represent word-level node features.
* _Document-level Representation_: similar to other NLP applications, document-level representations are normally acquired by aggregating word-level representations via deep learning frameworks. For example, some researchers extract the last hidden state of an LSTM or use the [CLS] token output from BERT to numerically represent the input text body. Furthermore, using TF-IDF based document vectors is also a common way to represent document-level nodes.
Most GNN-based text classification frameworks compare the performance of different node representation methods to conduct quantitative analysis, and provide reasonable justifications demonstrating the effectiveness of the selected initial node representation for the defined graph structure.
#### 2.4.3. Edge Features.
Well-defined edge features can effectively improve graph representation learning efficiency and performance by exploiting more explicit and implicit relations between nodes. Based on the predefined graph types, edge feature types can be divided into _structural features_ and _non-structural features_. Structural edge features are acquired from explicit relations between nodes, such as dependency or constituency relations between words, word-word adjacency relations, etc. Those relations between nodes are explicitly defined and widely employed in other NLP applications. However, the more commonly used edge features are non-structural features, which exist implicitly between the nodes and are applied to specific graph-based frameworks. The typical non-structural edge features were first defined by (Wang et al., 2017) for GNN-based text classification tasks, including:
* **PMI** measures the co-occurrence between two words in a sliding window \(W\) (a computation sketch follows this list) and is calculated as: \[\text{PMI}(i,j) =\log\frac{p(i,j)}{p(i)p(j)}\] (4) \[p(i,j) =\frac{\#W(i,j)}{\#W}\] (5) \[p(i) =\frac{\#W(i)}{\#W}\] (6) where \(\#W\) is the total number of windows, and \(\#W(i)\), \(\#W(i,j)\) denote the number of windows containing word \(i\) and both words \(i\) and \(j\), respectively.
* **TF-IDF** is the broadly used weight of the edges between document-level nodes and word-level nodes.
Except for those two widely used implicit edge features, some specific edge weighting methods have been proposed to meet the demands of particular graph structures and exploit more information from input text bodies.
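To make the PMI edge construction concrete, the sketch below computes Eqs. (4)-(6) over sliding windows; the toy tokenised documents and window size are assumptions:

```python
import math
from collections import Counter
from itertools import combinations

docs = [["graph", "neural", "network", "text"],
        ["neural", "network", "for", "text", "classification"]]
window_size = 3  # assumed window size

windows = [doc[i:i + window_size]
           for doc in docs
           for i in range(max(1, len(doc) - window_size + 1))]
num_windows = len(windows)  # #W

word_count = Counter()   # #W(i)
pair_count = Counter()   # #W(i, j)
for w in windows:
    uniq = sorted(set(w))
    word_count.update(uniq)
    pair_count.update(combinations(uniq, 2))

def pmi(i, j):
    p_ij = pair_count[tuple(sorted((i, j)))] / num_windows
    if p_ij == 0:
        return 0.0
    p_i = word_count[i] / num_windows
    p_j = word_count[j] / num_windows
    return math.log(p_ij / (p_i * p_j))

print(pmi("neural", "network"))  # positive PMI suggests a word-word edge
```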
#### 2.4.4. Training Setup.
After specifying the graph structure and type, the graph representation learning tasks and training settings also need to be determined to decide how to optimise the designed GNNs. Generally, graph representation learning tasks can be categorised into three levels: _Node-level_, _Graph-level_ and _Edge-level_. Node-level and Graph-level tasks involve node or graph classification, clustering, regression, etc., while Edge-level tasks include link prediction or edge classification, i.e. predicting the existence of a relation between two nodes or the corresponding edge categories.
Similar to other deep learning model training settings, GNNs can also be divided into _supervised_, _semi-supervised_ and _unsupervised training settings_. Supervised training provides labelled training data, while unsupervised training utilises unlabeled data to train the GNNs. However, compared with supervised or unsupervised learning, semi-supervised learning methods are broadly used by GNNs designed for text classification applications, and they can be classified into two types:
* _Inductive Learning_ adjusts the weights of the proposed GNN based on a labelled training set, learning the overall statistics to induce a general trained model for subsequent processing. The unlabeled set can then be fed into the trained GNN to compute the expected outputs.
* _Transductive Learning_ exploits the labelled and unlabeled sets simultaneously, leveraging the relations between different samples to improve overall performance.
## 3. Corpus-Level GNN for Text Classification
We define a corpus-level Graph Neural Network as "constructing a graph to represent the whole corpus", thus, only one or several graphs will be built for the given corpus. We categorize Corpus-level GNN into four subcategories based on the types of nodes shown in the graph.
### Document and Word Nodes as a Graph
Most corpus-level graphs include word nodes and document nodes, with word-document edges and word-word edges. By applying a \(K\)-layer GNN (normally \(K=2\) or \(3\)), word nodes serve as bridges to propagate information from one document node to another.
#### 3.1.1. PMI and TF-IDF as graph edges: TextGCN, SGC, S\({}^{2}\)GC, NMGC, TG-Transformer, BertGCN.
**TextGCN[133]** Yao et al. [133] build a corpus-level graph with training document nodes, test document nodes and word nodes. Before constructing the graph, a common preprocessing method [47] is applied: words appearing fewer than 5 times or in the NLTK [11] stopwords list are removed. The edge value between a document node and a word node is TF-IDF, and that between word nodes is PMI. The adjacency matrix of this graph is defined as follows:
\[A_{ij}=\begin{cases}\text{PMI}(i,j)&i,j\text{ are words},\text{PMI}(i,j)>0\\ \text{TF-IDF}_{i,j}&i\text{ is document},j\text{ is word}\\ 1&i=j\\ 0&\text{otherwise}\end{cases} \tag{7}\]
A two-layer GCN is applied to the graph, and the dimension of the second layer output equals the number of classes in the dataset. Formally, the forward propagation of TextGCN is:
\[Z=\text{softmax}(\tilde{A}(\text{ReLU}(\tilde{A}X\mathbf{W}^{(0)}))\mathbf{W}^{(1)}) \tag{8}\]
where \(\tilde{A}\) is the normalized adjacency matrix of \(A\) and \(X\) is the one-hot embedding matrix. \(\mathbf{W}^{(0)}\) and \(\mathbf{W}^{(1)}\) are learnable parameters of the model. The representations of training documents are used to calculate the loss, and those of test documents are used for prediction.
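As an illustration of Eq. (7), the following sketch assembles a TextGCN-style adjacency matrix; the `pmi` and `tfidf` arrays here are random placeholders standing in for values precomputed as described above (e.g. via the PMI sketch and a TF-IDF vectorizer):

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_docs = 5, 3
n = n_words + n_docs                       # word nodes first, then documents

pmi = rng.normal(size=(n_words, n_words))  # placeholder PMI values
tfidf = rng.random((n_docs, n_words))      # placeholder TF-IDF values

A = np.eye(n)                              # A_ii = 1
ww = np.triu(pmi, 1)
ww[ww <= 0] = 0.0                          # keep only PMI(i, j) > 0
A[:n_words, :n_words] += ww + ww.T         # word-word edges
A[n_words:, :n_words] = tfidf              # document-word edges
A[:n_words, n_words:] = tfidf.T            # symmetric counterpart
```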
| **Notations** | **Descriptions** |
|---|---|
| \(G\) | A graph. |
| \(V\) | The set of nodes in a graph. |
| \(E\) | The set of edges in a graph. |
| \(e_{ij}\) | An edge between node \(i\) and node \(j\). |
| \(N_{i}\) | The neighbors of node \(i\). |
| \(\mathbf{A}\) | The graph adjacency matrix. |
| \(\tilde{\mathbf{A}}\) | The normalized matrix \(\mathbf{A}\). |
| \(\tilde{\mathbf{A}}^{k},k\in Z\) | The \(k^{th}\) power of \(\tilde{\mathbf{A}}\). |
| \([\mathbf{A}||\mathbf{B}]\) | The concatenation of \(\mathbf{A}\) and \(\mathbf{B}\). |
| \(\mathbf{D}\) | The degree matrix of \(\mathbf{A}\), \(D_{ii}=\Sigma_{j=1}^{n}A_{ij}\). |
| \(\mathbf{W}^{(l)}\) | The weight matrix of layer \(l\). |
| \(\mathbf{H}\in R^{n\times d}\) | The feature matrix of a graph. |
| \(\mathbf{H}^{(l)}\in R^{n\times d}\) | The feature matrix of a graph at layer \(l\). |
| \(\mathbf{h}_{i}\in R^{d}\) | The feature vector of node \(i\). |
| \(\mathbf{h}_{i}^{(l)}\in R^{d}\) | The feature vector of node \(i\) at layer \(l\). |
| \(Z\in R^{n\times d}\) | The output feature matrix of a graph. |
| \(\mathbf{z}_{i}\in R^{d}\) | The output feature vector of node \(i\). |

Table 1. Commonly used notations in Graph Neural Networks.
TextGCN is the first work that treats a text classification task as a node classification problem by constructing a corpus-level graph and has inspired many following works.
Based on TextGCN, several works follow the same graph construction method and node initialization but apply different graph propagation models.
**SGC[123]** To make GCN efficient, SGC (Simple Graph Convolution) removes the nonlinear activation functions in GCN layers; therefore, the \(K\)-layer propagation of SGC is:
\[Z=\text{softmax}(\tilde{\mathbf{A}}...(\tilde{\mathbf{A}}(\tilde{\mathbf{A}}\mathbf{X}\mathbf{W}^{ (0)})\mathbf{W}^{(1)})...\mathbf{W}^{(K)}) \tag{9}\]
which can be reparameterized into
\[Z=\text{softmax}(\tilde{\mathbf{A}}^{K}\mathbf{X}\mathbf{W}) \tag{10}\]
where \(K=2\) when applied to text classification tasks. With a smaller number of parameters and only one feedforward layer, SGC saves computation time and resources while improving performance.
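A minimal sketch of the reparameterised SGC propagation in Eq. (10) follows; the toy graph, shapes, and the self-loops in the normalisation are assumptions, and \(\tilde{\mathbf{A}}^{K}\) can be precomputed once:

```python
import numpy as np

def normalize_adj(A):
    # D^{-1/2} (A + I) D^{-1/2}; self-loops avoid zero-degree nodes (assumed)
    A_hat = A + np.eye(A.shape[0])
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d[:, None] * d[None, :]

rng = np.random.default_rng(0)
n, m, n_classes, K = 6, 10, 4, 2
A = (rng.random((n, n)) > 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # symmetric, no self-loops
X = rng.standard_normal((n, m))
W = rng.standard_normal((m, n_classes)) * 0.1

A_K = np.linalg.matrix_power(normalize_adj(A), K)  # Ã^K, precomputable once
logits = A_K @ X @ W                               # Eq. (10), before softmax
```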
**S\({}^{2}\)GC[148]** To solve the oversmoothing issue in GCN, Zhu and Koniusz [148] propose Simple Spectral Graph Convolution (S\({}^{2}\)GC), which includes self-loops using the Markov Diffusion Kernel. The output of S\({}^{2}\)GC is calculated as:
\[Z=\text{softmax}(\frac{1}{K}\Sigma_{k=0}^{K}\tilde{\mathbf{A}}^{k}\mathbf{X}\mathbf{W}) \tag{11}\]
and can be generalized into:
\[Z=\text{softmax}(\frac{1}{K}\Sigma_{k=0}^{K}((1-\alpha)\tilde{\mathbf{A}}^{k}\mathbf{ X}+\alpha\mathbf{X})\mathbf{W}) \tag{12}\]
Similarly, \(K=2\) for text classification tasks, and \(\alpha\) denotes the trade-off between the self-information of a node and consecutive neighbourhood information. S\({}^{2}\)GC can also be viewed as introducing skip connections into GCN.
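The generalised S\({}^{2}\)GC propagation of Eq. (12) can be sketched as a single function; `A_tilde`, `X`, and `W` are assumed to be a normalised adjacency matrix, a feature matrix, and a weight matrix:

```python
import numpy as np

def s2gc_logits(A_tilde, X, W, K=2, alpha=0.05):
    # (1/K) * sum_{k=0}^{K} ((1 - alpha) * Ã^k X + alpha * X) W, as in Eq. (12)
    acc = np.zeros_like(X)
    A_k = np.eye(A_tilde.shape[0])       # Ã^0
    for _ in range(K + 1):
        acc += (1 - alpha) * A_k @ X + alpha * X
        A_k = A_k @ A_tilde              # advance to the next power of Ã
    return (acc / K) @ W                 # pre-softmax output
```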
**NMGC[53]** Instead of using the sum over GCN layers as in S\({}^{2}\)GC, NMGC applies min pooling using the Multi-hop neighbour Information Fusion (MIF) operator to address the oversmoothing problem. The MIF function is defined as:
\[\text{MIF}(K)=\min(\tilde{\mathbf{A}}\mathbf{X}\mathbf{W},\tilde{\mathbf{A}}^{2}\mathbf{X}\mathbf{W},...,\tilde{\mathbf{A}}^{K}\mathbf{X}\mathbf{W}) \tag{13}\]
NMGC-K first applies a MIF(\(K\)) layer and then a GCN layer, with \(K=2\) or \(3\). For example, when \(K=3\), the output is:
\[Z=\text{softmax}(\tilde{\mathbf{A}}(\text{ReLU min}(\tilde{\mathbf{A}}\mathbf{X}\mathbf{W}^{ (0)},\tilde{\mathbf{A}}^{2}\mathbf{X}\mathbf{W}^{(0)},\tilde{\mathbf{A}}^{3}\mathbf{X}\mathbf{W}^{(0) }))\mathbf{W}^{(1)}) \tag{14}\]
NMGC can also be treated as a skip-connection Graph Neural Network, which makes the shallow layers of the GNN contribute to the final representation directly.
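A minimal sketch of the MIF(\(K\)) operator in Eq. (13), an element-wise minimum over the \(K\) propagated feature maps; inputs are assumed as in the previous sketches:

```python
import numpy as np

def mif(A_tilde, X, W, K=3):
    # Element-wise min over Ã^1 X W, ..., Ã^K X W, following Eq. (13)
    outputs = []
    A_k = np.eye(A_tilde.shape[0])
    for _ in range(K):
        A_k = A_k @ A_tilde          # Ã^1 ... Ã^K
        outputs.append(A_k @ X @ W)
    return np.minimum.reduce(outputs)  # min pooling across hops
```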
**TG-Transformer[137]** TextGCN treats document nodes and word nodes as the same type of node during propagation. To introduce heterogeneity into the TextGCN graph, TG-Transformer (Text Graph Transformer) adopts two sets of weights for document nodes and word nodes, respectively. To cope with a large corpus graph, subgraphs are sampled from the TextGCN graph using the PageRank algorithm [88]. The input embedding is the sum of three types of embeddings: pretrained GloVe embedding, node type embedding, and Weisfeiler-Lehman structural encoding [85]. During propagation, self-attention [114] with graph residual [138] is applied.
**BertGCN[63]** To combine BERT [45] and TextGCN, BertGCN enhances TextGCN by initialising document nodes with the BERT [CLS] output of each epoch and replacing the word input vectors with zeros. BertGCN trains BERT and TextGCN jointly by interpolating the outputs of TextGCN and BERT:
\[Z=\lambda Z_{GCN}+(1-\lambda)Z_{BERT} \tag{15}\]
where \(\lambda\) is the trade-off factor. To optimize memory during training, a memory bank is used to track the document inputs, and a smaller learning rate is set for the BERT module to maintain the consistency of the memory bank. BertGCN shows that, with the help of TextGCN, BERT can achieve better performance.
#### 3.1.2. Multi-Graphs/Multi-Dimensional Edges: TensorGCN, ME-GCN.
**TensorGCN**[68] Instead of constructing a single corpus-level graph, TensorGCN builds three independent graphs (a semantic-based graph, a syntactic-based graph, and a sequential-based graph) to incorporate semantic, syntactic, and sequential information respectively, and combines them into a tensor graph.
The three graphs share the same TF-IDF values for word-document edges but use different values for word-word edges. The semantic-based graph extracts semantic features from a trained Long Short-Term Memory (LSTM) [36] model and connects words sharing high similarity. The syntactic-based graph uses the Stanford CoreNLP parser [76] and constructs edges between words when they have a high probability of having a dependency relation. For the sequential-based graph, PMI values are applied as in TextGCN.
Propagation includes intra-graph propagation and inter-graph propagation. The model first applies a GCN layer on the three graphs separately as intra-graph propagation. Then the same nodes across the three graphs are treated as a virtual graph, and another GCN layer is applied as inter-graph propagation.
**ME-GCN**[118] To fully utilize corpus information and analyze the rich relational information of the graph, ME-GCN (Multi-dimensional Edge-Embedded GCN) builds a graph with multi-dimensional word-word, word-document and document-document edges. Word2vec and Doc2vec embeddings are first trained on the given corpus, and the per-dimension similarity of the trained embeddings is used to construct the multi-dimensional edges. The trained embeddings also serve as the input embeddings of the graph nodes. During propagation, GCN is first applied to each dimension, and the representations from different dimensions are either concatenated or fed into a pooling method to get the final representation of each layer.
#### 3.1.3. Making TextGCN Inductive: HeteGCN, InducT-GCN, T-VGAE.
**HeteGCN**[98] HeteGCN (Heterogeneous GCN) optimizes TextGCN by decomposing the undirected TextGCN graph into several directed subgraphs. Subgraphs of the TextGCN graph are combined sequentially as different layers: the feature graph (word-word graph), the feature-document graph (word-document graph), and the document-feature graph (document-word graph). Different combinations were tested and the best model is:
\[Z=\text{softmax}(\mathbf{A}_{w-d}(\text{ReLU}(\mathbf{A}_{w-w}\mathbf{X}_{w}\mathbf{W}^{(0)} ))\mathbf{W}^{(1)}) \tag{16}\]
where \(\mathbf{A}_{w-w}\) and \(\mathbf{A}_{w-d}\) are the adjacency matrices of the word-word and word-document subgraphs. Since the input of HeteGCN is the word node embeddings without using document nodes, it can also work in an inductive way, while the previous corpus-level graph text classification models are all transductive.
**InducT-GCN**[119] InducT-GCN (InducTive Text GCN) aims to extend the transductive TextGCN into an inductive model. Instead of using the whole corpus to build the graph, InducT-GCN builds a graph on the training corpus only and uses TF-IDF vectors as document input embeddings, aligning with the one-hot word embeddings. The weights are learned following TextGCN, but InducT-GCN builds virtual subgraphs for prediction on new test documents.
**T-VGAE**[127] T-VGAE (Topic Variational Graph Auto-Encoder) applies a Variational Graph Auto-Encoder on the latent topic of each document to make the model inductive. A vocabulary graph \(A_{v}\), which connects the words using PMI values, is constructed, while each document is represented using its TF-IDF vector. All the document vectors are stacked into a matrix, which can also be treated as a bipartite graph \(A_{d}\). Two graph auto-encoder models are applied on \(A_{v}\) and \(A_{d}\) respectively. The overall workflow is:
\[Z_{v}=\text{Encoder}_{GCN}(\mathbf{A}_{v},\mathbf{X}_{v}) \tag{17}\] \[Z_{d}=\text{Encoder}_{UDMP}(\mathbf{A}_{d},Z_{v}) \tag{18}\] \[\mathbf{A}_{v}^{*}=\text{Decoder}(Z_{v}) \tag{19}\] \[\mathbf{A}_{d}^{*}=\text{Decoder}(Z_{d},Z_{v}) \tag{20}\]
where \(\mathbf{X}_{v}\) is an identity matrix. The \(\text{Encoder}_{GCN}\) and the decoders are applied following VGAE (Kunze et al., 2017), while \(\text{Encoder}_{UDMP}\) is a unidirectional message passing variant of \(\text{Encoder}_{GCN}\). The training objective is minimising the reconstruction error, and \(Z_{d}\) is used for the classification task.
### Document Nodes as a Graph
To show the global structure of the corpus directly, some models only adopt document nodes in the non-heterogeneous graph.
**knn-GCN(Chen et al., 2017)** knn-GCN constructs a k-nearest-neighbours graph by connecting documents with their \(K\) nearest neighbours using the Euclidean distances of their embeddings. The embeddings are generated in an unsupervised way: either using the mean of pretrained GloVe word vectors or applying LDA (Kunze et al., 2017). Both GCN and an attention-based GNN (Kunze et al., 2017) are used as graph models.
**TextGTL(Wang et al., 2017)** Similar to TensorGCN, TextGTL (Text-oriented Graph-based Transductive Learning) constructs three different document graphs: a Semantics Text Graph, a Syntax Text Graph, and a Context Text Graph, all of which are non-heterogeneous. The Semantics Text Graph uses Generalized Canonical Correlation Analysis (Chen et al., 2017) and trains a classifier to determine the edge values between two document nodes. The Syntax Text Graph uses the Stanford CoreNLP dependency parser (Sutskever et al., 2017) to construct units and also trains a classifier. The Context Text Graph defines the edge values by summing the PMI values of the overlapping words in two documents. Two GCN layers are applied, and for all three graphs the outputs of each layer are mixed to form the output of that layer and the input of the next:
\[\mathbf{H}^{(1)}=\sigma(\mathbf{A}\mathbf{H}^{(0)}\mathbf{W}^{(0)}) \tag{21}\] \[\mathbf{H}^{(2)}=\sigma(\mathbf{A}[\mathbf{H}^{(1)}_{sem}||\mathbf{H}^{(1)}_{syn}||\mathbf{H}^{(1)}_{seq}]\mathbf{W}^{(1)}) \tag{22}\] \[Z=\text{Pooling}_{mean}(\mathbf{H}^{(2)}_{sem},\mathbf{H}^{(2)}_{syn},\mathbf{H}^{(2)}_{seq}) \tag{23}\]
where \(\mathbf{H}^{(0)}\) is the TF-IDF vector representation of the documents. Data augmentation with super nodes is also applied in TextGTL to strengthen the information in the graph models.
### Word Nodes as a Graph
By neglecting document nodes in the graph, a graph with only word nodes shows good performance in deriving graph-based embeddings used for downstream tasks. Since no document nodes are included, this method can easily be adapted as an inductive learning model.
**VGCN-BERT(Sutskever et al., 2017)** VGCN-BERT enhances the input embedding of BERT by concatenating it with a graph embedding. It first constructs a vocabulary graph and uses PMI as the edge values. A variant of the GCN layer called VGCN (Vocabulary GCN) is applied to derive the graph word embeddings:
\[\mathbf{X}_{Graph}=\text{ReLU}(\mathbf{X}_{BERT}\mathbf{A}\mathbf{W}^{(0)})\mathbf{W}^{(1)} \tag{24}\]
where the BERT embedding is used as the input. The graph word embeddings are concatenated with the BERT embedding and fed into BERT as extra information.
### Extra Topic Nodes in the Graph
Topic information of each document can also provide extra information in corpus-level graph neural networks. Several models also include topic nodes in the graph.
#### 3.4.1. Single Layer Topic nodes: HGAT, STGCN
**HGAT**[(64)] HGAT (Heterogeneous GAT) applies LDA [(12)] to extract topic information for each document; the top \(P\) topics with the largest probabilities are connected to the document. Instead of using words directly, to utilize external knowledge HGAT applies the entity linking tool TAGME1 to identify the entities in the document and connects them. Semantic similarity between entities, computed using pretrained Word2vec with a threshold, is used to define the connectedness between entity nodes. Since the graph is heterogeneous, a HIN (heterogeneous information network) model is implemented which propagates solely on each sub-graph depending on the node type. The HGAT model considers type-level attention and node-level attention: for a given node, type-level attention learns the weights of different types of neighbouring nodes, while node-level attention captures the importance of different neighbouring nodes while ignoring types. By using this dual attention mechanism, HGAT can capture type and node information at the same time.
Footnote 1: [https://sobigdata.dscience.org/group/tagme/](https://sobigdata.dscience.org/group/tagme/)
**STGCN**[(130)] For short text classification, STGCN (Short-Text GCN) applies BTM to get topic information, avoiding the data sparsity problem of LDA. The graph is constructed following TextGCN, with extra topic nodes included. The edge values of word-topic and document-topic pairs come from BTM, and a classical two-layer GCN is applied. The word embeddings learned by STGCN are concatenated with BERT embeddings, and a bi-LSTM model is applied for final prediction.
#### 3.4.2. Multi-layer Topic Nodes: DHTG
**DHTG**[(120)] To capture different levels of information, DHTG (Dynamic Hierarchical Topic Graph) introduces hierarchical topic-level nodes in the graph, from fine-grained to coarse. The Poisson gamma belief network (PGBN) [(145)] is used as a probabilistic deep topic model. The first-layer topics come from combinations of words, while deeper layers are generated from previous layers' topics with the weights of PGBN, and those weights serve as the edge values between layers of topics. For topics on the same layer, cosine similarity is chosen as the edge value. A two-layer GCN is applied, and the model is learned jointly with PGBN, which makes the topic edges dynamic.
### Critical Analysis
Compared with sequential models like CNN and LSTM, corpus-level GNNs are able to capture the global corpus structure information, with word nodes as bridges between document nodes, and show great performance without using external resources like pretrained embeddings or pretrained models. However, the improvement in performance is marginal when pretrained embeddings are included. Another issue is that most corpus-level GNNs are transductive, which is not applicable in the real world. Meanwhile, constructing the whole corpus into a graph requires a large memory space, especially when the dataset is large.
A detailed comparison of corpus-level GNN is displayed in Table 2.
## 4. Document-level GNN for Text Classification
By constructing the graph based on each document, a graph classification model can be used as a text classification model. Since each document is represented by one graph and new graphs can be built for test documents, the model can easily work in an inductive way.
### Local Word Consecutive Graph
The simplest way to convert a document into a graph with words as nodes is by connecting the consecutive words within a sliding window.
#### 4.1.1. Simple consecutive graph models: Text-Level-GNN, MPAD, TextING.
**Text-Level-GNN[(37)]** Text-Level-GNN applies a small sliding window and constructs graphs with a small number of nodes and edges per document, which saves memory and computation time. The edge values are trainable and shared across graphs when connecting the same two words, which also brings global information.
Unlike corpus-level graph models, Text-Level-GNN applies a message passing mechanism (MPM) [(30)] instead of GCN for graph learning. For each node, neighbour information is aggregated using max-pooling with trainable edge values as the AGGREGATE function, and a weighted sum is used as the COMBINE function. To get the representation of each graph, sum-pooling and an MLP classifier are applied as the READOUT function. The propagation is:
\[\mathbf{h}_{i}^{(l+1)}=(1-\alpha)\big(\max_{n\in\mathcal{N}_{i}}e_{ni}\mathbf{h}_{n}^{(l)}\big)+\alpha\mathbf{h}_{i}^{(l)} \tag{25}\] \[\mathbf{z}=\text{softmax}(\mathbf{W}\Sigma_{i}\mathbf{h}_{i}+\mathbf{b}) \tag{26}\]
where \(\mathbf{h}_{i}^{(l)}\) is the representation of the \(i\)-th word node at layer \(l\), and \(e_{ni}\) is the edge weight from node \(n\) to node \(i\). A two-layer MPM is applied, and the input of each graph is pretrained GloVe vectors.
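The message passing of Eqs. (25)-(26) can be sketched as follows; the dense edge-weight matrix `E`, the neighbour lists, and the value of `alpha` are illustrative assumptions:

```python
import numpy as np

def mpm_step(H, neighbors, E, alpha=0.3):
    # One layer of Eq. (25): max-pooled weighted messages plus a residual.
    # H: (n, d) node states; neighbors[i]: neighbour ids of node i;
    # E[n, i]: (trainable) edge weight from node n to node i.
    H_new = H.copy()
    for i, nbrs in enumerate(neighbors):
        if nbrs:
            msgs = np.stack([E[n, i] * H[n] for n in nbrs])
            H_new[i] = (1 - alpha) * msgs.max(axis=0) + alpha * H[i]
    return H_new

def readout(H, W, b):
    # Eq. (26): sum-pooling followed by a linear layer and softmax.
    logits = W @ H.sum(axis=0) + b
    e = np.exp(logits - logits.max())
    return e / e.sum()
```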
**MPAD[(86)]** MPAD (Message Passing Attention Networks) connects words within a sliding window of size 2 and also includes an additional master node connected to all nodes in the graph. The edges only indicate the connectedness of each pair of word nodes and are fixed. A variant of Gated Graph Neural Networks is applied, where the AGGREGATE function is a weighted sum and the COMBINE function is a GRU [(18)]. Self-attention is applied in the READOUT function.
To learn high-level information, the master node is directly concatenated with the READOUT output, working as a skip connection mechanism. To get the final representation, each layer's READOUT results are concatenated to capture multi-granularity information. Pretrained Word2vec is used as the initialization of the word node inputs.
**TextING[(143)]** To simplify MPAD, TextING drops the master node from the document-level graphs, which makes the graphs sparser. Compared with Text-Level-GNN, TextING keeps fixed edges. Similar AGGREGATE and COMBINE functions are applied following Gated Graph Neural Networks (GGNN) [(58)], with a weighted sum and GRU. However, for the READOUT function, soft attention is used, and both max-pooling and mean-pooling are applied to make sure that "every word plays a role in the text and the keywords should contribute more explicitly".
#### 4.1.2. Advanced graph models: MLGNN, TextSSL, DADGNN.
**MLGNN[61]** MLGNN (Multi-level GNN) builds the same graph as TextING but introduces three levels of MPM: bottom-level, middle-level and top-level. In the bottom-level MPM, the same method as Text-Level-GNN is applied with pretrained Word2vec as the input embedding, but the edges are non-trainable. In the middle level, a larger window size is adopted and Graph Attention Networks (GAT) [115] are applied to learn information from distant word nodes. In the top-level MPM, all word nodes are connected and multi-head self-attention [114] is applied. By applying three different levels of MPM, MLGNN learns multi-granularity information well.
**DADGNN[70]** DADGNN (Deep Attention Diffusion GNN) constructs the same graph as TextING but uses attention diffusion to overcome the oversmoothing issue. Pretrained word embeddings are used as node inputs and an MLP layer is applied. Then, the graph attention matrix is calculated based on attention over the hidden states of each node. The diffusion matrix is calculated as
\[T=\Sigma_{n=0}^{\infty}\epsilon_{n}\mathbf{A}^{n} \tag{27}\]
where \(\mathbf{A}\) is the graph attention matrix and the \(\epsilon_{n}\) are learnable coefficients. \(\mathbf{A}^{n}\) plays the role of connecting \(n\)-hop neighbours, and Liu et al. [70] use \(n\in[4,7]\) in practice. A multi-head diffusion matrix is applied for layer propagation.
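A hedged sketch of a truncated attention diffusion matrix following Eq. (27); the attention matrix and the decaying coefficients here are random placeholders, not the model's learned values:

```python
import numpy as np

def attention_diffusion(A_att, eps):
    # Truncated version of T = sum_n eps_n * A^n from Eq. (27).
    T = np.zeros_like(A_att)
    A_n = np.eye(A_att.shape[0])   # A^0
    for e in eps:
        T += e * A_n
        A_n = A_n @ A_att          # advance to the next hop
    return T

rng = np.random.default_rng(0)
A_att = rng.random((5, 5))
A_att /= A_att.sum(axis=1, keepdims=True)  # row-normalised attention matrix
eps = [0.5 ** k for k in range(8)]         # decaying coefficients (assumed)
T = attention_diffusion(A_att, eps)
```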
**TextSSL[95]** To solve the word ambiguity problem and capture word synonymity and dynamic contextual dependency, TextSSL (Sparse Structure Learning) learns the graph using intra-sentence and inter-sentence neighbours simultaneously. Local syntactic neighbours are defined as consecutive words, and trainable edges across graphs are also included by using Gumbel-softmax. By applying sparse structure learning, TextSSL manages to select edges with dynamic contextual dependencies.
### Global Word Co-occurrence Graph
Similar to the TextGCN graph, document-level graphs can also use PMI as the word-word edge values.
#### 4.2.1. Only global word co-occurrence: DAGNN
**DAGNN[125]** To address the long-distance dependency, hierarchical information, and cross-domain learning challenges in domain-adversarial text classification tasks, Wu et al. [125] propose DAGNN (Domain-Adversarial Graph Neural Network). Each document is represented by a graph with content words as nodes and PMI values as edge values, which captures long-distance dependency information. Pretrained FastText is chosen as the input word embedding to handle the out-of-vocabulary issue, and a GCN model with skip connections is used to address the oversmoothing problem. The propagation is formulated as:
\[\mathbf{H}^{(l+1)}=(1-\alpha)\mathbf{\tilde{A}}\mathbf{H}^{(l)}+\alpha\mathbf{H}^{(0)} \tag{28}\]
To learn the hierarchical information of documents, DiffPool [136] is applied to assign each document to a set of clusters. Finally, adversarial training is used to minimize the loss on source tasks and maximize the differentiation between source and target tasks.
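A minimal sketch of iterating the propagation rule in Eq. (28); `A_tilde`, `H0`, the depth, and `alpha` are assumed inputs:

```python
import numpy as np

def dagnn_propagate(A_tilde, H0, num_layers=4, alpha=0.1):
    # Each layer mixes neighbourhood smoothing with a skip connection
    # back to the initial features H^(0), as in Eq. (28).
    H = H0
    for _ in range(num_layers):
        H = (1 - alpha) * A_tilde @ H + alpha * H0
    return H
```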
#### 4.2.2. Combine with Extra Edges: ReGNN, GFN
**ReGNN[56]** To capture both global and local information, ReGNN (Recursive Graphical Neural Network) uses PMI together with consecutive-word relations as the word edges. The graph propagation function is the same as GGNN, while additive attention [7] is applied in aggregation. Pretrained GloVe is the input embedding of each word node.
**GFN[20]** GFN (Graph Fusion Network) builds four types of graphs using word co-occurrence statistics, PMI, the similarity of pretrained embeddings, and the Euclidean distance of pretrained embeddings. Although four corpus-level graphs are built, the graph learning actually happens on the subgraphs of each document, making the method a document-level GNN. For each subgraph, each type of graph is learned separately using a graph convolutional method and then fused by concatenation. After an MLP layer, average pooling is applied to get the document representation.
### Other word graphs
Some other ways of connecting words in a document have been explored.
**HyperGAT[(25)]** Ding et al. [(25)] propose HyperGAT (Hypergraph Attention Networks), which builds a hypergraph for each document to capture high-level interactions between words. Two types of hyperedges are included: sequential hyperedges connecting all words in a sentence, and semantic hyperedges connecting the top-K words after obtaining the topic of each word using LDA. Like traditional hypergraph propagation, HyperGAT follows the same two updating steps but with an attention mechanism to highlight key information: node-level attention is applied to learn hyperedge representations, and edge-level attention is used for updating node representations.
**IGCN[(110)]** Contextual dependency helps in understanding a document, and graph neural networks are no exception. IGCN constructs the graph from the dependency graph to show the connectedness of each pair of words in a document. Then, word representations learned by a Bi-LSTM using POS embeddings and word embeddings are used to calculate the similarity between each pair of nodes. Attention is applied to the output to find important, relevant semantic features.
**GTNT[(80)]** Words with higher TF-IDF values should connect to more word nodes. With this in mind, GTNT (Graph Transformer Networks based Text representation) uses sorted TF-IDF values to determine the degree of each node and applies the Havel-Hakimi algorithm [(32)] to determine the edges between word nodes. A variant of GAT is applied during model learning. Although GAT's attention score is mutual for two nodes, GTNT uses relevant importance to adjust the attention score from one node to another. Pretrained Word2vec is applied as the input of each node.
### Critical Analysis
Most document-level GNNs connect consecutive words as edges in the graph and apply a graph neural network model, which makes them similar to CNNs, where the receptive field enlarges as graph models go deeper. Also, the major differences among document-level GNNs lie in the details of the graph models, e.g. different pooling methods and different attention calculations, which diminishes the impact of the contributions of these works. Compared with corpus-level GNNs, document-level GNNs adopt more complex graph models but also suffer from out-of-memory issues when the number of words in a document is large.
A detailed comparison of document-level GNN is displayed in Table 2.
## 5. Datasets and Metrics
### Datasets
There are many popular text classification benchmark datasets; this paper mainly focuses on the datasets used by GNN-based text classification applications. Based on the purpose of the applications, we divide the commonly adopted datasets into three types: _Topic Classification_, _Sentiment Analysis_ and _Other_. Most of these text classification datasets contain a single target label for each text body. The key information of each dataset is listed in Table 3.
#### 5.1.1. Topic Classification.
Topic classification models aim to classify input text bodies from diverse sources into predefined categories. News categorization is a typical topic classification task that extracts key information from news and classifies it into corresponding topics. The input text bodies are normally paragraphs or whole documents, especially for news categorization, while there are also some short text classification datasets from certain domains such as micro-blogs, bibliographies, etc. Some typical datasets are listed:
* _Ohsumed_ [(40)] is acquired from the MEDLINE database and further processed by [(133)], who selected certain documents (abstracts) and filtered out documents belonging to multiple categories. The documents are classified into 23 cardiovascular disease categories. The statistics of the Ohsumed dataset processed by [(133)] are represented in Table 3, and this version is directly employed by other related works.
* _R8 / R52_ are two subsets of the Reuters 21578 dataset 2 which contain 8 and 52 news topics from Reuters financial news services, respectively. Footnote 2: For the original Reuters 21578 dataset, please refer to this link [http://www.daviddlewis.com/resources/testcollections/reuters21578](http://www.daviddlewis.com/resources/testcollections/reuters21578)
* _20NG_ is another widely used news categorization dataset that contains 20 newsgroups. It was originally collected by [(50)], but the procedures are not explicitly described.
| **Graph** | **Model** | **External Resource** | **Edge Construction** | **Node Initialization** | **Learning** |
|---|---|---|---|---|---|
| Corpus-level | TextGCN [133] | N/A | PMI, TF-IDF | one-hot | transductive |
| Corpus-level | SGC [123] | N/A | PMI, TF-IDF | one-hot | transductive |
| Corpus-level | S\({}^{2}\)GC [148] | N/A | PMI, TF-IDF | one-hot | transductive |
| Corpus-level | NMGC [53] | N/A | PMI, TF-IDF | one-hot | transductive |
| Corpus-level | TG-Transformer [137] | GloVe | PMI, TF-IDF | GloVe | transductive |
| Corpus-level | BertGCN [63] | BERT | PMI, TF-IDF | doc: 0; word: BERT emb | transductive |
| Corpus-level | TensorGCN [68] | GloVe, CoreNLP | emb sim, dep graph, PMI, TF-IDF | one-hot | transductive |
| Corpus-level | ME-GCN [118] | N/A | emb sim, TF-IDF | trained Word2vec/Doc2vec | transductive |
| Corpus-level | HeteGCN [98] | N/A | PMI, TF-IDF | one-hot | inductive |
| Corpus-level | InducT-GCN [119] | N/A | PMI, TF-IDF | one-hot, TF-IDF vectors | inductive |
| Corpus-level | T-VGAE [127] | N/A | PMI | one-hot | inductive |
| Corpus-level | VGCN-BERT [73] | BERT | PMI | BERT emb | transductive |
| Corpus-level | knn-GCN [5] | GloVe | emb sim | GloVe | transductive |
| Corpus-level | TextGTL [54] | CoreNLP | dep graph, PMI | TF-IDF vectors | transductive |
| Corpus-level | HGAT [64] | TAGME, Word2vec | LDA, entity link, emb sim | TF-IDF, LDA, Word2vec | transductive |
| Corpus-level | STGCN [130] | BERT | PMI, TF-IDF, BTM | BERT emb | transductive |
| Corpus-level | DHTG [120] | N/A | PGBN, PMI, TF-IDF | one-hot | transductive |
| Doc-level | Text-Level-GNN [37] | GloVe | consecutive words | GloVe | inductive |
| Doc-level | MPAD [86] | Word2vec | consecutive words | Word2vec | inductive |
| Doc-level | TextING [143] | GloVe | consecutive words | GloVe | inductive |
| Doc-level | MLGNN [61] | Word2vec | consecutive words | Word2vec | inductive |
| Doc-level | DADGNN [70] | Word2vec/GloVe | consecutive words | Word2vec/GloVe | inductive |
| Doc-level | TextSSL [95] | GloVe | consecutive words | GloVe | inductive |
| Doc-level | DAGNN [125] | GloVe | PMI | GloVe | inductive |
| Doc-level | ReGNN [56] | GloVe | consecutive words, PMI | GloVe | inductive |
| Doc-level | GFN [20] | GloVe | PMI, emb sim | GloVe | inductive |
| Doc-level | HyperGAT [25] | N/A | LDA, consecutive words | one-hot | inductive |
| Doc-level | IGCN [110] | spaCy | dep graph | LSTM emb | inductive |
| Doc-level | GTNT [80] | Word2vec/GloVe | sorted TF-IDF values | Word2vec/GloVe | inductive |

Table 2. Detailed comparison of models: whether external resources are used, how edges and node inputs are constructed, and whether learning is transductive or inductive. GloVe and Word2vec are pretrained if not specified. "emb sim" is short for "embedding similarity"; "dep graph" is short for "dependency graph".
* _AG News_ [141] is a large-scale news categorization dataset compared with other commonly used datasets, constructed by selecting the top-4 largest categories from the AG corpus. Each news topic contains 30,000 samples for training and 1,900 samples for testing.
* _Database systems and Logic Programming (DBLP)_ is a topic classification dataset for classifying computer science paper titles into six topics [80]. Different from paragraph- or document-based topic classification datasets, DBLP aims to categorise scientific paper titles into corresponding categories, so the average input length is much lower than in other datasets.
* _DBpedia_ [52] is a large-scale multilingual knowledge base with 14 non-overlapping categories. Each category contains 40,000 samples for training and 5,000 samples for testing.
* _WebKB_[19] is a long corpus web page topic classification dataset.
* _TREC_[57] is a question topic classification dataset to categorise one question sentence into 6 question categories.
#### 5.1.2. Sentiment Analysis.
The purpose of sentiment analysis is to analyse and mine the opinions in textual content, which can be treated as a binary or multi-class classification problem. The sources of existing sentiment analysis tasks include movie reviews, product reviews or user comments, social media posts, etc. Most sentiment analysis datasets aim to predict people's opinions from one or two input sentences, where the average length of each input text body is around 25 tokens.
Table 3. Key information of the commonly used text classification datasets.
* _Movie Review (MR)_ [(89)] is a binary sentiment classification dataset of movie reviews containing equally distributed positive and negative samples. Each review contains only one sentence.
* _Stanford Sentiment Treebank (SST)_[(106)] is an upgraded version of MR which contains two subsets SST-1 and SST-2. SST-1 provides five fine-grained labels while SST-2 is a binary sentiment classification dataset.
* _Internet Movie DataBase (IMDB)_ [(74)] is also an equally distributed binary classification dataset for sentiment analysis. Different from other short text classification datasets, the average number of words per review is around 221.
* _Yelp 2014_[(109)] is a large scale binary category based sentiment analysis dataset for longer user reviews collected from Yelp.com.
Certain binary sentiment classification benchmark datasets are also used by GNN-based text classifiers. Most of them are gathered from shorter user reviews or comments (normally one or two sentences) from different websites including Amazon Alexa Reviews (_AAR_), Twitter US Airline (_TUA_), Youtube comments (_SenTube-A_ and _SenTube-T_) [(113)].
#### 5.1.3. Other Datasets
There are some datasets targeting other tasks, including hate detection, grammaticality checking, etc. For example, _ArangoHate_ [(4)] is a hate detection dataset, a sub-task of intent detection, which contains 2,920 hateful documents and 4,086 normal documents obtained by resampling the merged datasets from [(21)] and [(121)]. In addition, [(26)] proposes another large-scale hate language detection dataset, namely _FountaHate_, to classify tweets into four categories with 53,851, 14,030, 27,150, and 4,965 samples of normal, spam, hateful and abusive tweets, respectively. Since there is no officially provided train/test splitting ratio for the above datasets, the numbers presented in Table 3 follow the ratios (train/development/test of 85:5:10) defined by [(73)].
#### 5.1.4. Dataset Summary
Since an obvious limitation of corpus-level GNN models is their high memory consumption [(37; 25; 137)], datasets with smaller numbers of documents and smaller vocabulary sizes, such as Ohsumed, R8/R52, 20NG or MR, are widely used so that corpus-level graphs can feasibly be built and evaluated. For document-level GNN based models, larger datasets like AG-News can be adopted without the memory consumption problem. From Table 3, we find that most related works mainly focus on GNNs applied to topic classification and sentiment analysis, which means the role of GNNs in other text classification tasks such as spam detection, intent detection and abstractive question answering needs to be further exploited. Another observed trend is that short text classification has gained less attention than long document classification. In this case, GNNs for short text classification may be a promising direction for further exploration.
### Evaluation Methods
#### 5.2.1. Performance Metrics
To evaluate and compare the performance of proposed models with other baselines, accuracy and F1 are the most commonly used metrics for overall performance analysis, ablation studies and breakdown analysis. We use \(TP\), \(FP\), \(TN\) and \(FN\) to represent the number of true positive, false positive, true negative and false negative samples, and \(N\) is the total number of samples.
* _Accuracy_ and _Error Rate_: these are basic evaluation metrics adopted by many GNN-based text classifiers such as [(54; 67; 120; 133; 137)]. Most related papers run all baselines and their models 10 or 5 times and report the \(mean\pm standard\ deviation\) of accuracy for more convincing results. They are defined as: \[Accuracy=\frac{(TP+TN)}{N},\] (29) \[ErrorRate=1-Accuracy=\frac{(FP+FN)}{N}.\] (30)
* _Precision_, _Recall_ and _F1_: these metrics measure performance especially on imbalanced datasets. Precision measures the relevancy of the results, while recall measures how many truly relevant results are acquired. F1 is the harmonic mean of Precision and Recall. The three measurements are defined as: \[Precision=\frac{TP}{(TP+FP)},\] (31) \[Recall=\frac{TP}{(TP+FN)},\] (32) \[F1=\frac{2\times Precision\times Recall}{(Precision+Recall)},\] (33) A few papers only utilise recall or precision to evaluate performance [80]. However, precision and recall are more commonly used together with F1 or Accuracy to evaluate and analyse the performance from different perspectives, e.g. [56, 64, 73, 127]. In addition, based on different application scenarios, different F1 averaging methods are adopted to measure the overall F1 score of multi-class (number of classes \(C\)) classification tasks, including:
* _Macro-F1_ applies the same weights to all categories to get overall \(F1_{macro}\) by taking the arithmetic mean. \[F1_{macro}=\frac{1}{C}\Sigma_{i=1}^{C}F1_{i},\] (34)
* _Micro-F1_ is calculated by considering the overall \(P_{micro}\) and \(R_{micro}\). It can be defined as: \[F1_{micro}=\frac{2\times P_{micro}\times R_{micro}}{(P_{micro}+R_{micro})}\] (35) where: \[P_{micro}=\frac{\Sigma_{i\in C}TP_{i}}{\Sigma_{i\in C}(TP_{i}+FP_{i})},\quad R_{micro}=\frac{\Sigma_{i\in C}TP_{i}}{\Sigma_{i\in C}(TP_{i}+FN_{i})},\] (36)
* _Weighted-F1_ is the weighted mean of the per-category F1 scores, where the weight \(W_{i}\) is related to the number of occurrences of the corresponding \(i\)-th class: \[F1_{weighted}=\Sigma_{i=1}^{C}F1_{i}\times W_{i},\] (37) A small computation sketch of these averaging modes follows this list.
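As referenced above, here is a minimal sketch of the three F1 averaging modes using scikit-learn (assumed available); the labels are toy values:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0, 2]  # toy ground-truth labels
y_pred = [0, 2, 2, 2, 1, 0, 1]  # toy predictions

print(f1_score(y_true, y_pred, average="macro"))     # Eq. (34)
print(f1_score(y_true, y_pred, average="micro"))     # Eqs. (35)-(36)
print(f1_score(y_true, y_pred, average="weighted"))  # Eq. (37)
```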
#### 5.2.2. Other Evaluation Aspects.
Since two limitations of GNN-based models are time and memory consumption, in addition to the commonly used quantitative performance comparison, many related studies also report the GPU or CPU memory consumption and the training time efficiency of their proposed models to demonstrate practicality in real-world applications. In addition, based on the novelties of various models, specific evaluation methods are conducted to demonstrate the proposed contributions.
* _Memory Consumption_: [25, 37, 70] list the memory consumption of different models for comprehensively evaluating the proposed models in computational efficiency aspect.
* _Time Measurement_: [90; 98] compare the training time of their proposed models and baselines on different benchmarks. Due to doubts about the efficiency of applying GNNs to text classification, this is an effective way to demonstrate that the proposed models balance performance and time efficiency well.
* _Parameter Sensitivity_ is commonly conducted in GNN studies to investigate the effect of different hyperparameters, e.g. varying sliding window sizes and embedding dimensions, representing the model sensitivity via line charts, as in [70; 25; 64].
* _Number of Labelled Documents_ is a widely adopted evaluation method in GNN-based text classification models [25; 54; 80; 120; 133; 64], which analyses the performance trend under different proportions of training data to test whether the proposed model can work well with limited labelled training data.
* _Vocabulary Size_ is similar to the number of labelled documents but it investigates the effects of using different sizes of vocabulary during the GNN training stage adopted by [120].
#### 5.2.3. Metrics Summary.
For general text classification tasks, Accuracy, Precision, Recall and various F1 scores are the commonly used evaluation metrics for comparing with other baselines. However, for GNN-based models, reporting performance alone cannot effectively represent the multiple aspects of the proposed models. Hence, many papers conduct additional evaluations to analyse GNN-based classifiers from multiple views, including time and memory consumption, model sensitivity and dataset quantity.
## 6. Performance
While different GNN text classification models may be evaluated on different datasets, some datasets are commonly used across many of these models, including **20NG**, **R8**, **R52**, **Ohsumed** and **MR**. The accuracy of various models assessed on these five datasets is presented in Table 4. Some of the results are reported as ten-run average accuracy with standard deviation, while some only report the average accuracy. Several conclusions can be drawn:
| **Type** | **Method** | **External Resource** | **20NG** | **R8** | **R52** | **Ohsumed** | **MR** |
|---|---|---|---|---|---|---|---|
| Corpus-level | TextGCN [133] | N/A | 86.34 ± 0.09 | 97.07 ± 0.10 | 93.56 ± 0.18 | 68.36 ± 0.56 | 76.74 ± 0.20 |
| Corpus-level | SGC [123] | N/A | 88.5 ± 0.1 | 97.2 ± 0.1 | 94.0 ± 0.2 | 68.5 ± 0.3 | 75.9 ± 0.3 |
| Corpus-level | S\({}^{2}\)GC [148] | N/A | 88.6 ± 0.1 | 97.4 ± 0.1 | 94.5 ± 0.2 | 68.5 ± 0.1 | 76.7 ± 0.0 |
| Corpus-level | TG-transformer [137] | GloVe | - | 98.1 ± 0.1 | 95.2 ± 0.2 | 70.4 ± 0.4 | - |
| Corpus-level | DHTG [120] | N/A | 87.13 ± 0.07 | 97.33 ± 0.06 | 93.93 ± 0.10 | 68.80 ± 0.33 | 77.21 ± 0.11 |
| Corpus-level | TensorGCN [68] | GloVe, CoreNLP | 87.74 ± 0.05 | 98.04 ± 0.08 | 95.05 ± 0.11 | 70.11 ± 0.24 | 77.91 ± 0.07 |
| Corpus-level | STGCN [134] | BERT | - | 98.5 | - | - | 82.5 |
| Corpus-level | NMGC [53] | N/A | 86.61 ± 0.06 | 97.31 ± 0.09 | 94.35 ± 0.06 | 69.21 ± 0.17 | 76.21 ± 0.25 |
| Corpus-level | BertGCN [63] | BERT | 89.3 | 98.1 | 96.6 | 72.8 | 86 |
| Corpus-level | RobertaGCN [63] | RoBERTa | 89.5 | 98.2 | 96.1 | 72.8 | 89.7 |
| Corpus-level | T-VGAE [127] | N/A | 88.08 ± 0.06 | 97.68 ± 0.14 | 95.05 ± 0.10 | 70.02 ± 0.14 | 78.03 ± 0.11 |
| Doc-level | ReGNN [56] | GloVe | - | 97.93 ± 0.31 | 95.17 ± 0.17 | 67.93 ± 0.33 | 78.71 ± 0.56 |
| Doc-level | Text-Level-GNN [37] | GloVe | 84.16 ± 0.25\* | 97.8 ± 0.2 | 94.6 ± 0.3 | 69.4 ± 0.6 | 75.47 ± 0.06\* |
| Doc-level | TextING [143] | GloVe | - | 98.13 ± 0.12 | 95.68 ± 0.35 | 70.84 ± 0.52 | 80.19 ± 0.31 |
| Doc-level | HyperGAT [25] | N/A | 86.62 ± 0.16 | 97.97 ± 0.23 | 94.98 ± 0.27 | 69.90 ± 0.34 | 78.32 ± 0.27 |
| Doc-level | TextSSL [95] | GloVe | 85.26 ± 0.28 | 97.81 ± 0.14 | 95.48 ± 0.26 | 70.59 ± 0.38 | 79.74 ± 0.19 |

Table 4. Performance comparison (accuracy). "-" indicates unavailability. \* refers to replication from HyperGAT [25].
* Models that use external resources usually achieve better performance than those that do not, especially models with BERT and RoBERTa(Rosen et al., 2019; Wang et al., 2020).
* Under the same setting, such as using GloVe as the external resource, Corpus-level GNN models (e.g. TG-Transformer(Wang et al., 2020), TensorGCN(Wang et al., 2020)) typically outperform Document-level GNN models (e.g. TextING(Wang et al., 2020), TextSSL(Wang et al., 2020)). This is because Corpus-level GNN models can work in a transductive way and make use of the test input, whereas Document-level GNN models can only use the training data.
* The advantage of Corpus-level GNN models over Document-level GNN models holds only for topic classification datasets, not for sentiment analysis datasets such as **MR**. This is because sentiment analysis requires modeling the order of words in a text, which most Corpus-level GNN models cannot capture.
## 7. Challenges and Future Work
### Model Performance
With the development of pre-trained language models (e.g., BERT, RoBERTa) and prompt learning methods, strong performance on text classification has been achieved. GNN-based text classification models that do not adopt this pre-training style cannot reach such performance. For both corpus-level and document-level GNN text classification models, researching how to combine GNNs with these pre-trained models to further improve performance is a promising direction for future work. Meanwhile, more advanced graph models can be explored, e.g. more expressive heterogeneous graph models on word and document graphs.
### Graph Construction
Most GNN text classification methods construct graphs with single, static-valued edges based on document statistics. This applies to both corpus-level and document-level GNNs. To better capture the complex relationships between words and documents, however, dynamic edges and hyperedges can be utilized. Dynamic edge weights can be learned from various sources, such as the graph structure, document semantics, or other models, and hyperedges can provide a more expressive representation of the complex relationships between nodes in the graph. A sketch of the current static construction is given below.
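For concreteness, the static construction discussed above looks roughly as follows (a minimal sketch assuming scikit-learn; the function name is ours, and the word-word PMI edges used by models such as TextGCN are omitted for brevity):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def build_word_doc_graph(docs):
    """Build a static word-document adjacency matrix whose doc-word
    edges carry fixed TF-IDF weights, as in corpus-level GNNs."""
    W = TfidfVectorizer().fit_transform(docs).toarray()  # docs x words
    n_docs, n_words = W.shape
    A = np.eye(n_docs + n_words)        # self-loops
    A[:n_docs, n_docs:] = W             # document -> word edges
    A[n_docs:, :n_docs] = W.T           # word -> document edges
    return A
```

The dynamic or hyperedge constructions proposed above would replace these fixed weights with learned ones.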
### Application
While corpus-level GNN text classification models have demonstrated good performance without using external resources, these models are mostly transductive. To apply them in real-world settings, an inductive learning approach should be explored. Although some inductive corpus-level GNNs have been introduced, the large amount of space required to construct the graph and the inconvenience of incremental training still present barriers to deployment. Improving the scalability of online training and testing for inductive corpus-level GNNs represents a promising area for future work.
## 8. Conclusion
This survey article introduces how Graph Neural Networks have been applied to text classification in two different ways: corpus-level GNN and document-level GNN, with a detailed structural figure. Details of these models have been introduced and discussed, along with the datasets commonly used by these methods. Compared with traditional machine learning and sequential deep learning models, graph neural networks can exploit the relationships between words and documents in the global structure (corpus-level GNN) or the local document (document-level GNN), yielding good performance. A detailed performance comparison is conducted to investigate the influence of external resources, model learning methods, and types of datasets. Furthermore, we discuss the challenges of GNN text classification models and potential future work.
|
2305.14765 | Masked Bayesian Neural Networks : Theoretical Guarantee and its
Posterior Inference | Bayesian approaches for learning deep neural networks (BNN) have been
received much attention and successfully applied to various applications.
Particularly, BNNs have the merit of having better generalization ability as
well as better uncertainty quantification. For the success of BNN, searching
for an appropriate architecture of the neural network is an important task,
and various algorithms to find good sparse neural networks have been proposed. In
this paper, we propose a new node-sparse BNN model which has good theoretical
properties and is computationally feasible. We prove that the posterior
concentration rate to the true model is near minimax optimal and adaptive to
the smoothness of the true model. In particular the adaptiveness is the first
of its kind for node-sparse BNNs. In addition, we develop a novel MCMC
algorithm which makes the Bayesian inference of the node-sparse BNN model
feasible in practice. | Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim | 2023-05-24T06:16:11Z | http://arxiv.org/abs/2305.14765v1 | # Masked Bayesian Neural Networks : Theoretical Guarantee and its Posterior Inference
###### Abstract
Bayesian approaches for learning deep neural networks (BNN) have received much attention and been successfully applied to various applications. Particularly, BNNs have the merit of better generalization ability as well as better uncertainty quantification. For the success of BNN, searching for an appropriate architecture of the neural network is an important task, and various algorithms to find good sparse neural networks have been proposed. In this paper, we propose a new node-sparse BNN model which has good theoretical properties and is computationally feasible. We prove that the posterior concentration rate to the true model is near minimax optimal and adaptive to the smoothness of the true model. In particular, the adaptiveness is the first of its kind for node-sparse BNNs. In addition, we develop a novel MCMC algorithm which makes Bayesian inference of the node-sparse BNN model feasible in practice.
Machine Learning, ICML
Various algorithms to find good sparse neural networks have been suggested. A popular approach is to iteratively prune unnecessary edges (Han et al., 2015; Frankle and Carbin, 2018) or unnecessary nodes (Wen et al., 2016; He et al., 2017; Wang et al., 2019; Chin et al., 2020). However, such algorithms do not provide proper uncertainty quantification. By analyzing image data with CNNs, we illustrate that our node-sparse BNN is good at compressing DNNs without hampering the ability of uncertainty quantification.
Our contributions are summarized as follows:
* We develop a node-sparse prior for DNNs such that the posterior concentration rate to the true model is near minimax optimal and adaptive to the smoothness of the true model, and such that Bayesian inference with a specially designed MCMC algorithm is possible.
* We implement an efficient MCMC algorithm for searching good node-sparse BNNs. In particular, we develop a locally informed proposal distribution in the Metropolis-Hastings (MH) algorithm to search good node-sparse architectures efficiently.
* By numerical experiments, we illustrate that the mBNN outperforms other Bayesian approaches including nonsparse BNN and VI algorithms in terms of generalization and uncertainty quantification.
## 2 Preliminaries
### Notation
Let \(\mathbb{R}\) and \(\mathbb{N}\) be the sets of real numbers and natural numbers, respectively. For an integer \(n\in\mathbb{N}\), we denote \([n]:=\{1,\ldots,n\}\). A capital letter denotes a random variable or matrix interchangeably whenever its meaning is clear, and a vector is denoted by a bold letter, e.g. \(\mathbf{x}:=(x_{1},\ldots,x_{d})^{\top}\). For a \(d\)-dimensional vector \(\mathbf{x}\in\mathbb{R}^{d}\), we denote \(|\mathbf{x}|_{p}:=(\sum_{j=1}^{d}|x_{j}|^{p})^{1/p}\) for \(1\leq p<\infty\), \(|\mathbf{x}|_{0}:=\sum_{j=1}^{d}\mathbb{I}(x_{j}\neq 0)\) and \(|\mathbf{x}|_{\infty}:=\max_{j\in[d]}|x_{j}|\). For a real-valued function \(f:\mathcal{X}\to\mathbb{R}\) and \(1\leq p<\infty\), we denote \(||f||_{p,n}:=(\sum_{i=1}^{n}f(\mathbf{x}_{i})^{p}/n)^{1/p}\) and \(||f||_{p,\mathrm{P}_{\mathbf{X}}}:=\left(\int_{\mathbf{X}\in\mathcal{X}}f(\mathbf{X})^{p}d\mathrm{P}_{\mathbf{X}}\right)^{1/p}\), where \(\mathrm{P}_{\mathbf{X}}\) is a probability measure defined on the input space \(\mathcal{X}\). Moreover, we define \(||f||_{\infty}:=\sup_{\mathbf{x}\in\mathcal{X}}|f(\mathbf{x})|\). For \(b_{1}\leq b_{2}\), we define \(f_{[b_{1},b_{2}]}(\cdot):=\min(\max(f(\cdot),b_{1}),b_{2})\), the version of \(f\) truncated at \(b_{1}\) and \(b_{2}\). We denote by \(\circ\) and \(\odot\) the composition of functions and the element-wise product of vectors or matrices, respectively. For a probability vector \(\mathbf{q}\in\mathbb{R}^{r}\), \(\mathrm{Cat}(\mathbf{q})\) denotes the categorical distribution with category probabilities \(\mathbf{q}\), and \(\mathrm{Multi}_{(N,\mathbf{q})}\) denotes the distribution of \(N\) balls selected without replacement from a jar containing \(r\) balls whose selection probabilities are \(\mathbf{q}\). For technical simplicity, we assume \(\mathcal{X}=[-1,1]^{d}\).
### Data generating process
We consider two supervised learning problems : regression and classification. In regression problems, the input vector \(\mathbf{X}\) and the response variable \(Y\in\mathbb{R}\) are generated from the model
\[\begin{split}\mathbf{X}\sim&\mathrm{P}_{\mathbf{X}},\\ Y|\mathbf{X}\sim& N(f_{0}(\mathbf{X}),\sigma_{0}^{2}), \end{split} \tag{1}\]
where \(\mathrm{P}_{\mathbf{X}}\) is the probability measure defined on \(\mathcal{X}\). Here, \(f_{0}:\mathcal{X}\to\mathbb{R}\) and \(\sigma_{0}^{2}>0\) are the unknown true regression function and unknown variance of the noise, respectively.
For \(K\)-class classification problems, the input vector \(\mathbf{X}\) and the response variable \(Y\in[K]\) are generated from the model
\[\begin{split}\mathbf{X}\sim&\mathrm{P}_{\mathbf{X}},\\ Y|\mathbf{X}\sim&\mathrm{Cat}\left(\mathrm{softmax}(\bm {f}_{0}(\mathbf{X}))\right),\end{split} \tag{2}\]
where \(\mathrm{P}_{\mathbf{X}}\) is the probability measure defined on \(\mathcal{X}\) and \(\mathbf{f}_{0}:\mathcal{X}\to\mathbb{R}^{K}\) is the logit of the unknown true conditional class probability function.
## 3 Masked Bayesian Neural Network
To construct a node-sparse BNN, we propose to use masking vectors which screen some nodes of the hidden layers. We first define the masked Deep Neural Network (mDNN) model, and then propose the mBNN on the top of the mDNN by specifying a prior appropriately.
### Deep Neural Network
For \(L\in\mathbb{N}\) and \(\mathbf{p}=(p^{(0)},p^{(1)},...,p^{(L)},p^{(L+1)})^{\top}\in\mathbb{N}^{L+2}\), DNN with the \((L,\mathbf{p})\) architecture is a DNN model which has \(L\) hidden layers and \(p^{(l)}\) many nodes at the \(l\)-th hidden layer for \(l\in[L]\). The input and output dimensions are \(p^{(0)}\) and \(p^{(L+1)}\), respectively. The output of the DNN model can be written as
\[f_{\mathbf{\theta}}^{\mathrm{DNN}}(\cdot):=A_{L+1}\circ\rho\circ A_{L}\cdots\circ \rho\circ A_{1}(\cdot), \tag{3}\]
where \(A_{l}:\mathbb{R}^{p^{(l-1)}}\mapsto\mathbb{R}^{p^{(l)}}\) for \(l\in[L+1]\) is an affine map defined as \(A_{l}(\mathbf{x}):=W_{l}\mathbf{x}+\mathbf{b}_{l}\) with \(W_{l}\in\mathbb{R}^{p^{(l)}\times p^{(l-1)}}\) and \(\mathbf{b}_{l}\in\mathbb{R}^{p^{(l)}}\) and \(\rho\) is the RELU activation function. The DNN model is parameterized by \(\mathbf{\theta}\) which is the concatenation of the weight matrices and bias vectors, that is
\[\mathbf{\theta}:=(\mathrm{vec}(W_{1})^{\top},b_{1}^{\top},\ldots,\mathrm{vec}(W_{L+ 1})^{\top},b_{L+1}^{\top})^{\top}.\]
### Masked Deep Neural Network
For a given standard DNN, the corresponding masked DNN (mDNN) is constructed by simply adding masking parameters to the standard DNN model. For \(l\in[L]\), the mDNN
model screens the nodes at the \(l\)-th hidden layer using the binary masking vector \(\mathbf{m}^{(l)}\in\{0,1\}^{p^{(l)}}\). When \(\left(\mathbf{m}^{(l)}\right)_{j}=0,\) the \(j\)-th node at the \(l\)-th hidden layer becomes inactive. The output of the mDNN model with the \((L,\mathbf{p})\) architecture can be written as
\[f_{\mathbf{M},\mathbf{\theta}}^{\mathrm{mDNN}}(\cdot):=A_{L+1}\circ\rho_{\mathbf{m}^{(L)}} \circ A_{L}\cdots\circ\rho_{\mathbf{m}^{(1)}}\circ A_{1}(\cdot),\]
where \(A_{l}\) for \(l\in[L+1]\) is the affine map defined in (3) and \(\rho_{\mathbf{m}^{(l)}}:\mathbb{R}^{p^{(l)}}\rightarrow\mathbb{R}^{p^{(l)}}\) for \(l\in[L]\) is the masked-RELU activation function defined as
\[\rho_{\mathbf{m}^{(l)}}\left(\begin{array}{c}x_{1}\\ \vdots\\ x_{p^{(l)}}\end{array}\right):=\mathbf{m}^{(l)}\odot\left(\begin{array}{c}\max (x_{1},0)\\ \vdots\\ \max(x_{p^{(l)}},0)\end{array}\right).\]
The model is parameterized by \(\mathbf{M}\) and \(\mathbf{\theta}\), where \(\mathbf{M}\in\{0,1\}^{\sum_{l=1}^{L}p^{(l)}}\) is the concatenate of the all masking vectors
\[\mathbf{M}:=\left(\mathbf{m}^{(1)}{}^{\top},\ldots,\mathbf{m}^{(L)}{}^{\top}\right)^{\top}\]
and \(\mathbf{\theta}\) is the concatenation of the weight matrices and the bias vectors in the standard DNN model.
Note that the mDNN is nothing but a standard DNN with the architecture \((L,\vec{\mathbf{p}})\) where \((\vec{\mathbf{p}})_{l}=\left|\mathbf{m}^{(l)}\right|_{0}.\) That is, the mDNN is a reparameterization of the standard DNN using the masking vectors. This reparameterization, however, allows us to develop an efficient MCMC algorithm, in particular for searching good architectures (i.e. good masking vectors).
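For concreteness, the forward pass of the mDNN can be sketched in a few lines of PyTorch (a minimal sketch; the class name, the storage of masks as plain tensors, and the omission of the truncation \(f_{[-F,F]}\) are our simplifications, not the released implementation):

```python
import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    """Sketch of the mDNN: a standard MLP whose l-th hidden activation
    is multiplied elementwise by a binary masking vector m^(l)."""

    def __init__(self, dims):
        # dims = (p^(0), p^(1), ..., p^(L), p^(L+1))
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(dims[l], dims[l + 1]) for l in range(len(dims) - 1)]
        )
        # One mask per hidden layer; all nodes start active. In the MCMC,
        # these are updated by the MH step rather than by gradient descent.
        self.masks = [torch.ones(dims[l]) for l in range(1, len(dims) - 1)]

    def forward(self, x):
        for l, layer in enumerate(self.layers[:-1]):
            x = self.masks[l] * torch.relu(layer(x))  # masked-RELU
        return self.layers[-1](x)
```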
### Prior and posterior distribution
We adopt a data-dependent prior \(\Pi_{n}\) on the parameters \(\mathbf{M}\) and \(\mathbf{\theta}\) (as well as \(\sigma^{2}\) for regression problems). We consider a data-dependent prior to ensure the optimal posterior concentration rate and adaptiveness. We assume that a priori \(\mathbf{M}\) and \(\mathbf{\theta}\) (as well as \(\sigma^{2}\) for regression problems) are independent.
For \(\mathbf{M}\), we use the following hierarchical prior. For each \(\mathbf{m}^{(l)}\) with \(l\in[L]\), let \(s^{(l)}:=|\mathbf{m}^{(l)}|_{0}\) be the sparsity of the masking vector \(\mathbf{m}^{(l)}\). We put a prior mass on \(s^{(l)}\) by
\[\Pi_{n}(s^{(l)})\propto e^{-(\lambda\log n)^{5}s^{(l)^{2}}}\qquad\text{ for }s^{(l)}\in[p^{(l)}], \tag{4}\]
where \(\lambda>0\) is a hyper-parameter. Note that the prior on \(s^{(l)}\) regularizes the width of the network, and stronger regularization is enforced as more data are accumulated. Given the sparsity level \(s^{(l)}\), the masking vector \(\mathbf{m}^{(l)}\) is sampled from the set \(\{\mathbf{m}^{(l)}\in\{0,1\}^{p^{(l)}}:|\mathbf{m}^{(l)}|_{0}=s^{(l)}\}\) uniformly. In other words,
\[\Pi_{n}(\mathbf{m}^{(l)}|s^{(l)})\overset{iid}{\propto}\frac{1}{\binom{p^{(l)}}{s ^{(l)}}}\mathbb{I}(|\mathbf{m}^{(l)}|_{0}=s^{(l)}) \tag{5}\]
and the prior of \(\mathbf{m}^{(l)}\) is the product of (4) and (5).
For the prior of \(\mathbf{\theta}\), we assume that
\[\theta_{i}\overset{iid}{\sim}\mathfrak{p}(\theta_{i})\qquad i\in[T]\,, \tag{6}\]
where \(T:=\sum_{l=0}^{L}(p^{(l)}+1)p^{(l+1)}\) is the length of the vector \(\mathbf{\theta}\), and choose \(\mathfrak{p}\) carefully to ensure desirable theoretical properties. For high-dimensional linear regression problems, Castillo & van der Vaart (2012) and Castillo et al. (2015) notice that using a heavy-tailed distribution for the prior of the regression coefficients is essential for theoretical optimality. Motivated by these observations, we consider a heavy-tailed distribution for \(\mathfrak{p}\). Let \(\mathfrak{P}\) be the class of polynomial tail distributions on \(\mathbb{R}\) defined as
\[\mathfrak{P}:=\left\{\mathfrak{p}:\lim_{x\rightarrow\infty}\frac{x^{-\log x }}{\mathfrak{p}(x)}\to 0\text{ and }\lim_{x\rightarrow\infty}\frac{x^{-\log x}}{\mathfrak{p}(-x)} \to 0\right\}.\]
Examples of polynomial tail distributions are the Cauchy distribution and Student's t-distribution. On the other hand, the Gaussian and Laplace distributions do not belong to \(\mathfrak{P}\). We assume that \(\mathfrak{p}\) belongs to \(\mathfrak{P}\). We will show in Section 4 that any prior in \(\mathfrak{P}\) yields the optimal posterior concentration rate.
For the prior of \(\sigma^{2}\) in regression problems, a standard distribution such as the inverse-gamma distribution can be used. Any distribution whose density is positive at the true \(\sigma_{0}^{2}\) works for theoretical optimality.
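To make the prior concrete, the following NumPy sketch (hypothetical helper names; Student's t is just one member of the polynomial-tail class \(\mathfrak{P}\)) draws one masking vector from the hierarchical prior (4)-(5) and the weights from (6):

```python
import numpy as np

def sample_mask_prior(p_l, n, lam=0.1, rng=None):
    """Draw m^(l): sparsity s^(l) with mass prop. to exp(-(lam*log n)^5 s^2),
    then s^(l) active nodes chosen uniformly at random."""
    rng = np.random.default_rng() if rng is None else rng
    s_vals = np.arange(1, p_l + 1)
    log_w = -((lam * np.log(n)) ** 5) * s_vals.astype(float) ** 2
    w = np.exp(log_w - log_w.max())              # stabilized weights
    s = rng.choice(s_vals, p=w / w.sum())        # sparsity level s^(l)
    m = np.zeros(p_l, dtype=int)
    m[rng.choice(p_l, size=s, replace=False)] = 1
    return m

def sample_theta_prior(T, df=3, rng=None):
    """Draw the T weights/biases i.i.d. from a polynomial-tail density."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.standard_t(df, size=T)
```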
### Comparison with MC-dropout
The idea of masking nodes in a DNN has already been used in various algorithms. Dropout (Srivastava et al., 2014) and MC-dropout (Gal & Ghahramani, 2016) are two representative examples, where the nodes of a DNN are randomly masked during the training phase. While Dropout abolishes the masking vectors and uses a scaled-down version of the trained weights in the prediction phase, MC-dropout multiplies the trained weights by random masking vectors at prediction time. Since the masking vectors of the mBNN are treated as random vectors following their posterior distribution in the prediction phase, the mBNN is similar to MC-dropout. A key difference between the mBNN and MC-dropout, however, is that the mBNN learns the distribution of the masking vectors from data via the posterior distribution, whereas MC-dropout does not. That is, the mBNN learns the architecture of the DNN from data, which gives the mBNN good theoretical and empirical properties.
## 4 Theoretical optimalities
In this section, we derive the posterior concentration rates of the mBNNs for regression and classification problems, which are minimax optimal up to a logarithmic factor. In addition, we show that the mBNN achieves the optimal sparsity
asymptotically. We assume that \(\mathcal{D}^{(n)}:=\{(\mathbf{X}_{i},Y_{i})\}_{i\in[n]}\) are independent copies following the true distribution \(\mathbb{P}_{0}\) specified by either the model (1) or model (2).
### Posterior concentration rate for nonparametric regression
We consider the nonparametric regression model (1). We assume the true regression function \(f_{0}\) belongs to the \(\beta\)-Hölder class \(\mathcal{H}_{d}^{\beta}\).\({}^{1}\) Here, the \(\beta\)-Hölder class \(\mathcal{H}_{d}^{\beta}\) is given as
Footnote 1: Theoretical results for hierarchical composition functions are provided in Appendix B.
\[\mathcal{H}_{d}^{\beta}:=\{f:[-1,1]^{d}\to\mathbb{R};||f||_{\mathcal{H}^{\beta} }<\infty\},\]
where \(||f||_{\mathcal{H}^{\beta}}\) denotes the Hölder norm defined by
\[||f||_{\mathcal{H}^{\beta}}:=\sum_{\mathbf{\alpha}:|\mathbf{\alpha}|_{1}<\beta}\|\partial^{\mathbf{\alpha}}f\|_{\infty}+\sum_{\mathbf{\alpha}:|\mathbf{\alpha}|_{1}=\lfloor\beta\rfloor}\ \sup_{\substack{\mathbf{x}_{1},\mathbf{x}_{2}\in[-1,1]^{d}\\ \mathbf{x}_{1}\neq\mathbf{x}_{2}}}\frac{|\partial^{\mathbf{\alpha}}f(\mathbf{x}_{1})-\partial^{\mathbf{\alpha}}f(\mathbf{x}_{2})|}{|\mathbf{x}_{1}-\mathbf{x}_{2}|_{\infty}^{\beta-\lfloor\beta\rfloor}}.\]
For inference, we consider the probabilistic model
\[Y_{i}\stackrel{{ ind.}}{{\sim}}N(f^{\rm mDNN}_{\mathbf{M},\mathbf{\theta} [-F,F]}(\mathbf{X}_{i}),\sigma^{2}),\]
where \(f^{\rm mDNN}_{\mathbf{M},\mathbf{\theta}}(\mathbf{X}_{i})\) has the \((L_{n},\mathbf{p}_{n})\) architecture with \(L_{n}\) and \(\mathbf{p}_{n}\) given as
\[L_{n}:= \left\lceil C_{L}\log n\right\rceil, \tag{7}\] \[p_{n}:= \left\lceil C_{p}\sqrt{n}\right\rceil,\] \[\mathbf{p_{n}}:= (d,p_{n},\dots,p_{n},1)^{\top}\in\mathbb{N}^{L_{n}+2} \tag{8}\]
for positive constants \(C_{L}\) and \(C_{p}\) that are defined in Lemma A.1. Then, the likelihood \(\mathcal{L}(w|\mathcal{D}^{(n)})\) of \(w:=(\mathbf{M},\mathbf{\theta},\sigma^{2})\) is expressed as
\[(2\pi\sigma^{2})^{-\frac{n}{2}}\exp\left(-\frac{\sum_{i=1}^{n}(Y_{i}-f^{\rm mDNN }_{\mathbf{M},\mathbf{\theta}\left\lfloor-F,F\right\rfloor}(\mathbf{X}_{i}))^{2}}{2\sigma ^{2}}\right),\]
and the corresponding posterior distribution is given as
\[\Pi_{n}(w\mid\mathcal{D}^{(n)})\propto\Pi_{n}(w)\mathcal{L}(w|\mathcal{D}^{(n )}),\]
where \(\Pi_{n}\) is the prior defined on Section 3.3.
In the following theorem, we show that the mBNN model achieves the optimal (up to a logarithmic factor) posterior concentration rate to the true regression function.
**Theorem 4.1** (Posterior Concentration of the mBNN for regression problems).: _Assume \(f_{0}\in\mathcal{H}_{d}^{\beta}\), \(\beta<d\), and there exist \(F>0\) and \(\sigma_{max}^{2}>0\) such that \(\|f_{0}\|_{\infty}\leq F\) and \(\sigma_{0}^{2}\leq\sigma_{max}^{2}\). Consider the mDNN model with the (\(L_{n},\mathbf{p}_{n}\)) architecture, where \(L_{n}\) and \(\mathbf{p}_{n}\) are given in (7) and (8). If we put the prior given as (4), (5) and (6) over \(\mathbf{M}\), \(\mathbf{\theta}\) and any prior on \(\sigma^{2}\) whose density (with respect to Lebesgue measure) is positive on its support \((0,\sigma_{max}^{2}]\), the posterior distribution concentrates to the \(f_{0}\) and \(\sigma_{0}^{2}\) at the rate \(\varepsilon_{n}=n^{-\beta/(2\beta+d)}\log^{\gamma}(n)\) for \(\gamma>\frac{5}{2}\) in the sense that_
\[\Pi_{n}\Big((f,\sigma^{2}):||f-f_{0}||_{2,\mathbb{P}_{X}}+|\sigma^{2}-\sigma_{0}^{2}|>M_{n}\varepsilon_{n}\;\Big|\;\mathcal{D}^{(n)}\Big)\stackrel{\mathbb{P}_{0}^{n}}{\to}0\]
_as \(n\to\infty\) for any \(M_{n}\to\infty\), where \(\mathbb{P}_{0}^{n}\) is the probability measure of the training data \(\mathcal{D}^{(n)}\)._
The convergence rate \(n^{-\beta/(2\beta+d)}\) is known to be the minimax lower bound for estimating \(\beta\)-Hölder smooth functions (Tsybakov, 2009). Our concentration rate is thus near optimal up to a logarithmic factor and adaptive to the smoothness of the true model.
**Comparison with other works.** Similar convergence rates are derived in the non-Bayesian theoretical deep learning literature (Schmidt-Hieber, 2020; Kohler and Langer, 2021). However, the architectures considered in Schmidt-Hieber (2020) and Kohler and Langer (2021) depend on the smoothness \(\beta\) of the true regression function, which is rarely known in practice. In contrast, the architecture and prior of the mBNN do not depend on the smoothness \(\beta\), which makes the mBNN very attractive.
Polson and Rockova (2018), Cherief-Abdellatif (2020) and Bai et al. (2020) also derive near-optimal concentration rates for edge-sparse BNNs. Posterior computation for their smoothness-adaptive models, however, is almost impossible, and the inferential cost of edge-sparse BNNs can be large. Jantre et al. (2021) provide a posterior concentration rate for node-sparse BNNs, but their result neither guarantees minimax optimality nor adapts to the smoothness of the true model. Theorem 4.1 is the first theoretical optimality result for Bayesian analysis of node-sparse DNNs.
### Posterior concentration rate for binary classification
Theoretical results for the regression problem can be extended to the classification problem. We consider the classification problem (2) with \(K=2\), and denote \(f_{0}:=\left(\mathbf{f}_{0}\right)_{2}-\left(\mathbf{f}_{0}\right)_{1}\). We assume that \(f_{0}\) belongs to the \(\beta\)-Hölder class \(\mathcal{H}_{d}^{\beta}\). For inference, we consider the probabilistic model
\[Y_{i}\stackrel{{ ind.}}{{\sim}}\mathrm{Bernoulli}(\phi\circ f^{\rm mDNN }_{\mathbf{M},\mathbf{\theta}[-F,F]}(\mathbf{X}_{i})),\]
where \(\phi\) is the sigmoid function and \(f^{\rm mDNN}_{\mathbf{M},\mathbf{\theta}}\) has the \((L_{n},\mathbf{p}_{n})\) architecture with \(L_{n}\) and \(\mathbf{p}_{n}\) given as (7) and (8). Then, the likelihood \(\mathcal{L}(w|\mathcal{D}^{(n)})\) of \(w:=(\mathbf{M},\mathbf{\theta})\) is expressed as
\[\prod_{i=1}^{n}(\phi\circ f_{\mathbf{M},\mathbf{\theta}[-F,F]}^{\rm mDNN}(\mathbf{X}_{i}))^{Y_ {i}}(1-\phi\circ f_{\mathbf{M},\mathbf{\theta}[-F,F]}^{\rm mDNN}(\mathbf{X}_{i}))^{1-Y_{i}}.\]
In the following theorem, we prove that the mBNN model achieves the optimal (up to a logarithmic factor) posterior concentration rate to the true conditional class probability adaptive to the smoothness of the true model.
**Theorem 4.2** (Posterior Concentration of the mBNN for classification problems).: _Assume \(f_{0}\in\mathcal{H}_{d}^{\beta}\), \(\beta<d\), and there exists \(F>0\) such that \(\|f_{0}\|_{\infty}\leq F\). Consider the mDNN model with the (\(L_{n},\mathbf{p}_{n}\)) architecture where \(L_{n}\) and \(\mathbf{p}_{n}\) are given in (7) and (8). If we put the prior given as (4), (5) and (6) over \(\mathbf{M}\) and \(\mathbf{\theta}\), the posterior distribution concentrates to the true conditional class probability at the rate \(\varepsilon_{n}=n^{-\beta/(2\beta+d)}\log^{\gamma}(n)\) for \(\gamma>\frac{5}{2}\) in the sense that_
\[\Pi_{n}\Big{(}f:||\phi\circ f-\phi\circ f_{0}||_{2,\mathbb{P}_{X}}>M_{n} \varepsilon_{n}\;\Big{|}\;\mathcal{D}^{(n)}\Big{)}\stackrel{{ \mathbb{P}_{0}^{n}}}{{\to}}0\]
_as \(n\to\infty\) for any \(M_{n}\to\infty\), where \(\mathbb{P}_{0}^{n}\) is the probability measure of the training data \(\mathcal{D}^{(n)}\)._
### Guaranteed sparsity of the mBNN
Beyond the fast concentration rate, the mBNN also achieves the optimal sparsity. Theorem 4.3, a by-product of the proofs of Theorems 4.1 and 4.2, gives the level of sparsity of the mBNN.
**Theorem 4.3** (Guaranteed sparsity of the mBNN).: _Under the assumptions in Theorem 4.1 or Theorem 4.2, we have_
\[\Pi_{n}\Big{(}f:f=f_{\mathbf{M},\mathbf{\theta}[-F,F]}^{\rm mDNN},\;\max_{l\in[L]}|\mathbf{ m}^{(l)}|_{0}>\mathfrak{s}_{n}\;\Big{|}\;\mathcal{D}^{(n)}\Big{)}\stackrel{{ \mathbb{P}_{0}^{n}}}{{\to}}0\]
_as \(n\to\infty\), where \(\mathfrak{s}_{n}\) is defined as_
\[\mathfrak{s}_{n}:=\left\lceil C_{p}\left(n^{\frac{d}{2\beta+d}}(\log n)\right) ^{1/2}\right\rceil\]
_and \(\mathbb{P}_{0}^{n}\) is the probability measure of the training data \(\mathcal{D}^{(n)}\)._
Yarotsky (2017) proves that the lower bound on the sparsity of DNNs needed to approximate functions in \(\mathcal{H}_{d}^{\beta}\) is equal to \(\mathfrak{s}_{n}\) up to a logarithmic factor. That is, the mBNN automatically learns the optimal sparsity from data.
## 5 Posterior inference
Let \(\mathcal{D}^{(n)}:=\{(\mathbf{X}_{i},Y_{i})\}_{i\in[n]}\) be training data. Let \(w\) denote all of the parameters in the mBNN, that is, \(w:=(\mathbf{M},\mathbf{\theta},\sigma^{2})\) for the regression problem (1) and \(w:=(\mathbf{M},\mathbf{\theta})\) for the classification problem (2). For a new test example \(\mathbf{x}\in\mathcal{X}\), the prediction with the mBNN is done by the predictive distribution:
\[p(y\mid\mathbf{x},\mathcal{D}^{(n)})=\int_{w}p(y\mid\mathbf{x},w)\Pi_{n}(w\mid \mathcal{D}^{(n)})dw,\]
where \(\Pi_{n}(w\mid\mathcal{D}^{(n)})\) is the posterior distribution of \(w\). When the integral is difficult to evaluate, it is common to approximate it by the Monte Carlo method
\[p(y\mid\mathbf{x},\mathcal{D}^{(n)})\approx\frac{1}{T}\sum_{t=1}^{T}p(y\mid\mathbf{x},w^{(t)}),\]
where \(w^{(t)}\sim\Pi_{n}(w\mid\mathcal{D}^{(n)})\). In this section, we develop an MCMC algorithm to sample \(w\) efficiently from \(\Pi_{n}(w\mid\mathcal{D}^{(n)})\).
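Schematically, the approximation reads as follows (a sketch; `predict` is an assumed user-supplied function that draws or evaluates \(y\) given \(\mathbf{x}\) and one posterior sample \(w^{(t)}\)):

```python
import numpy as np

def predictive_summary(x, posterior_samples, predict, alpha=0.05):
    """Monte Carlo approximation of the predictive distribution:
    one draw of y per posterior sample w^(t), then the mean and a
    (1 - alpha) predictive interval from the empirical quantiles."""
    draws = np.array([predict(x, w) for w in posterior_samples])
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    return draws.mean(), (lo, hi)
```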
### MCMC algorithm
The proposed MCMC algorithm samples \(\mathbf{\theta}\) (and \(\sigma^{2}\)) given \(\mathbf{M}\) and \(\mathcal{D}^{(n)}\) and then samples \(\mathbf{M}\) given \(\mathbf{\theta}\) (and \(\sigma^{2}\)) and \(\mathcal{D}^{(n)}\), and iterates these two samplings until convergence.
There are various efficient sampling algorithms for \(\mathbf{\theta}\) (and \(\sigma^{2}\)) given \(\mathbf{M}\) and \(\mathcal{D}^{(n)}\) such as Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), Stochastic Gradient Langevin Dynamics (SGLD) (Welling and Teh, 2011) and Stochastic Gradient HMC (SGHMC) (Chen et al., 2014). In practice, we select a sampling algorithm among those depending on the sizes of data and model.
For generating \(\mathbf{M}\) from its conditional posterior, we consider the Metropolis-Hastings (MH) algorithm. The hardest part is to design a good proposal distribution since the dimension of \(\mathbf{M}\) is quite large and all entries are binary. In the next subsection, we propose an efficient proposal distribution for \(\mathbf{M}\).
### Proposal for the MH algorithm
Essentially, sampling \(\mathbf{M}\) amounts to sampling a high-dimensional binary vector. A well-known strategy for doing so is to use the MH algorithm with the locally informed proposal (Umrigar, 1993; Zanella, 2020) given as
\[q\left(\mathbf{M}^{\star}\mid\mathbf{M}\right)\propto e^{\frac{1}{2}(l(\mathbf{M}^{\star})- l(\mathbf{M}))}\mathbb{I}\left(\mathbf{M}^{\star}\in H(\mathbf{M})\right), \tag{9}\]
where \(l(\mathbf{M})\) is the log-posterior of \(\mathbf{M}\) and \(H(\mathbf{M})\) is the Hamming ball of a certain size around \(\mathbf{M}\). The key point of (9) is to give more probability to \(\mathbf{M}^{\star}\) whose posterior probability is high. While powerful, this locally informed proposal requires computing \(l(\mathbf{M}^{\star})\) for every \(\mathbf{M}^{\star}\in H(\mathbf{M})\), which is time-consuming. To resolve this problem, we propose a proposal distribution which only uses the information of the current \(\mathbf{M}\).
First, we select either _birth_ or _death_, where _birth_ makes some inactive nodes become active and _death_ makes some active nodes become inactive. When _death_ is selected, motivated by pruning algorithms (Lee et al., 2018; Tanaka et al., 2020), our strategy is to prune less sensitive nodes. That is, we delete nodes that do not affect the current DNN model much when their values are changed. On the other hand, when _birth_ is selected, we choose some of the inactive nodes with equal probabilities and make them active. We use equal probabilities because the posterior of the edges connected to the inactive nodes is the same as the prior, so there is no reason to prefer certain inactive nodes over others. Moreover, the proposal with equal probabilities for _birth_ helps increase the acceptance rate of _death_, resulting in fast mixing.
To be more specific, we first select the move \(u\in\{0,1\}\) with probability 1/2, where \(u=0\) and \(u=1\) indicates _birth_ and _death_, respectively. In addition, we select an integer \(N\) from \([N_{\max}]\) uniformly, where \(N_{\max}\) is a prespecified positive integer. Then, we select randomly \(N\) nodes among the nodes whose masking values are \(u\) following \(\mathrm{Multi}_{(N,\mathbf{Q}_{u})}\), where
\[\mathbf{Q}_{0}\propto\mathbb{I}(\mathbf{m}_{j}^{(l)}=0),\] (_birth_) (10) \[\mathbf{Q}_{1}\propto\mathbb{I}(\mathbf{m}_{j}^{(l)}=1)\exp\left(-|\nabla l (\mathbf{M})|/2\right).\] (_death_) (11)
Even though \(l(\mathbf{M})\) is defined only on binary vectors, the gradient \(\nabla l(\mathbf{M})=\partial l(\mathbf{M})/\partial\mathbf{M}\) of \(l(\mathbf{M})\) can be defined by extending the domain of \(l(\mathbf{M})\) appropriately. Finally, we flip the masking values of the selected nodes to have a new proposal \(\mathbf{M}^{*}\). To sum up, the proposal distribution first selects a set of nodes \(\{i_{1},\ldots,i_{N}\}\) following \(\mathrm{Multi}_{(N,\mathbf{Q}_{u})}\) and changes their mask values to \((1-\mathbf{M}_{i_{1}},\ldots,1-\mathbf{M}_{i_{N}})\). Then, we accept the new proposal \(\mathbf{M}^{*}\) with probability
\[\min\left(1,\frac{\Pi_{n}(\mathbf{M}^{*}\mid X^{(n)},Y^{(n)},\mathbf{\theta})}{\Pi_{n }(\mathbf{M}\mid X^{(n)},Y^{(n)},\mathbf{\theta})}\frac{q\left(\mathbf{M}\mid\mathbf{M}^{*} \right)}{q\left(\mathbf{M}^{*}\mid\mathbf{M}\right)}\right), \tag{12}\]
where
\[\frac{q\left(\mathbf{M}\mid\mathbf{M}^{*}\right)}{q\left(\mathbf{M}^{*}\mid\mathbf{M}\right)} =\frac{\mathrm{Multi}_{(N,\mathbf{Q}_{1-u})}(\{i_{1},\ldots,i_{N}\})}{\mathrm{Multi }_{(N,\mathbf{Q}_{u})}(\{i_{1},\ldots,i_{N}\})}\]
and \(\mathbf{Q}_{1-u}^{*}\) is the selection probability vector defined by (10) or (11) with the mask vectors \(\mathbf{M}^{*}\). Note that some inactive nodes become active when \(u=0\) (i.e. _birth_ of nodes) and some active nodes become inactive when \(u=1\) (i.e. _death_ of nodes). The proposed MH algorithm is summarized in Algorithm 1.
```
1: Sample \(u\sim\mathrm{Bernoulli}(0.5)\), \(N\sim\mathrm{Uniform}([N_{\max}])\).
2: Calculate the selection probability \(\mathbf{Q}_{u}\) using (10) or (11).
3: Sample \(N\) many nodes \(\{i_{1},\ldots,i_{N}\}\) by \[\{i_{1},\ldots,i_{N}\}\sim\mathrm{Multi}_{(N,\mathbf{Q}_{u})}.\]
4:\(\mathbf{M}^{*}=\mathrm{flip}(\mathbf{M}^{\text{curr}},\{i_{1},\ldots,i_{N}\})\), where \(\mathbf{M}^{\text{curr}}\) is the current masking vectors.
5: Accept or reject \(\mathbf{M}^{*}\) with the acceptance probability (12).
```
**Algorithm 1** The proposed MH algorithm
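Steps 1-4 of Algorithm 1 can be sketched as follows (the acceptance step (12), which also requires the reverse-move probabilities, is omitted; `grad_l` is assumed to be computed by backpropagating through a relaxation of \(l(\mathbf{M})\), and the masks of all layers are concatenated into one vector):

```python
import numpy as np

def propose_mask(M, grad_l, N_max, rng):
    """One birth/death proposal for the binary mask vector M."""
    u = rng.integers(2)                      # 0: birth, 1: death
    N = rng.integers(1, N_max + 1)           # number of nodes to flip
    if u == 0:                               # birth: uniform over inactive nodes
        weights = (M == 0).astype(float)
    else:                                    # death: prefer less sensitive nodes
        weights = (M == 1) * np.exp(-np.abs(grad_l) / 2)
    probs = weights / weights.sum()
    idx = rng.choice(len(M), size=N, replace=False, p=probs)
    M_new = M.copy()
    M_new[idx] = 1 - M_new[idx]              # flip the selected masks
    return M_new, idx, u
```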
One may consider a linear approximation of \(l\) in (9) using the gradient information at \(\mathbf{M}\), as suggested by Grathwohl et al. (2021) and Zhang et al. (2022). However, we found that the linear approximation is not accurate for the mBNN, partly because the corresponding DNN is not locally smooth enough. In Section 6.5, we provide the results of an experiment comparing several proposal distributions, which support the choice of (10) and (11).
## 6 Experiment
In this section, we perform experiments to empirically justify the usefulness of the mBNN. In Sections 6.1 and 6.2, we conduct experiments demonstrating the necessity of masking variables using simulated and real datasets. In Section 6.3, we apply the mBNN to a Bayesian structural time series model to illustrate its practical usefulness. In Section 6.4, we extend the mBNN to CNNs and experimentally show that it is also useful for compressing large complex DNNs. In Section 6.5, we investigate the efficiency of the proposal distributions (10) and (11) in the MH algorithm. All the experimental details as well as the results of additional numerical experiments are given in Appendices D and E. The code is available at [https://github.com/ggong369/mBNN](https://github.com/ggong369/mBNN).
### Simulation
We obtain the predictive distribution for the noisy polynomial regression problem considered in Hernandez-Lobato and Adams (2015). Inputs \(x_{i}\) are sampled from \(x_{i}\sim\mathrm{Uniform}(-4,4)\) and the corresponding outputs \(y_{i}\) are obtained by \(y_{i}=x_{i}^{3}+\epsilon_{i}\), where \(\epsilon_{i}\sim N(0,9)\). We generate 20 training examples and compare the predictive distribution of the mBNN with that of BNN by generating 1000 MCMC samples from each model. For each model, a two-hidden-layer MLP with layer sizes (1000, 1000) is used.
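The training data can be reproduced in a few lines (a sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-4, 4, size=20)           # 20 training inputs
y = x ** 3 + rng.normal(0, 3, size=20)    # y = x^3 + eps, eps ~ N(0, 9)
```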
The means and 95% predictive intervals of the predictive distributions of BNN and the mBNN are presented in Figure 1. The figure illustrates that BNN quantifies uncertainty on data-sparse regions overly, producing a predictive interval that is too wide. Note that the true regression function is assumed to be smooth, so information from data-dense regions can be transferred to data-sparse regions; thus proper uncertainty quantification even on data-sparse regions is possible. The coverage probability of the predictive interval of the mBNN is 95.3%, obtained by generating an additional 1000 test samples, while that of BNN is 98.6%. The results indicate that deleting unnecessary nodes is important not only for fast convergence rates but also for proper uncertainty quantification.

Figure 1: **Simulated data.** Predictive distributions of BNN and the mBNN on simulated data. The true polynomial function and the noisy observations are plotted by the red line and black dots, respectively. For each predictive distribution, the mean is drawn by the blue line and the 95% predictive interval is shown as the shaded area.
### Real dataset
We evaluate BNN, node-sparse VI (NS-VI) (Louizos et al., 2017) and the mBNN on four UCI regression datasets (_Boston, Concrete, Energy, Yacht_). For each dataset, we construct 20 random 90-to-10 train-test splits to provide standard errors. For each method, a two-hidden-layer MLP with layer sizes (1000, 1000) is used. We select 20 models from several thousand MCMC samples and compare the predictive distributions obtained by the selected 20 models in terms of generalization and uncertainty quantification.
In Table 1, we report the means and standard errors of the performance measures over the 20 repeated experiments. As performance measures, we consider the coverage probability of the 95% predictive interval (Coverage), the root mean square error (RMSE) of the Bayes estimator, the negative log-likelihood (NLL), and the continuous ranked probability score (CRPS) (Gneiting and Raftery, 2007) on test data.
The mBNN clearly outperforms the other two competitors with respect to all four measures. That is, the mBNN is good not only at estimating the regression function but also at quantifying uncertainty. It is notable that NS-VI performs very badly, which suggests that a full Bayesian analysis of DNNs is essential. More detailed results, such as the sparsity and the sensitivity to hyper-parameter selection, are provided in Appendix E.
### Application to the Bayesian structural time series model
Bayesian structural time series (BSTS) models (Scott and Varian, 2014; Qiu et al., 2018) provide a useful tool for time series forecasting, nowcasting, inferring causal relationships and anomaly detection (Brodersen et al., 2015; Feng and Tian, 2021). The key of BSTS is the state space model, which is given as
\[\begin{aligned} y_{t} &= \mu_{t}+\boldsymbol{\beta}^{\top}\boldsymbol{x}_{t}+\epsilon_{t}, &\quad \epsilon_{t} &\sim \mathcal{N}(0,\sigma_{e}^{2}),\\ \mu_{t+1} &= r\mu_{t}+\eta_{t}, &\quad \eta_{t} &\sim \mathcal{N}(0,\sigma_{\eta}^{2}),\\ \mu_{0} &\sim N(a_{0},\sigma_{0}^{2}), \end{aligned}\]
where \(\boldsymbol{x}_{t}\in\mathbb{R}^{d}\), \(y_{t}\in\mathbb{R}\) and \(\mu_{t}\in\mathbb{R}\) denote the observed input variables, the output variable and the unobserved local trend at time \(t\), respectively. For inference and prediction, an MCMC algorithm based on the Kalman filter (Welch et al., 1995; Durbin and Koopman, 2002) is mainly used.
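For concreteness, simulating from this state-space model takes a few lines (a sketch with illustrative parameter defaults; the nonlinear variant discussed next replaces the linear term `X[t] @ beta` with a (m)DNN):

```python
import numpy as np

def simulate_bsts(X, beta, r=1.0, sigma_e=1.0, sigma_eta=0.1,
                  a0=0.0, sigma0=1.0, rng=None):
    """Simulate y_1..y_T from the local-trend BSTS model above."""
    rng = np.random.default_rng() if rng is None else rng
    mu = rng.normal(a0, sigma0)                    # mu_0 ~ N(a0, sigma0^2)
    y = np.empty(len(X))
    for t in range(len(X)):
        y[t] = mu + X[t] @ beta + rng.normal(0.0, sigma_e)
        mu = r * mu + rng.normal(0.0, sigma_eta)   # local trend update
    return y
```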
In many cases, the linear component \(\boldsymbol{\beta}^{\top}\boldsymbol{x}_{t}\) may be insufficient to explain complicated relations between inputs and outputs, and a DNN can be considered instead. In turn, the mBNN is a useful inferential tool for this nonlinear BSTS. To illustrate that the mBNN works well for BSTS, we analyze a real dataset consisting of daily search volumes of several keywords collected by a Korean search platform company. The dataset consists of daily search volumes of keywords in the year 2021 associated with a pre-specified product (e.g. shampoo). The aim is to predict the search volume of the pre-specified product based on the search volumes of other related keywords.
| **Dataset** | **Method** | **Coverage** | **RMSE** | **NLL** | **CRPS** |
| --- | --- | --- | --- | --- | --- |
| Boston | BNN | 0.912 (0.000) | 3.411 (0.145) | 2.726 (0.087) | 1.738 (0.052) |
| Boston | NS-VI | 0.746 (0.010) | 3.079 (0.179) | 4.242 (0.568) | 1.661 (0.069) |
| Boston | mBNN | **0.933 (0.007)** | **2.092 (0.143)** | **2.472 (0.085)** | **1.462 (0.055)** |
| Concrete | BNN | 0.948 (0.008) | 5.580 (0.178) | 3.901 (0.055) | 2.658 (0.072) |
| Concrete | NS-VI | 0.568 (0.011) | 5.0466 (0.149) | 9.713 (0.086) | 2.965 (0.088) |
| Concrete | mBNN | **0.912** (0.009) | **4.913 (0.180)** | **3.027** (0.046) | **2.628 (0.087)** |
| Energy | BNN | **0.948 (0.045)** | 0.591 (0.017) | 0.910 (0.025) | 3.522 (0.007) |
| Energy | NS-VI | 0.913 (0.006) | 1.322 (0.117) | 1.792 (0.121) | 0.720 (0.055) |
| Energy | mBNN | **0.945** (0.004) | **0.744** (0.015) | **6.760** (0.034) | **0.256 (0.006)** |
| Yacht | BNN | 0.977 (0.006) | 0.575 (0.048) | 1.011 (0.056) | 0.332 (0.013) |
| Yacht | NS-VI | 0.932 (0.011) | 1.842 (0.096) | 1.915 (0.063) | 0.969 (0.042) |
| Yacht | mBNN | **0.953** (0.008) | **6.664** (0.044) | **0.932** (0.062) | **0.318** (0.015) |

Table 1: **Real dataset.** Performance comparison of BNN, NS-VI and the mBNN on UCI datasets. The boldface numbers are the best among the three methods.
Figure 2: **Daily search volume dataset.** Predictive distributions of the BSTS model using the linear, BNN and mBNN regression components. X-axis and y-axis represent time (in days) and search volumes (in thousands), respectively. For each method, the means of the predictive distributions and 95% predictive intervals are shown as the blue line and shaded area, respectively. The observed values are plotted by the red line.
We apply three BSTS models corresponding to three regression components: linear, BNN and the mBNN. For BNN and the mBNN, a two-hidden-layer MLP with layer sizes (100, 100) is used. The predictive intervals, together with the NLL and RMSE values, on the data from \(t=241\) to \(t=365\), obtained from the posterior distribution inferred on the data from \(t=1\) to \(t=240\), are presented in Figure 2. It is clearly observed that the mBNN is superior in both nowcasting ability and uncertainty quantification. It is interesting that the predictive intervals of the linear and BNN models are much wider than those of the mBNN, which amply indicates that the choice of an appropriate DNN architecture is crucial for desirable uncertainty quantification. More details about the dataset and methodology, and additional experimental results, are provided in Appendix D.3.
### Extension to convolution neural network
We also extend the mBNN to the masked Bayesian convolutional neural network (mBCNN) for image datasets. We construct a masked CNN (mCNN) by adding masking vectors to the CNN model; see Appendix C.2 for details of the mBCNN.
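As a rough illustration of the construction, the PyTorch sketch below shows one way a masking vector can gate the output channels of a convolutional layer. The exact parameterisation and prior of the mBCNN are those of Appendix C.2; `MaskedConv2d` here is only a schematic stand-in with names of our choosing.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Module):
    """Convolution whose output channels are switched on/off by a binary mask."""

    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        # Binary mask over output channels. It is updated by the birth/death
        # MCMC moves rather than by gradient descent, hence a buffer.
        self.register_buffer("mask", torch.ones(out_ch))

    def forward(self, x):
        out = self.conv(x)
        return out * self.mask.view(1, -1, 1, 1)  # zero out inactive channels
```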
We evaluate the compression ability of the mBCNN on image datasets. For comparison, we use node-sparse versions of approximate Bayesian methodologies: NS-VI, node-sparse deep ensemble (NS-Ens) and node-sparse MC-dropout (NS-MC). Detailed descriptions of the competitors are provided in Appendix D.4. For each method, the ResNet18 (He et al., 2016) architecture is used. For a fair comparison, we use five networks for inference in all the methods. We repeat the experiments 3 times with different seeds.
In Table 2, we report the means and standard errors of the performance measures over the repeated experiments. As performance measures, the accuracy of the Bayes estimator, NLL and the expected calibration error (ECE) (Kumar et al., 2019) on test data are considered. In addition, we report inferential costs (FLOPs) and the numbers of nonzero parameters (Capacity) relative to the non-sparse model. The results in Table 2 show that the mBCNN provides better generalization and uncertainty quantification with less inferential cost and model capacity than the other competitors. That is, the mBCNN is a useful tool for compressing complex DNNs without much hampering uncertainty quantification.
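For reference, ECE measures the bin-weighted gap between predicted confidence and empirical accuracy. The sketch below is a standard equal-width-bin implementation; the number of bins is our assumption, not a detail taken from the paper.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE over equal-width confidence bins.

    probs: (N, K) predictive probabilities; labels: (N,) true class indices.
    """
    conf = probs.max(axis=1)                  # confidence of the top prediction
    acc = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(acc[in_bin].mean() - conf[in_bin].mean())
    return ece
```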
### Ablation study: Efficiency of the proposal
To illustrate the efficiency of our proposal distribution in Algorithm 1, we conduct a comparative experiment. As an alternative to the proposal distribution with selection probability (10) for _birth_ and (11) for _death_, we consider the following candidates for the selection probability of either _birth_ or _death_:
\[\mathbf{Q}_{u} \propto\mathbb{I}(\mathbf{m}_{j}^{(l)}=u), \tag{13}\] \[\mathbf{Q}_{u} \propto\mathbb{I}(\mathbf{m}_{j}^{(l)}=u)\exp\left(-|\nabla l(\mathbf{M})|/2\right),\] (14) \[\mathbf{Q}_{u} \propto\mathbb{I}(\mathbf{m}_{j}^{(l)}=u)\exp\left((1-2u)\nabla l(\mathbf{M})/2\right). \tag{15}\]
While (13) assigns the same selection probability to each active (or inactive) node, (14) and (15) assign proposal probabilities depending on the role of each node. The proposal (14) is devised to select less sensitive nodes more often by assigning proposal probabilities reciprocally proportional to \(|\nabla l(\mathbf{M})|\), and (15) uses a linear approximation of \(l\) in (9) (Grathwohl et al., 2021).
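The three candidates translate directly into code. The following NumPy sketch is an illustrative implementation, assuming `grad_l` holds the entries of \(\nabla l(\mathbf{M})\) for the nodes of one layer and `mask` the corresponding binary mask entries; the names and shapes are ours.

```python
import numpy as np

def selection_probs(mask, grad_l, move, candidate):
    """Normalised selection probabilities over the nodes of one layer.

    mask:   (n,) binary mask entries m_j^{(l)}.
    grad_l: (n,) gradient of l(M) with respect to the mask entries.
    move:   'birth' selects among inactive nodes (u=0), 'death' among active (u=1).
    """
    u = 0.0 if move == "birth" else 1.0
    eligible = (mask == u).astype(float)             # indicator I(m_j^{(l)} = u)
    if candidate == 13:                              # uniform, eq. (13)
        q = eligible
    elif candidate == 14:                            # less sensitive nodes, eq. (14)
        q = eligible * np.exp(-np.abs(grad_l) / 2.0)
    else:                                            # linear approximation, eq. (15)
        q = eligible * np.exp((1.0 - 2.0 * u) * grad_l / 2.0)
    return q / q.sum()
```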
We compare the convergence speeds of the MCMC algorithm under the 9 proposal distributions given by all \(3^{2}\) combinations of the three candidates (13), (14) and (15) for the _birth_ move and for the _death_ move. Figure 3 presents how the ratios of activated nodes relative to the largest DNN decrease in the burn-in phase of the MCMC algorithm. Our proposal (the solid red line) eliminates unnecessary nodes fastest. Note that the proposals involving the linear approximation of \(l(\mathbf{M})\) do not work well, which suggests that DNNs are not smooth enough to be approximated linearly.
\begin{table}
\begin{tabular}{c|c|c c c c} \hline **Dataset** & **Measure** & **NS-VI** & **NS-Ens** & **NS-MC** & **mBCNN** \\ \hline \multirow{5}{*}{CIFAR10} & ACC & 0.9606(0.002) & 0.926(0.003) & 0.918(0.002) & **0.932(0.001)** \\ & NLL & 0.298(0.004) & 0.255(0.011) & 0.525(0.006) & **0.220**(0.005) \\ & ECE & 0.009(0.001) & 0.012(0.002) & 0.020(0.001) & **0.008**(0.001) \\ & FLOPs & 4.485(0.076)\% & 22.45(1.647)\% & 41.16(0.009)\% & **12.26**(0.06)\% \\ & Capacity & 23.24(1.56)\% & 12.78(0.75)\% & 41.07(0.00)\% & **3.47**(0.03)\% \\ \hline \multirow{5}{*}{CIFAR100} & ACC & 0.600(0.004) & 0.735(0.003) & 0.679(0.002) & **0.737**(0.001) \\ & NLL & 1.977(0.039) & 1.076(0.014) & 2.792(0.061) & **1.064**(0.008) \\ & ECE & 0.006(0.000) & **0.002**(0.000) & 0.006(0.000) & **0.002**(0.000) \\ & FLOPs & 37.38(0.61)\% & 27.02(2.30)\% & 53.50(0.00)\% & **18.34**(0.27)\% \\ & Capacity & 41.61(0.49)\% & 20.08(1.15)\% & 53.45(0.00)\% & **12.21**(0.07)\% \\ \hline \end{tabular}
\end{table}
Table 2: **Image datasets.** Performance comparison of NS-VI, NS-Ens, NS-MC and mBCNN on image datasets. When inferring the posterior of mBCNN, we use SGLD (Welling & Teh, 2011).
Figure 3: **Efficiency of the proposal.** The ratios of activated nodes (y-axis) relative to the largest DNN are drawn as the MCMC algorithms iterate in the burn-in period (x-axis). The solid, dashed and dotted lines correspond to (13), (14) and (15) for _birth_, respectively, while the blue, red and green lines correspond to (13), (14) and (15) for _death_, respectively. The number of activated nodes under our proposal distribution (the solid red line) decreases much faster than under the other proposals.
## 7 Discussion
We have proposed the mBNN, which searches for a DNN with an appropriate complexity. We prove theoretical optimality of the mBNN and develop an efficient MCMC algorithm. Through extensive numerical studies, we illustrate that the proposed BNN discovers well-condensed DNN architectures with better prediction accuracy and uncertainty quantification compared to large DNNs.
A node-sparse network can be considered as a dense network with a smaller width, because all edges connected to surviving nodes are active. This property contrasts sharply with edge-sparse networks. A key difference between a node-sparse network and a dense network with a smaller width is that the widths of the layers can vary for node-sparse networks, while they must be fixed in advance for dense networks. In practice, it would be difficult to find the optimal width before analyzing the data. The mBNN is a tool to find the optimal width data-adaptively.
We do not claim that the proposal distribution in our MCMC algorithm is optimal; there may be more efficient proposals. In particular, more data-adaptive proposals for _birth_ would be possible, which we leave as future work.
Condensing the posterior distribution is also interesting. We have to employ multiple DNNs in the prediction phase, which can be expensive, so summarizing multiple posterior samples into a single random DNN would be useful. Shin et al. (2021) propose a similar method for bootstrapping, which could be modified for the mBNN.
## Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C3A0100355014), the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [No. 2022-0-00184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics] and an INHA UNIVERSITY Research Grant.
|
2306.02325 | Random Feedback Alignment Algorithms to train Neural Networks: Why do
they Align? | Feedback alignment algorithms are an alternative to backpropagation to train
neural networks, whereby some of the partial derivatives that are required to
compute the gradient are replaced by random terms. This essentially transforms
the update rule into a random walk in weight space. Surprisingly, learning
still works with those algorithms, including training of deep neural networks.
This is generally attributed to an alignment of the update of the random walker
with the true gradient - the eponymous gradient alignment -- which drives an
approximate gradient descend. The mechanism that leads to this alignment
remains unclear, however. In this paper, we use mathematical reasoning and
simulations to investigate gradient alignment. We observe that the feedback
alignment update rule has fixed points, which correspond to extrema of the loss
function. We show that gradient alignment is a stability criterion for those
fixed points. It is only a necessary criterion for algorithm performance.
Experimentally, we demonstrate that high levels of gradient alignment can lead
to poor algorithm performance and that the alignment is not always driving the
gradient descent. | Dominique Chu, Florian Bacho | 2023-06-04T10:50:13Z | http://arxiv.org/abs/2306.02325v1 | # Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align?
###### Abstract
Feedback alignment algorithms are an alternative to backpropagation to train neural networks, whereby some of the partial derivatives that are required to compute the gradient are replaced by random terms. This essentially transforms the update rule into a random walk in weight space. Surprisingly, learning still works with those algorithms, including training of deep neural networks. This is generally attributed to an alignment of the update of the random walker with the true gradient -- the eponymous gradient alignment -- which drives an approximate gradient descent. The mechanism that leads to this alignment remains unclear, however. In this paper, we use mathematical reasoning and simulations to investigate gradient alignment. We observe that the feedback alignment update rule has fixed points, which correspond to extrema of the loss function. We show that gradient alignment is a stability criterion for those fixed points. It is only a necessary criterion for algorithm performance. Experimentally, we demonstrate that high levels of gradient alignment can lead to poor algorithm performance and that the alignment is not always driving the gradient descent.
keywords: Neural networks, Feedback alignment, Random walk
## 1 Introduction
The backpropagation algorithm (BP) [1] underpins a good part of modern neural network (NN) based AI. BP-based training algorithms continue to be the state of the art in many areas of machine learning ranging from benchmark problems such as the MNIST dataset[2] to the most recent transformer-based architectures [3]. While its success is undeniable, BP has some disadvantages. The main one is that BP is computationally expensive. It is so in two ways. Firstly, BP has high requirements on pure processing power. It also requires sequential processing of layers during both the forward and backward pass, limiting its scope for parallelisation. This is sometimes called _backward locking_[4].
Secondly, BP is biologically implausible. One issue is that for every neuronal feedforward connection, BP would require neurons to also have a symmetric feedback connection with the same weight. This has never been observed. From a purely machine learning point of view, the lack of biological plausibility may not be too concerning, since the aim of applied AI is more often performance, rather than neuroscientific realism. However, there is a sense in which biological plausibility becomes a real concern after all: The update of any particular connection weight in a neural network requires global information about the entire network. This entails intense data processing needs [5], which in turn leads to high energy consumption [6; 7]. The electricity consumption required for the training of large scale NNs is a barrier to adoption and environmentally unsustainable and is widely recognised as problematic [8; 9]. BP is also not compatible with neuromorphic hardware platforms [10], such as Loihi [11] or SpiNNaker [12].
In the light of this, there has been some recent interest in alternatives to BP that alleviate these issues [13]. One particularly intriguing example are _random feedback alignment_ (FA) algorithms [14]. The basic FA algorithm is just BP with the symmetric feedback weights replaced by a randomly chosen, but fixed, feedback matrix. A variant of FA is _direct feedback alignment_ (DFA) [15], which bypasses backpropagation through layers and transmits the error directly to weights from the output layer via appropriately chosen feedback matrices. This enables layer-wise parallel updates of NNs. Furthermore, training no longer requires global knowledge of the entire network, which makes it amenable to implementation on neuromorphic hardware. FA and DFA have been found to perform surprisingly well on a number of benchmark problems [16, 17]. Recently, it has been reported that they even work on large-scale architectures such as transformers [18], often reaching performances that are comparable, albeit not exceeding, those of BP-based algorithms. Both algorithms do not work well on convolutional neural networks [18].
FA algorithms replace partial derivatives in the gradient computation by random matrices. Mathematically, the resulting update will no longer be a gradient of the loss function, but must be expected to be orthogonal to the gradient. It is therefore, at a first glance, surprising that DFA and FA work at all. A key insight into why they work was already given in the original paper by Lillicrap [14] who showed that the update direction of FA is not orthogonal to the gradient after all. They observed so-called _weight alignment_, whereby the weights of the network align with (i.e. point into approximately the same direction as) the feedback matrices and _gradient alignment_ where the updates of the FA algorithm align with the gradient as computed by BP. They conjectured that this alignment drives the approximate gradient descent of FA.
A mechanism that could lead to this alignment was suggested by Refinetti and co-workers [19]. They modelled a linear two-layer network using a student-teacher setup based on an approach by Saad and Solla [20]. This showed that, at least in their setup, when starting from initially zero weights, the weight update is in the direction of the feedback matrix, leading to weight alignment and consequently gradient alignment. A corollary of their results is the prediction that alignment is particularly strong when the weights are initially vanishing. Another important theoretical contribution is by Nokland [15] who formulated a stability criterion for DFA.
The above results were obtained using mathematically rigorous methods, but also rely on restrictive simplifying assumptions (e.g. linear networks in a student-teacher setup), which may or may not be relevant for realistic NNs. There is therefore a need to understand how FA operates in unrestricted NN, and whether the insights derived from simplified setups remain valid. The aim of the present contribution is to shed more light onto why FA works. To do this, we will complement existing approaches and view FA as a random walk [21], or more specifically a spatially inhomogeneous random walk in continuous weight space where the distribution of jump lengths and directions varies according to the position of the walker. In the present case, the update is entirely determined by the FA update rule and the distribution of training examples. The latter acts as the source of randomness.
We will show below that across weight space there are particular points, that is specific choices of weights, where the jump length vanishes. In a slight abuse of notation, we will refer to those as _fixed points_ of the random walk. As will become clear, these correspond to local extrema of the loss function, and as such correspond to valid solutions of the BP algorithm. If the random walker landed exactly on one of those, then it would remain there. However, typically these fixed points are not stable under the FA update rule, that is they are not attractors of the random walker. In this case, a walker initialised in the neighbourhood of the fixed point would move away from the fixed point. As one of the main contributions of this paper we will show that gradient alignment is the condition for fixed points to be stable. This stability criterion is different from the one derived by Nokland [15] who showed that under certain conditions gradient alignment can lead to loss minimisation. We show here that feedback alignment **is** the stability criterion to first order approximation and that it can be derived in a general way without any simplifying assumptions.
Furthermore, we will also show that gradient alignment, while necessary for FA to find good solutions, is not a sufficient criterion. Based on simulation results, we will conjecture that alignment is not driving the approximate gradient descent, but rather is a side-product of a random walk that is attracted by local extrema of the loss function. Finally, based on extensive simulations, we will also propose a model of how NN learning under FA works.
## 2 Results
### Notation and basic setup
We will start by introducing the notation and the basic setup on which the remainder of this paper is based. Throughout, we will consider a feedforward neural network (multi-layer perceptron) parametrised by some weights \(\mathbf{w}\). The network takes the vectorised input \(\mathbf{x}\) and returns the output vector \(\mathbf{m}(\mathbf{x};\mathbf{w})\). When the input is irrelevant, we will use the shorthand notation \(\mathbf{m}(\mathbf{w})\) for the neural network. We consider a network of \(L\) layers, where each layer \(1\leq l\leq L\) comprises \(n_{l}\) artificial neurons, whose outputs are scalar non-linear functions \(f_{i}^{(l)}(\cdot)\), where the index \(1\leq i\leq n_{l}\) labels the neuron to which the output belongs. The argument to those activation functions is the pre-activation function
\[h_{j}^{(l)}:=\sum_{i=1}^{n_{l-1}}w_{ji}^{(l)}f_{i}^{(l-1)}\]
with \(w_{ji}^{(l)}\in\mathbb{R}\) denoting the parameters (or "weights") of \(h_{j}^{(l)}\). For convenience, we will write \(x_{i}^{(l)}:=f_{i}^{(l)}\), and in particular \(f_{i}^{(0)}:=x_{i}\) is the input to the network. Throughout this manuscript, we denote the loss function by \(\mathcal{L}(\mathbf{m}(\mathbf{x}))\) and assume that it is to be minimised via gradient descent, although all our conclusions remain valid for gradient ascent problems.
Finally, we define the _alignment measure_ of two vectors or two matrices \(\mathbf{a}\) and \(\mathbf{b}\). In the case of two matrices, this is computed by flattening the matrices in some way, for example by stacking the columns to obtain \(\mathbf{a}^{\prime}\) and \(\mathbf{b}^{\prime}\). The alignment measure is then computed as the inner product of the vectors divided by their norms,
\[\frac{\mathbf{a}^{\prime}\cdot\mathbf{b}^{\prime}}{\|\mathbf{a}^{\prime}\|\| \mathbf{b}^{\prime}\|}.\] (alignment measure)
The maximal value of the alignment is \(1\). This can be interpreted as \(\mathbf{a}^{\prime}\) and \(\mathbf{b}^{\prime}\) being completely parallel. The minimal value is \(-1\), which indicates that they are anti-parallel. In high dimensional spaces, two randomly chosen matrices/vectors will typically have an alignment of \(0\), indicating that they are orthogonal to one another.
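In code, the alignment measure is simply the cosine similarity of the flattened arrays; a minimal NumPy sketch:

```python
import numpy as np

def alignment(a, b):
    """Alignment measure: cosine of the angle between flattened a and b."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```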
### BP and FA algorithms
Using this notation, we can formulate the BP update rule for layer \(l\) of a feedforward multi-layer perceptron as
\[\Delta^{\text{BP}}w_{pq}^{(l)}:=\frac{\partial\mathcal{L}}{\partial f_{i}^{(L) }}\frac{\partial f_{i}^{(L)}}{\partial h_{j}^{(L)}}\frac{\partial h_{j}^{(L)}} {\partial f_{l}^{(L-1)}}\frac{\partial f_{l}^{(L-1)}}{\cdots}\cdots\frac{ \cdots}{\partial f_{k}^{(l)}}\frac{\partial f_{k}^{(l)}}{\partial h_{s}^{(l)} }\frac{\partial h_{s}^{(l)}}{\partial w_{pq}^{(l)}}. \tag{1}\]
Here (and in the following), we use the convention that repeated indices are summed over, that is \(a_{i}b_{i}:=\sum_{i}a_{i}b_{i}\); note that this convention does not apply to the superscripts in parenthesis that indicate the layer.
Equation 1 can be evaluated by noting that the function \(f_{i}^{(l)}\) only depends on \(h_{i}^{(l)}\), and furthermore, for the common type of neural network, \(\frac{\partial h_{i}^{(l)}}{\partial w_{jk}^{(l)}}=\delta_{ij}f_{k}^{(l-1)}\), where \(\delta_{ij}\) is \(1\) if \(i=j\) and \(0\) otherwise. Thus, eq. 1 reduces to
\[\Delta w_{pq}^{(l)} =\partial\mathcal{L}_{i}\widetilde{B}_{ik}^{(l)}\frac{\partial f_ {k}^{(l)}}{\partial h_{s}^{(l)}}\frac{\partial h_{s}^{(l)}}{\partial w_{pq}^{ (l)}}\] \[=\partial\mathcal{L}_{i}\widetilde{B}_{ik}^{(l)}\partial f_{ks}^{ (l)}\delta_{ps}f_{q}^{(l-1)}\] \[=\partial\mathcal{L}_{i}\widetilde{B}_{ik}^{(l)}\partial f_{kp}^{ (l)}f_{q}^{(l-1)}. \tag{2}\]
Here we abbreviated the partial derivative of the loss by \(\partial\mathcal{L}_{i}\), \(\partial f_{ks}^{(l)}:=\partial f_{k}^{(l)}/\partial h_{s}^{(l)}\), and \(\widetilde{B}_{ik}^{(l)}\) is a shorthand for the middle terms of the chain rule of eq. 1. Note that \(\partial f_{ks}^{(l)}\) is zero for \(k\neq s\).
FA is the same as BP, except that the terms \(\partial h_{i}^{(\cdot)}/\partial f_{j}^{(\cdot)}\) appearing in \(\widetilde{B}_{ik}^{(l)}\) are replaced by randomly chosen (but fixed) numbers \(R_{ij}\), drawn from some user-determined distribution. This leads to the partially random feedback matrices \(\mathbf{B}\) with elements \(B_{ik}^{(l)}\). The _FA pseudo-gradient_ in layer \(l\) is defined by
\[\Delta^{\text{FA}}w_{pq}^{(l)}:=\partial\mathcal{L}_{i}B_{ik}^{(l)}\partial f_{kp}^{(l)}x_{q}^{(l-1)}. \tag{3}\]
Note that the rhs of the equation is not the gradient of any particular function.
DFA is the same as FA, except that the error is transmitted directly from the output layer to each layer \(l\) via its own randomly chosen but fixed matrix \(B^{(l)}\), rather than being propagated backwards through the intervening layers.
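To make eqs. (1)-(3) concrete, the following NumPy sketch computes the BP gradient and the FA pseudo-gradient for a two-layer \(\tanh\) network with a squared-error loss. The shapes, the loss, and the \(\{-1,1\}\) feedback matrix are illustrative choices; replacing `W2.T` in the backward pass by a matrix that stays fixed throughout training is exactly the substitution FA makes, and DFA would do the same with one fixed matrix per layer fed directly from the output error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4

W1 = 0.05 * rng.normal(size=(n_hid, n_in))
W2 = 0.05 * rng.normal(size=(n_out, n_hid))
B = rng.choice([-1.0, 1.0], size=(n_hid, n_out))   # fixed random feedback matrix

x = rng.normal(size=n_in)
y = rng.normal(size=n_out)

# Forward pass
a = np.tanh(W1 @ x)
e = W2 @ a - y                       # dL/dy_hat for the squared-error loss

# Output layer update: identical for BP and FA
dW2 = np.outer(e, a)

# Hidden layer: BP backpropagates through W2.T, FA through the fixed B
delta_bp = (W2.T @ e) * (1.0 - a**2)
delta_fa = (B @ e) * (1.0 - a**2)
dW1_bp = np.outer(delta_bp, x)
dW1_fa = np.outer(delta_fa, x)       # eq. (3): not the gradient of any function

# Gradient alignment of the two updates (cf. the stability criterion, eq. (8))
align = np.sum(dW1_bp * dW1_fa) / (np.linalg.norm(dW1_bp) * np.linalg.norm(dW1_fa))
```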
### Deriving the stability criterion for FA
The dynamics induced by the FA pseudo-gradient (eq. 3) constitutes a random walk in weight space. The randomness is introduced via the choice of the particular training example for the next weight update. Therefore, FA is not a gradient descent/ascent algorithm, except of course in the output layer, which follows a gradient to a local extremum of the loss function (in exactly the same way as BP).
We will now show that, as a consequence of this, the FA pseudo-gradient update shares with BP a number of fixed points in weight space. These correspond to local extrema of the loss function. Under certain conditions, FA will converge to those. In order to understand the difference between FA and BP, it is instructive to consider the update in the penultimate layer of the network (which has the simplest form). In the case of BP, this is:
\[\Delta^{\text{BP}}w_{pq}^{(L-1)}=\frac{\partial\mathcal{L}}{\partial f_{i}^{(L)}}\frac{\partial f_{i}^{(L)}}{\partial h_{j}^{(L)}}\frac{\partial h_{j}^{(L)}}{\partial f_{l}^{(L-1)}}\frac{\partial f_{l}^{(L-1)}}{\partial h_{k}^{(L-1)}}\frac{\partial h_{k}^{(L-1)}}{\partial w_{pq}^{(L-1)}}=\frac{\partial\mathcal{L}}{\partial f_{i}^{(L)}}\frac{\partial f_{i}^{(L)}}{\partial h_{j}^{(L)}}w_{jl}^{(L)}\frac{\partial f_{l}^{(L-1)}}{\partial h_{k}^{(L-1)}}\frac{\partial h_{k}^{(L-1)}}{\partial w_{pq}^{(L-1)}}\] (gradient)
The corresponding expression in the case of FA is then:
\[\Delta^{\text{FA}}w_{pq}^{(L-1)}=\frac{\partial\mathcal{L}}{\partial f_{i}^{(L)}}\frac{\partial f_{i}^{(L)}}{\partial h_{j}^{(L)}}R_{jl}^{(L)}\frac{\partial f_{l}^{(L-1)}}{\partial h_{k}^{(L-1)}}\frac{\partial h_{k}^{(L-1)}}{\partial w_{pq}^{(L-1)}}\] (FA pseudo-gradient)
where \(\mathbf{R}\) is a randomly chosen, but fixed matrix.
We see from the above equations that for BP the gradient vanishes at each layer when the derivative of the loss \(\left(\partial\mathcal{L}/\partial f_{i}^{(L)}\right)\left(\partial f_{i}^{(L)}/\partial h_{j}^{(L)}\right)\) vanishes for all indices \(j\). If the weight matrices are full rank, then this is indeed the only way for the gradient to vanish. We observe that the random matrix \(\mathbf{R}\) will be of maximal rank as long as its elements are chosen _iid_. Its nullspace vanishes, and local extrema of the loss function are therefore also points where the update of the FA pseudo-gradient vanishes. As long as \(\mathbf{R}\) is full rank, all fixed points of the FA pseudo-gradient will be local extrema of the loss function. As a consequence, the fixed points of FA are local extrema of the loss function and local attractors under BP. This means that the optimiser in the output layer pushes the entire network to fixed points of the FA pseudo-gradient. Note that these fixed points need not be attractors of the FA pseudo-gradient: when initialised close to such a fixed point, it is conceivable that FA moves away from its neighbourhood. Indeed, it can easily be seen that this happens (see fig. 1 for an example).
The question is now whether or not there are fixed points that are attractive under the FA pseudo-gradient. The condition for stability is that under the FA update \(\Delta^{\text{FA}}\mathbf{w}\) the loss does not increase, such that
\[\mathcal{L}\left(\mathbf{m}\left(\mathbf{w}\right)\right)-\mathcal{L}\left( \mathbf{m}\left(\mathbf{w}-\Delta^{\text{FA}}\mathbf{w}^{(l)}\right)\right) \geq 0. \tag{4}\]
Here, we suppressed the label superscripts for clarity and wrote \(\mathbf{w}\) instead of \(\mathbf{w}^{(l)}\). Assuming the weight update is a small one, we can now expand to first order and obtain
\[\mathbf{m}\left(\mathbf{w}-\Delta^{\text{FA}}\mathbf{w}\right)\approx\mathbf{ m}\left(\mathbf{w}\right)-\mathbf{m}^{\prime}\left(\mathbf{w}\right)\Delta^{ \text{FA}}\mathbf{w}, \tag{5}\]
where \(\mathbf{m}^{\prime}\) is a three-dimensional matrix with elements \(m^{\prime}_{ijk}:=\partial m_{i}/\partial w_{jk}\). We can now further expand the loss function to first order to obtain
\[\mathcal{L}\left(m\left(\mathbf{w}\right)\right)-\mathcal{L}\left(\mathbf{m} \left(\mathbf{w}-\Delta^{\mathrm{FA}}\mathbf{w}\right)\right)\approx\frac{ \partial\mathcal{L}}{\partial\mathbf{m}}\mathbf{m}^{\prime}(\mathbf{w})\Delta^ {\mathrm{FA}}\mathbf{w}. \tag{6}\]
Thus, the stability criterion becomes, to first-order approximation,
\[\frac{\partial\mathcal{L}}{\partial\mathbf{m}}\mathbf{m}^{\prime}(\mathbf{w}) \Delta^{\mathrm{FA}}\mathbf{w}\geq 0 \tag{7}\]
We observe that \(\frac{\partial\mathcal{L}}{\partial\mathbf{m}}\mathbf{m}^{\prime}(\mathbf{w})\) is just \(\Delta^{\mathrm{BP}}\mathbf{w}\), and \(\Delta^{\mathrm{FA}}\mathbf{w}\) is the FA pseudo-gradient. Furthermore, we see that the lhs of eq. 7 is, up to normalisation, the alignment measure of the FA update with the BP gradient. Thus, eq. 7 formulates a necessary condition for the stability of a local extremum with respect to the update by the FA pseudo-gradient. This stability criterion states that the alignment measure needs to be positive:
\[\frac{\Delta^{\mathrm{BP}}\mathbf{w}\cdot\Delta^{\mathrm{FA}}\mathbf{w}}{\left\|\Delta^{\mathrm{BP}}\mathbf{w}\right\|\left\|\Delta^{\mathrm{FA}}\mathbf{w}\right\|}\geq 0 \tag{8}\]
Put differently, we can now say that gradient alignment is a stability criterion for FA.
### A conjectured mechanism
The FA pseudo-gradient update rule defines a spatially inhomogeneous random walk, whose update steps depend both on the input \(\mathbf{x}\) to the network and on the current weights \(\mathbf{w}\). A priori, there can be no expectation that the walker moves along a trajectory that reduces the loss, because the FA pseudo-gradient is not a gradient, except in the output layer.
Based on the form of the FA pseudo-gradient, we can still conjecture a mechanism for the walker to find a local extremum of the loss function.
* Initially, the walker moves randomly through parameter space in the input and hidden layers.
* In the output layer, the error is minimised, driving the derivative of the loss function \(\partial\mathcal{L}_{i}\) to zero, and hence (as discussed above) towards a fixed point of the FA pseudo-gradient.
* The relevant fixed point may be unstable under the FA pseudo-gradient update rule. In this case, the FA random walker in the input and hidden layers will have a systematic drift, making it impossible for the gradient descent process in the output layer to settle on its local extremum.
* Convergence to a fixed point will only happen when the trajectory towards this fixed point is compatible with the stability criterion for all layers.

Figure 1: FA and BP were trained on the same sequence of examples from MNIST starting from the same initial conditions. The graphs show the weight alignment over time between FA and BP for the hidden and output layer. A value of 1 means that FA and BP have the same weights up to a constant factor. (a) Initially, the two algorithms diverge. After 25000 update steps the weights of the FA network were transferred to the network trained using BP. Following this, they remain aligned, demonstrating that the solution found by FA remains stable under BP. The inset shows the accuracy of both for reference. (b) Same, but at step 25000 FA takes the weights found by BP. This is not stable, and the two solutions diverge quickly following the swap. Note how the accuracy of FA drops immediately following the weight swap, indicating that it is repelled from the loss extremum.
There is no guarantee that such a compatible extremum is found. For example, the relevant loss extrema may not be accessible from the initial state, or there may be no extrema for which the stability criterion holds. It could also be that the random walk gets stuck in a region with near-zero update step sizes: in particular, the local derivative \(\partial f_{ij}^{(l)}\) vanishes as the weights go to infinity (see Appendix A for a discussion of this), which prevents the random walker from exploring the weight space.
### Experiments
In the following, we will try to understand the behaviour of the random walker experimentally by focussing on the particular example of a 3 layer feed-forward neural network trained on the MNIST problem. As we will see below, for this problem and our parameter settings, FA works reasonably well, which makes this a suitable example to gain some insights into how FA works. Note that in what follows the focus is on understanding FA. The NN used and the MNIST benchmark are chosen as illustrative examples. We are not trying to optimise the performance of FA on this task, nor are we attempting to give novel approaches to solve MNIST. Instead, the sole aim of the experiments below is to gain some new insights about FA by observing how it works. The example itself (MNIST) is convenient, because it is easy to solve, but the precise choice of example is to some extent irrelevant, and others could have been used to obtain the same insights.
#### 2.5.1 Alignment
There are two meanings to alignment in FA: (\(i\)) weight alignment and (\(ii\)) gradient alignment. The consensus view in the literature on FA is that FA (somehow) brings about weight alignment, which implies gradient alignment. The latter then drives FA towards extrema of the loss function. In that sense, FA approximates BP. In this section, we will show some examples where the approach to the local extremum is not driven by weight alignment, at least not during some stages of the simulations.
One of the key results by Refinetti _et al._ is that when starting from initially vanishing weights, the update of the weights is in the direction of the feedback matrices. If this is true and gradient alignment drives FA learning, then one would expect that small initial weights lead to larger initial alignment than non-vanishing initial weights, and that low initial weights lead to faster convergence of FA to a good solution. Our simulations are consistent with this. Fig. 2 shows that there is higher weight alignment (fig. 2c) and gradient alignment (fig. 2d) when the initial weights are small. Interestingly, weight alignment remains rather modest in comparison to the gradient alignment; within 10 epochs, it does not even reach a value of 0.1. Still, as expected, FA finds good solutions faster when starting with lower weights (fig. 2a). This conclusion also holds in the long run. Even after 50 epochs, the initial conditions matter for the achieved accuracy: the higher the initial weights, the lower the accuracy (see fig. 2b).
These results are consistent with the view that rapid feedback alignment during early updates is important for the eventual performance of the algorithm. A closer examination, however, reveals some additional complexities, which provide further insight. The first one is highlighted by fig. 3, which shows the norm of the gradient for the input, hidden and output layers after the first update step, so after the algorithm has been presented with the first example of the training set (figs. 3a-3c). The main observation to be made from these figures is that the norm reduces rapidly as the initial weights increase. If we take the norm as an indicator for the step size of the random walker, then this suggests that walkers initialised with high weights suffer from slow speed as a result of small update steps. High weights, therefore mean an effectively reduced learning rate at the beginning of learning. Note the dramatic decrease of the learning rate as the weights start to differ from 0.
Figure 2: The upper panel shows (a) the accuracy as a function of the update step for altogether 10 epochs and (b) the accuracy after 50 epochs as a function of the initial weights (see Methods for an explanation); an approximate linear dependence is discernible. The lower panel shows the (c) weight alignment and (d) gradient alignment for various initial weights as a function of updates. All results are averaged over 3 independent repetitions and use standard parameters (see Methods).
Figure 4: (a) We trained a network using BP. For the input and hidden layer we perturbed the gradient so that it had a prescribed angle relative to the actual gradient. The graph shows the accuracy after one epoch relative to BP. A value of 1 indicates that the perturbed network performs as well as BP. The red line indicates the performance of a network where only the last layer is trained. (b), (c): The accuracy after 5 updates against the mean gradient alignment for the (b) input and (c) hidden layer. The average is taken over 500 repetitions, and the error bars show the standard deviations. For comparison, the perturbed BP is also shown. Clearly, FA does better than the perturbed BP in these examples.
Figure 3: (a)-(c) The infinity norm of the gradient at the first update step as a function of the initial weight scale. (d)-(f) Same, but after 3 epochs. Note the different scale on the vertical axis, showing how the norm of the gradients has reduced over time. Each point corresponds to a single simulation.
This suggests a new explanation for the improved performance of networks initialised with low weights: FA with initially vanishing weights may perform better because the initial speed of exploration is faster, and hence accuracy can increase over fewer update steps. This means that gradient alignment is not necessarily the only reason why FA performs. At least, we have shown one example which suggests a different explanation. We hasten to add that the two explanations are not mutually exclusive.
This begs the question of whether gradient alignment drives accuracy, or whether gradient alignment is merely a by-product of the FA dynamics. This is best explored during the earliest stages of learning, before substantial alignment has formed. In order to investigate this, we first need to understand the relationship between gradient alignment and loss. To this end, we generated a baseline curve as follows: we used the BP algorithm to train the network for 1 epoch on the MNIST dataset. However, each time the gradient was computed in the hidden and input layer, we randomly perturbed it, such that the actual gradient used for updating the weights differed from the gradient determined by BP. We could then, for each experiment, determine the alignment between the true gradient and the perturbed gradient. We did this systematically in fig. 4a, which shows the accuracy after 1 epoch as a function of the average alignment. By design, this set of experiments isolates the effect of the gradient alignment, while all other details of the algorithm are left the same. The figure also shows, in red, the baseline of a multi-layer network where only the last layer is trained, whereas all other layers remain at the initial weights.
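A perturbed gradient with a prescribed alignment can be constructed by mixing the true gradient with a random direction orthogonal to it. The sketch below shows one way to do this; it is our reconstruction of the procedure, not the authors' code.

```python
import numpy as np

def perturb_gradient(g, target_alignment, rng):
    """Return a vector with the prescribed cosine alignment to g (same norm)."""
    g = np.ravel(g).astype(float)
    g_hat = g / np.linalg.norm(g)
    r = rng.normal(size=g.shape)
    r -= (r @ g_hat) * g_hat                  # project out the component along g
    r_hat = r / np.linalg.norm(r)
    c = float(target_alignment)               # must lie in [-1, 1]
    direction = c * g_hat + np.sqrt(1.0 - c**2) * r_hat
    return np.linalg.norm(g) * direction
```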
A number of observations can be made. Firstly, the accuracy of the network intersects the red line for an alignment of 0. In this case, the perturbed gradients are orthogonal to the actual gradients one would obtain from BP which means that weight updates do not have an overall drift either into the direction of better accuracy or away from it. Consistently, the accuracy of the network then corresponds to simulations where only the output layer is trained (indicated by the red line in fig. 4a). Further increasing the alignment, only improves the performance a little bit, reflecting the well known fact that training only the output layer leads to high accuracies in many cases. On the other hand, for negative alignments, the performance quickly drops to random guessing, which corresponds to an accuracy of 0.1. (Note that the effect of updating against the gradient is a randomisation of the performance, rather than guiding the network to an accuracy below 0.1, which would require updating the weights of the output layer against the gradient as well.) A further observation to be made from this is that the gradient descent process operating at the output layer cannot settle on a minimum when the weights in the lower layers have a systematic drift in one direction.
Fig. 4a shows how alignment (or rather misalignment) with the BP gradient impacts learning performance. We can now use this to see whether the FA performance is driven solely by the alignment of the pseudo-gradient or whether there is more to it. If FA performs exactly as well as the perturbed BP with the same alignment, then we know that gradient alignment is driving performance. If, on the other hand, it performs better, then there is some additional driver.
Fig. 4b gives some relevant insight. It shows the performance as a function of the average alignment for FA, gradient-perturbed BP, and a network where only the top layer is trained. Figs. 4b and 4c compare the performance of the three algorithms after 5 updates against the alignment of the input and hidden layer. This shows that, for similar alignment, FA does much better than the other two algorithms. This suggests two things: \((i)\) gradient alignment is not the only driver of the performance of FA, at least not during early stages; \((ii)\) the performance of FA during early stages is not entirely driven by the gradient descent in the output layer, but the updates of the input and hidden layer also add to performance.
#### 2.5.2 Alignment can reduce performance
So far, we have established that alignment between the gradient and pseudo-gradient is not driving performance during early stages of learning. We will now show that alignment is not sufficient for performance, even during later stages, and indeed can be outright detrimental.
Fig. 5 shows, as an example, a set of three different simulations of FA with identical hyper-parameters. The only difference between them is the initialisation of the weights. The blue line is just a standard simulation, with weights being initially drawn from a normal distribution before being scaled by 0.05. The green simulation is the same, but the signs of the initial weights were set equal to the entries of the corresponding feedback matrices. As a consequence, there is a large initial alignment between the weights and the feedback matrices. Finally, the red points show the results for a simulation where the initial weights
were set to be identical to the feedback matrices before being scaled by \(0.05\). For all three simulations, we drew the elements of the feedback matrices from a normal distribution, rather than from the set \(\{-1,1\}\). We did this so that the initial weights in the blue simulation are statistically indistinguishable from the other two simulations, while the initial alignment between the weights is different in the three cases. By construction, the blue simulation is unaligned initially, the red simulation is perfectly aligned and the green simulation is somewhere in between those two cases.
We find from the simulations presented in fig. 5, as expected, that good weight alignment translates to high gradient alignment. The key message of the graph is that the initially unaligned simulation performs best amongst the three runs. While we are not claiming that high gradient alignment is always detrimental, from this single example we can still draw two conclusions: (_i_) weight alignment is not a necessary condition for the performance of the FA algorithm; (_ii_) high gradient alignment is not sufficient for the performance of the FA algorithm.
While the performance of the red simulation is still good in the sense that it is apparently learning something, it is possible to construct examples of FA networks that have almost perfect alignment throughout, but learn nothing (data not shown). The simplest way to do this is to use feedback matrices that are initialised by randomly drawing elements from the set \(\{-1,1\}\), and to set the initial weights equal to the feedback matrices. These initial conditions are not conducive to algorithm performance, and the network does not train well. Altogether, we find that gradient alignment is not always sufficient to explain the performance of FA algorithms; we could show at least one example where other mechanisms are required.
## 3 Discussion
At a first glance, FA and DFA should not work. By replacing a key term in the update equation, the gradient descent of BP is effectively transformed into a random walk in weight space. The key observation made early on was that this random walk aligns with the "true" BP gradient. In the literature, this alignment is commonly assumed to be the driver of the performance of the FA algorithm. It remains unclear, however, why FA aligns. Existing mathematical models only cover special, simplified cases of neural networks. They suggest that the update dynamics of FA leads to weight alignment, which then implies gradient alignment; in that sense, FA approximates the true gradient. However, weight alignment is typically much weaker than gradient alignment, which suggests that some other explanation is required.
Our theoretical results suggest a somewhat different view: FA is not approximating BP at all, and indeed does not descend or ascend the gradient of a loss function or an approximation thereof. It is not "learning" in the sense one normally understands this term. Instead, it performs a random walk in weight space. It works based on the following conjectured mechanism: at the output layer, the gradient descent process drives the network as a whole towards a fixed point, corresponding to a vanishing gradient of the loss function.
Figure 5: Feedback alignment, with initial weights chosen at random (blue), randomly but with the sign of each element matching the corresponding entry of the feedback matrix (green), and exactly the same as the feedback matrix (red). We show (a) the angle between the true gradient and the feedback gradient of the hidden layer, (b) the alignment between the weights of the hidden layer and the feedback matrix, and (c) the accuracy over time. There is no trend between higher alignment and better accuracy, or a better increase of accuracy. Each point represents a value taken after an update step. Altogether, the simulations here represent 10 epochs.
Many of those fixed points are unstable for the random walker, and the updates in the hidden and input layers will drive the network away from the fixed point, until a fixed point is found for which the stability criterion is valid. Then, the network will converge towards this fixed point.
There is no guarantee that FA finds extrema that are compatible with the alignment criterion. Failure to do so could be either because there are no compatible extrema, or because FA is initialised in a part of parameter space from which it cannot find a route to compatible fixed points. The latter scenario could be realised when extrema of the loss function are sparse in parameter space, or when the weights are initialised in an area with small update steps, for example for large initial weights. The often-cited failure of FA for convolutional neural networks is likely a consequence of the sparseness of loss extrema for those networks.
Throughout this article we concentrated on FA, but clearly the conclusions can be transferred directly to DFA. The only difference between FA and DFA is that in the latter each layer performs an independent walk, whereas in FA the training process is a single random walker in a higher dimensional space. The basic mechanism of how DFA finds extrema remains the same as in the case of FA.
Some open questions remain. The first one relates to the distribution of local extrema of the loss function in parameter space. In particular, it may be that for certain types of problems there are conditions that guarantee that FA and DFA find local extrema or alternatively, that they do not find such extrema (as seems to be the case for convolutional neural networks). There is also a lack of theoretical understanding of spatially inhomogeneous random walks. In order to come to a complete theoretical description of FA, we need a sound justification of how and under which conditions random walks approach fixed points.
## 4 Methods
Unless otherwise stated, we used a feed-forward multi-layer perceptron with input/hidden layers of \(700/1000\) neurons. The size of the network was chosen to be large while still being fast to execute. Our results are not sensitive to variations of the size of the network, although classification performance clearly is. Throughout, we used \(\tanh\) as the activation function; we also experimented with relu, which led to qualitatively the same results (data not shown). For the final layer we used the softmax function, and as the loss function we used the cross-entropy. The batch size was chosen to be \(100\) and the learning rate was set to \(0.05\). Again, our results are not sensitive to those parameters.
Unless stated otherwise, feedback matrices were chosen randomly by drawing matrix elements from the set \(\{-1,1\}\). We found FA not to be overly sensitive to the particular choice of this set. However, we found algorithm performance to depend on it to some extent. The particular choice we made gives good performance, but we made no attempt to choose the optimal one.
For all layers, initial weights were drawn from a normal distribution and then scaled with a weight scaling factor, resulting in both negative and positive weights. If the factor is zero, then all weights were initially zero. The larger the factor, the larger the (absolute value of) initial weights.
All simulations were done using Julia (1.8.5) Flux for gradient computations and network construction, and CUDA for simulations on GPUs. Throughout no optimisers were used. Weight updates were made by adding the gradient, scaled by the learning rate, to the weights. The networks were trained on the MNIST dataset as included with the Flux library. Whenever an accuracy is reported it was computed based on the test-set of the MNIST dataset.
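The setup above can be summarised in a few lines. Since the actual implementation is in Julia/Flux, the following NumPy sketch is only an illustrative translation of the stated configuration (layer sizes, \(\{-1,1\}\) feedback matrices, scaled-normal initial weights, plain update rule).

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 700, 1000, 10]   # MNIST input, input/hidden layers, 10 classes
weight_scale = 0.05                  # the "weight scaling factor" described above
lr, batch_size = 0.05, 100

# Initial weights: scaled standard-normal draws
weights = [weight_scale * rng.normal(size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

# Fixed feedback matrices with elements from {-1, 1}; each one replaces the
# transpose of the corresponding forward weight matrix in the backward pass
feedback = [rng.choice([-1.0, 1.0], size=(w.shape[1], w.shape[0]))
            for w in weights[1:]]

def update(weights, pseudo_grads):
    # No optimiser: the pseudo-gradient, scaled by the learning rate,
    # is subtracted directly from the weights
    return [w - lr * g for w, g in zip(weights, pseudo_grads)]
```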
|
2303.06945 | CoGANPPIS: A Coevolution-enhanced Global Attention Neural Network for
Protein-Protein Interaction Site Prediction | Protein-protein interactions are of great importance in biochemical
processes. Accurate prediction of protein-protein interaction sites (PPIs) is
crucial for our understanding of biological mechanism. Although numerous
approaches have been developed recently and achieved gratifying results, there
are still two limitations: (1) Most existing models have excavated a number of
useful input features, but failed to take coevolutionary features into account,
which could provide clues for inter-residue relationships; (2) The
attention-based models only allocate attention weights for neighboring
residues, instead of doing it globally, which may limit the model's prediction
performance since some residues being far away from the target residues might
also matter.
We propose a coevolution-enhanced global attention neural network, a
sequence-based deep learning model for PPIs prediction, called CoGANPPIS.
Specifically, CoGANPPIS utilizes three layers in parallel for feature
extraction: (1) Local-level representation aggregation layer, which aggregates
the neighboring residues' features as the local feature representation; (2)
Global-level representation learning layer, which employs a novel
coevolution-enhanced global attention mechanism to allocate attention weights
to all residues on the same protein sequences; (3) Coevolutionary information
learning layer, which applies CNN & pooling to coevolutionary information to
obtain the coevolutionary profile representation. Then, the three outputs are
concatenated and passed into several fully connected layers for the final
prediction. Extensive experiments on two benchmark datasets have been
conducted, demonstrating that our proposed model achieves the state-of-the-art
performance. | Jiaxing Guo, Xuening Zhu, Zixin Hu, Xiaoxi Hu | 2023-03-13T09:27:34Z | http://arxiv.org/abs/2303.06945v4 | CoGANPPIS: Coevolution-enhanced Global Attention Neural Network for Protein-Protein Interaction Site Prediction
###### Abstract
Protein-protein interactions are of great importance in biochemical processes. Accurate prediction of protein-protein interaction sites (PPIs) from protein sequences deepens our understanding of biological mechanisms and is crucial for new drug design. However, conventional experimental methods for PPIs prediction are costly and time-consuming, so many computational approaches, especially ML-based approaches, have been developed recently. Although these approaches have achieved gratifying results, there are still two limitations: (1) Most existing models have excavated a number of useful input features, but failed to take coevolutionary features into account, which could provide clues for inter-residue relationships and could be helpful for PPIs prediction; (2) The attention-based models only allocate attention weights to neighboring residues, instead of doing so globally, which may limit the model's prediction performance, since some residues far away from the target residues in the protein sequence might also matter.
We propose a coevolution-enhanced global attention neural network, a sequence-based deep learning model for PPIs prediction, called CoGANPPIS. Specifically, CoGANPPIS utilizes three layers in parallel for feature extraction: (1) a local-level representation aggregation layer, which aggregates the neighboring residues' features as the local feature representation, similar to previous studies; (2) a global-level representation learning layer, which employs a novel coevolution-enhanced global attention mechanism to allocate attention weights to all the residues on the same protein sequence; (3) a coevolutionary information learning layer, which applies CNN & pooling to coevolutionary information to obtain the coevolutionary profile representation. Then, the three outputs are concatenated and passed into several fully connected layers for the final prediction. Extensive experiments on two benchmark datasets have been conducted, demonstrating that our proposed model achieves state-of-the-art performance. The source code is publicly available at [https://github.com/Slam1423/CoGANPPIS_source_code](https://github.com/Slam1423/CoGANPPIS_source_code).
## 1 Introduction
Proteins participate in a variety of biological processes in organisms. They rarely act alone; instead, they usually carry out various functions by interacting with different kinds of molecules, such as DNA, lipids, carbohydrates, and other proteins (Branden and Tooze, 2012; Murray et al., 2009; Ardejani et al., 2017). The process of establishing physical contacts of high specificity between two or more protein molecules is known as protein-protein interaction, which plays an important role in many biochemical processes including immune response, muscle contraction, and signal transduction. Considering the high practical and research value of PPIs prediction, many approaches have been proposed so far. There are some conventional experimental
methods, such as two-hybrid screening, affinity purification, and intragenic complementation, commonly applied to identify PPIs (Westermarck et al., 2013; Terentive et al., 2009; Brettner and Masel, 2012; Smallwood et al., 2002). However, these experimental methods are costly and time-consuming, so more accurate and efficient computational predictors for PPIs are of great value to biologists.
With the rapid development of computer science, a lot of computational approaches, especially ML-based approaches, have been developed; they take protein sequences or structures as input and are known as sequence-based and structure-based methods, respectively (Hou et al., 2016). Although structure-based methods have achieved promising progress in recent years (Gainza et al., 2020; Yuan et al., 2022; Huang et al., 2023), they may cause problems for biological researchers since the number of proteins with available structures is limited. For example, AlphaFold2 has shown promising performance in protein structure prediction, but its effectiveness on newly-discovered and exotic proteins remains to be tested. At the same time, its requirement for computational resources could be too high for most researchers (Jumper et al., 2021). In contrast, sequence-based methods are more practical since protein sequences are easier to obtain with the noticeable development of high-throughput techniques.
Sequence-based methods can be classified into partner-specific and non-partner-specific PPIs prediction (Casadio et al., 2022); in this paper, we focus on the latter. Partner-specific sequence-based PPIs prediction aims to identify the interacting residue pairs of two given proteins, which is not covered in the present work.
Sequence-based methods can be further classified into 2 categories: traditional machine learning approaches and deep learning approaches. The commonly-used traditional machine learning approaches include SVM (Yan et al., 2004; Wang et al., 2006; Chen and Li, 2010; Chen et al., 2012; Porollo and Meller, 2007), Naive Bayes (Yan et al., 2004; Murakami and Mizuguchi, 2010), shallow neural network (Ofran and Rost, 2003, 2007), random forest (Chen and Jeong, 2009; Northey et al., 2018; Wang et al., 2019), and logistic regression (Dhole et al., 2014; Zhang and Kurgan, 2019). However, these methods cannot capture the relationships among different residues located on the same protein sequences since they treat every residue as an independent sample.
In recent years, due to the great success of deep learning in many fields such as computer vision, speech recognition and natural language processing, the models based on deep learning have also been used in PPIs prediction. Among these, DELPHI, a fine-tuned ensemble model combining recurrent neural networks and convolutional neural networks, showed significant improvement in PPIs prediction compared with traditional machine learning models (Li et al., 2021). DeepPPISP, a convolutional neural network-based model, achieved good outcomes by combining both local and global sequence contexts as input and processing them respectively (Zeng et al., 2020). On the basis of DeepPPISP, ACNN, an attention-based convolutional neural network, made a better understanding of the local environment of the target residues by giving different attention weights to the neighboring residues (Lu et al., 2021). HANPPIS improved the performance and interpretability by using a double-layer attention mechanism (Tang et al., 2021). Besides, ensnet_p, an ensemble model combining several neural net architectures, achieved stable and peak prediction accuracy (Stringer et al., 2022).
Conventional sequence-based input features can be roughly classified into 4 categories: raw sequence features, evolutionary information, residue physiochemical properties, and predicted structural features (Casadio et al., 2022). Raw sequence features refer to the features that can be straightly obtained by the protein sequences, the most commonly used of which are amino acid types. Evolutionary information usually refers to the position-specific scoring matrices (PSSM) as well as other conservative scores of the proteins, which are mainly calculated from multiple sequence alignments (MSA) and very informative for protein-related prediction tasks. Residue physiochemical properties (such as residue's charge and polarity) have been applied in many models in recent years, which can be obtained from databases or some specific predictors. Besides, in the absence of protein structure, the predicted structural features (such as hydrophobicity, secondary structure, disorder) can also be helpful.
Recently, another sequence-based feature, based on coevolutionary information, has been applied to another important protein-related problem, protein contact-map prediction, and has brought about significant performance improvements (Wang et al., 2017; Hanson et al., 2018; Li et al., 2021). These features are mainly obtained by direct coupling analysis (DCA) and quantify inter-residue coevolutionary relationships. Intuitively, a residue's properties and behaviours should be more similar to those of the residues closely related to it, which inspires us to introduce this feature into our model.
To evaluate the performance of our model, we compare it with seven other sequence-based models (PSIVER, ISIS, SPRINGS, DELPHI, DeepPPISP, ACNN and ensnet_p) on two benchmark datasets, Dset422 and Dset448. The experimental results show that our model achieves state-of-the-art performance for PPIs prediction. The main contributions of this paper are as follows:
(1) To the best of our knowledge, this is the first time coevolutionary information has been introduced as an input feature into a deep learning based PPIs prediction model, and we verify its usefulness by ablation analysis.
(2) We propose a novel coevolution-enhanced global attention mechanism for global-level representation learning, which allocates attention weights based on a better understanding of the whole protein sequences and the coevolutionary relationships among the residues.
(3) We conduct extensive experiments on two benchmark datasets, the results of which demonstrate that CoGANPPIS achieves state-of-the-art performance.
The rest of this paper is organized as follows: Section 2 introduces the datasets and the input features. Section 3 presents the architecture of our model, the experimental results, and the experimental analysis. Finally, Section 4 summarizes the discussion of this paper.
## 2 Materials and Methods
### Datasets
In this study, two benchmark datasets, Dset422 and Dset448, are used in the experiments. Dset422 consists of Dset72 (Murakami and Mizuguchi, 2010), Dset186, and Dset164 (Singh et al., 2014), whose protein sequences were collected from the Protein Data Bank (Sussman et al., 1999). The protein sequence homology is less than 25%, and an amino acid is defined as an interaction site if its absolute solvent accessibility is less than 1 Å\({}^{2}\) before and after binding with other proteins (Zeng et al., 2020). Dset448 is sourced from the BioLip database, where a residue is defined as an interaction site if the distance between one of its atoms and an atom of a given protein partner is less than 0.5 Å plus the sum of the van der Waals radii of the two atoms (Zhang and Kurgan, 2019). First, the protein sequences were mapped to the UniProt database to collect binding residues across different complexes. Then, they were clustered by Blastclust at 25% similarity, after which one protein was selected from each cluster to ensure that proteins in Dset448 share less than 25% similarity. In addition, a dataset of 4392 protein sequences, which we name Dset4392, was constructed in the PIPENN paper (Stringer et al., 2022). Its data also come from the BioLip database, with binding sites defined similarly to Dset448. We use it for pretraining.
For Dset422 and Dset448, we randomly divided the proteins into a training set (about 83%), a validation set (about 5%), and a test set (the remainder). Consequently, Dset422 has 352 proteins in the training set, 21 in the validation set, and 49 in the test set, while Dset448 has 373 proteins in the training set, 22 in the validation set, and 53 in the test set.
We also examine the distribution of sequence lengths in the three datasets. As shown in Table 1, only a small proportion of the protein sequences are longer than 500 residues. Hence, for the convenience of model training, we unify the sequence length to 500: if a protein sequence is longer than 500, we truncate it to 500; if it is shorter, we pad it to 500 with zeros.
| Length range | 1-100 | 101-200 | 201-300 | 301-400 | 401-500 | 500+ | Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dset422 | 85 (20.1%) | 176 (41.7%) | 68 (16.1%) | 56 (13.3%) | 23 (5.5%) | 14 (3.3%) | 422 (100.0%) |
| Dset448 | 34 (7.6%) | 127 (28.3%) | 134 (29.9%) | 95 (21.2%) | 38 (8.5%) | 20 (4.5%) | 448 (100.0%) |
| Dset4392 | 345 (7.9%) | 1266 (28.8%) | 969 (22.1%) | 716 (16.3%) | 511 (11.6%) | 585 (13.3%) | 4392 (100.0%) |

Table 1: Statistics of protein sequence lengths
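As a concrete illustration of the preprocessing described above, the following minimal Python sketch truncates or zero-pads a per-residue feature array; the `(length, feature_dim)` array layout is our assumption, not a detail prescribed in the paper.

```python
import numpy as np

MAX_LEN = 500  # unified sequence length used in this paper

def pad_or_truncate(features: np.ndarray, max_len: int = MAX_LEN) -> np.ndarray:
    """Truncate a (length, feature_dim) array to max_len rows, or zero-pad it."""
    length, dim = features.shape
    if length >= max_len:
        return features[:max_len]
    padded = np.zeros((max_len, dim), dtype=features.dtype)
    padded[:length] = features
    return padded
```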
### Input Features
The features commonly used in previous research, including raw sequence features, the position-specific scoring matrix, and predicted secondary structures, are applied in our model. In addition, we introduce coevolutionary information, which, to the best of our knowledge, is used here for the first time as an input feature in deep learning based PPIs prediction. These features are described in detail as follows.
#### 2.2.1 Raw sequence features
In this study, we include two raw sequence features, amino acid type and sequence length, in our model. Most proteins consist of 20 different amino acids; hence, we encode each amino acid residue as a 20-dimensional one-hot vector representing the amino acid type at its position. In addition, we use an integer representing the length of the sequence as a feature for each residue.
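A minimal sketch of these two raw features follows; appending the scalar length as a 21st column is an illustrative assumption about how the two features are combined, not a detail specified above.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def raw_sequence_features(sequence: str) -> np.ndarray:
    """Encode each residue as a 20-d one-hot vector plus the sequence length."""
    length = len(sequence)
    features = np.zeros((length, 21), dtype=np.float32)
    for pos, aa in enumerate(sequence):
        idx = AA_INDEX.get(aa)
        if idx is not None:           # non-standard residues stay all-zero
            features[pos, idx] = 1.0
        features[pos, 20] = length    # scalar sequence-length feature
    return features
```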
#### 2.2.2 Position-specific scoring matrix
Position-specific scoring matrix (PSSM) contains evolutionary information, which has been shown effective for PPIs prediction (cheol Jeong et al., 2010). We perform the PSI-BLAST algorithm on each input sequence against NCBI's non-redundant sequence database with three iterations and an E-value threshold of 0.001 to obtain its PSSM, where every amino acid residue on the underlying sequence is encoded as a vector with 20 elements.
#### 2.2.3 Predicted secondary structures
NetSurfP-3.0 is a tool for predicting structural properties from protein sequences (Høie et al., 2022). Here we use it to predict the relative solvent accessibility (RSA), accessible surface area (ASA), 3-state secondary structure, and 8-state secondary structure for each residue. Each amino acid residue is thus encoded as a vector with 13 elements, representing the predicted RSA, the predicted ASA, and the probabilities of the corresponding secondary structure states at that position.
#### 2.2.4 Coevolutionary information
Coevolutionary relationships between amino acid residues refer to the interdependent changes that occur in pairs of residues on the same protein sequence, which help maintain a protein's stability, function, and folding (De Juan et al., 2013). As mentioned earlier, coevolutionary input features brought about great performance improvements in protein contact-map prediction, inspiring us to apply them in this study.
Direct-coupling analysis (DCA) is one of the main computational approaches for capturing proteins' coevolutionary information. The key idea of DCA is to disentangle the direct pairwise couplings between each pair of amino acid residues on the same protein sequence. For each protein, DCA takes its multiple sequence alignment (MSA), obtained by BLASTP, as input and returns an \(N\times N\) matrix, where \(N\) is the length of the protein sequence. The \((i,j)\) element of this matrix is the direct coupling degree between the \(i\)th and \(j\)th residues on the sequence; the larger this value, the stronger the coevolutionary relationship between the two residues. For predicting whether an amino acid residue is an interaction site, we extract the column of the DCA matrix corresponding to the target residue as its coevolutionary information feature.
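A sketch of this extraction step, assuming the DCA matrix is available as a NumPy array; zero-padding the column to the unified length of 500 (mirroring the sequence preprocessing of Section 2.1) is our assumption.

```python
import numpy as np

def coevolution_feature(dca: np.ndarray, i: int, max_len: int = 500) -> np.ndarray:
    """Extract the i-th DCA column as the target residue's coevolutionary feature."""
    column = dca[:, i]
    feature = np.zeros(max_len, dtype=np.float32)
    n = min(len(column), max_len)
    feature[:n] = column[:n]   # truncate or zero-pad, as with the sequences
    return feature
```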
Three DCA algorithms are in common use: mpDCA (Weigt et al., 2009), mfDCA (Morcos et al., 2011), and plmDCA (Ekeberg et al., 2013). mpDCA uses a semi-heuristic message-passing approach, and its slow computation makes it difficult to apply to large-scale datasets. mfDCA uses a mean-field approach based on the maximum-entropy model, which greatly improves the computational speed. Building on mfDCA, plmDCA applies pseudo-likelihood to the Potts model and achieves higher accuracy than mfDCA. Based on this comparison, we use plmDCA to generate the DCA matrices in this study.
### Model Architecture
Figure 1 gives an overview of the proposed framework. First, we extract the local, global, and coevolutionary features of the primary protein sequence with three separate components, referred to in this paper as the local-level representation aggregation layer, the global-level representation learning layer, and the coevolutionary information learning layer. Each layer outputs a feature representation vector. We then concatenate the three feature representation vectors as the output of the feature extraction and pass the result into the prediction layer, consisting of four fully connected layers, for the final prediction of whether the target amino acid residue is an interaction site. We now introduce the three feature extraction layers in detail.
#### 2.3.1 Local-level representation aggregation layer
For each target residue, a sliding window of length \(2n+1\) is used to aggregate the features of the residue itself and its \(2n\) neighboring residues. For the \(i\)th residue on the protein sequence, we denote its local feature representation by \(h_{i}^{local}\).
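For illustration, a sketch of this window aggregation for a single residue is given below; zero-padding at the sequence boundaries is our assumption, and Section 3.1 later sets the window length to 7 (i.e. \(n=3\)).

```python
import numpy as np

def local_representation(per_residue: np.ndarray, i: int, n: int = 3) -> np.ndarray:
    """Concatenate the features of residue i and its 2n neighbours (window 2n+1)."""
    length, dim = per_residue.shape
    window = np.zeros((2 * n + 1, dim), dtype=per_residue.dtype)
    for k, pos in enumerate(range(i - n, i + n + 1)):
        if 0 <= pos < length:          # positions outside the sequence stay zero
            window[k] = per_residue[pos]
    return window.reshape(-1)          # h_i^{local}
```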
#### 2.3.2 Global-level representation learning layer
It has been shown that the global features of protein sequences are critical for PPIs prediction (Zeng et al., 2020). Moreover, coevolutionary information quantifies the inter-residue coevolutionary relationships (Wang et al., 2017; Hanson et al., 2018; Li et al., 2021). Hence, we utilize a coevolution-enhanced global attention mechanism to distinguish the importance of residues. Assuming the \(i\)th residue is the prediction target, the features of all residues on the same sequence are linearly combined according to the attention scores,
\[h_{i}^{p}=\sum_{j=1}^{N}\alpha_{ij}h_{j}, \tag{1}\]
where \(h_{j}\) refers to the PSSM, predicted secondary structure, and raw protein sequence features of residue \(j\). \(\alpha_{ij}\) refers to the attention score, which estimates the importance weight of residue \(j\) on target residue \(i\). Intuitively, different residues on the same protein sequence should have different importance for the target residue, and those which match the characteristics of the whole protein and share a close relationship with the target residue should be paid more attention to. Therefore, \(\alpha_{ij}\) is calculated as follows:
\[\alpha_{ij}=\text{softmax}(\pi(i,j))=\frac{\exp{(\pi(i,j))}}{\sum_{k=1}^{N} \exp(\pi(i,k))}, \tag{2}\]
\[\pi(i,j)=q_{1}^{\text{T}}\text{LeakyRelu}([W_{1}(p\odot h_{j})\|W_{2}(h_{i} \odot h_{j})]\|w_{ij}). \tag{3}\]
Figure 1: The model architecture of CoGANPPIS. (A) An overall illustration of the proposed model. The feature extraction consists of three layers: the local-level representation aggregation layer, the global-level representation learning layer, and the coevolutionary information learning layer, whose outputs are passed into the prediction layer for the final prediction. (B) The structure of the CNN & pooling layer in the coevolutionary information learning layer. We use batch normalization, multiple convolution kernels, activation, and max-pooling to capture the coevolutionary information from DCA. (C) The structure of the prediction layer. We concatenate the outputs from feature extraction and use four fully connected layers to predict whether the residue is an interaction site.
Here, we use LeakyRelu as the activation function. \(\odot\) indicates element-wise product, \(\|\) indicates concatenation operation, and \(w_{ij}\in\mathbb{R}^{1}\) refers to the \((i,j)\) element of DCA matrix of the current protein, which provides us some clues for the coevolutionary relationship between the residue \(i\) and residue \(j\). \(W_{1}\in\mathbb{R}^{d\times d}\), \(W_{2}\in\mathbb{R}^{d\times d}\) and \(q_{1}\in\mathbb{R}^{2d+1}\) are trainable parameters. \(p\) can be seen as the feature representation of the whole protein sequence, which is obtained by mean-pooling on all the residues' features on this protein,
\[p=\frac{1}{N}\sum_{j=1}^{N}h_{j}. \tag{4}\]
Our approach makes the attention weights between the target residue and other residues dependent on not only the whole protein sequence feature representation but also the coevolutionary relationships between them, suggesting that those residues which match the characteristics of the whole protein and are closely related to the target residue will be attached more importance.
Then we concatenate the global representation of the target residue \(h_{i}^{p}\) and its original feature \(h_{i}\),
\[h_{i}^{global}=h_{i}^{p}\|h_{i}, \tag{5}\]
where \(h_{i}^{global}\) is the result of the global-level representation learning layer.
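A minimal PyTorch sketch of Eqs. (1)-(5) for a single protein is shown below. It materializes all \(N\times N\) pairwise terms for clarity rather than memory efficiency, and the treatment of padded positions is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoevolutionGlobalAttention(nn.Module):
    """Sketch of Eqs. (1)-(5): coevolution-enhanced global attention for one protein."""

    def __init__(self, d: int):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=False)
        self.q1 = nn.Linear(2 * d + 1, 1, bias=False)

    def forward(self, h: torch.Tensor, dca: torch.Tensor) -> torch.Tensor:
        # h: (N, d) per-residue features; dca: (N, N) direct coupling matrix
        N = h.size(0)
        p = h.mean(dim=0, keepdim=True)                   # Eq. (4): protein feature
        left = self.W1(p * h)                             # W1(p ⊙ h_j), shape (N, d)
        hi = h.unsqueeze(1).expand(N, N, -1)              # h_i broadcast over j
        hj = h.unsqueeze(0).expand(N, N, -1)              # h_j broadcast over i
        right = self.W2(hi * hj)                          # W2(h_i ⊙ h_j), (N, N, d)
        cat = torch.cat([left.unsqueeze(0).expand(N, N, -1),
                         right,
                         dca.unsqueeze(-1)], dim=-1)      # (N, N, 2d + 1)
        scores = self.q1(F.leaky_relu(cat)).squeeze(-1)   # pi(i, j), Eq. (3)
        alpha = torch.softmax(scores, dim=-1)             # Eq. (2)
        h_p = alpha @ h                                   # Eq. (1): (N, d)
        return torch.cat([h_p, h], dim=-1)                # Eq. (5): h^{global}, (N, 2d)
```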
#### 2.3.3 Coevolutionary information learning layer
We have introduced coevolutionary information into the attention mechanism above to exploit the relationships among residues. We now further utilize the coevolutionary information on a larger scale, i.e., at the whole-sequence level. Suppose we are predicting the \(i\)th residue on the protein sequence. First, we take its corresponding column in the DCA matrix as its coevolutionary information. Then we pass it into the CNN & pooling layer as shown in Figure 1:
\[h_{i,k}^{dca}=\text{Relu}(\text{conv1d}^{(k)}(\text{BN}(\text{DCA}[:,i]))),\qquad k\in[1,K], \tag{6}\]
\[h_{i}^{dca}=\|_{k=1}^{K}h_{i,k}^{dca}, \tag{7}\]
where \(\text{DCA}[:,i]\) is the \(i\)th column of the DCA matrix of the underlying protein. BN denotes the batch-normalization operation, and the 1D convolution conv1d extracts the normalized coevolutionary features, with Relu as the activation. We use \(K\) different convolution kernels for a better extraction of coevolutionary features. Finally, we obtain the coevolutionary representation \(h_{i}^{dca}\) by concatenating all \(K\) results.
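A PyTorch sketch of this layer follows, using the kernel sizes 3, 5 and 7 reported in Section 3.1; the number of output channels per kernel is an illustrative assumption.

```python
import torch
import torch.nn as nn

class CoevolutionCNN(nn.Module):
    """Sketch of Eqs. (6)-(7): multi-kernel Conv1d over a residue's DCA column."""

    def __init__(self, channels: int = 32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.bn = nn.BatchNorm1d(1)
        self.convs = nn.ModuleList(
            nn.Conv1d(1, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.pool = nn.AdaptiveMaxPool1d(1)

    def forward(self, dca_column: torch.Tensor) -> torch.Tensor:
        # dca_column: (batch, 500) -- the DCA column of each target residue
        x = self.bn(dca_column.unsqueeze(1))                # (batch, 1, 500)
        outs = [self.pool(torch.relu(conv(x))).squeeze(-1)  # Eq. (6) + max-pooling
                for conv in self.convs]
        return torch.cat(outs, dim=-1)                      # Eq. (7): h_i^{dca}
```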
#### 2.3.4 Prediction layer
To predict whether an amino acid residue is an interaction site or not, first we concatenate three former feature extraction results to obtain the final representation:
\[h_{i}^{pred}=h_{i}^{local}\|h_{i}^{global}\|h_{i}^{dca}, \tag{8}\]
where \(h_{i}^{local}\), \(h_{i}^{global}\) and \(h_{i}^{dca}\) are the results of local-level representation aggregation layer, global-level representation learning layer, and coevolutionary information learning layer. \(h_{i}^{pred}\) is the final representation of the residue, which will be passed into fully connected layers:
\[x^{(t)}=\text{Relu}(W^{(t)}x^{(t-1)}+b^{(t)}),t\in[1,T], \tag{9}\]
where \(x^{(t)}\) and \(x^{(t-1)}\) are the output and input vectors of the \(t\)th fully connected layer, respectively. Here, \(x^{(0)}=h_{i}^{pred}\). \(W^{(t)}\) denotes the weight matrix and \(b^{(t)}\) the bias. ReLU and dropout are used in each layer except the last one. After the last layer, a sigmoid function generates the final prediction:
\[\hat{y}=\frac{1}{1+e^{-x^{(T)}}}, \tag{10}\]
where \(\hat{y}\) denotes the predicted probability of the residue being an interaction site, and \(1-\hat{y}\) the predicted probability of it being a non-interaction site.
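A sketch of the prediction layer follows, using the layer sizes and dropout ratio reported in Section 3.1.

```python
import torch.nn as nn

def prediction_layer(in_dim: int) -> nn.Sequential:
    """Eqs. (8)-(10): four fully connected layers (1024, 256, 8, 1) with sigmoid output."""
    dims = [in_dim, 1024, 256, 8, 1]
    layers = []
    for t in range(1, len(dims)):
        layers.append(nn.Linear(dims[t - 1], dims[t]))
        if t < len(dims) - 1:              # ReLU + dropout on all but the last layer
            layers += [nn.ReLU(), nn.Dropout(0.1)]
    layers.append(nn.Sigmoid())            # Eq. (10)
    return nn.Sequential(*layers)
```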
### Evaluation Metrics
PPIs prediction can be seen as a binary classification problem for identifying whether an amino acid residue is an interaction site or not. Consequently, there could be four types of results based on the residue's true category and predicted category, i.e., true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Here, TP and TN refer to the correctly predicted interaction sites and non-interaction sites respectively; FP and FN refer to the incorrectly predicted interaction sites and non-interaction sites respectively.
We select six evaluation metrics to comprehensively evaluate the predictive performance: area under the precision-recall curve (AUPRC), accuracy (ACC), recall, precision, F-measure (F\({}_{1}\)), and Matthews correlation coefficient (MCC). Considering that our datasets are imbalanced, with more non-interaction sites than interaction sites, the F\({}_{1}\) and MCC indices deserve the most attention. The formulas for these metrics are as follows:
\[\text{ACC}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{ FN}}, \tag{11}\]
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}, \tag{12}\]
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}, \tag{13}\]
\[\text{F}_{1}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{ Precision}+\text{Recall}}, \tag{14}\]
\[\text{MCC}=\frac{\text{TP}\times\text{TN}-\text{FP}\times\text{FN}}{\sqrt{( \text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+ \text{FN})}}. \tag{15}\]
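For reference, Eqs. (14)-(15) translate directly into Python from confusion-matrix counts (zero denominators are not guarded in this sketch):

```python
import math

def f1_and_mcc(tp: int, tn: int, fp: int, fn: int):
    """Compute F1 (Eq. 14) and MCC (Eq. 15) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return f1, mcc
```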
## 3 Results and Discussion
### Implementation
In the feature extraction part, the sliding window length in the local-level representation aggregation layer is set to 7. We use three convolution kernels in the CNN & pooling layer, with sizes 3, 5, and 7. In the classification part, we use four fully connected layers with 1024, 256, 8, and 1 nodes, the first three of which are followed by dropout with ratio 0.1. We use the weighted cross-entropy loss as the loss function:
\[L=-\frac{1}{m}\sum_{i=1}^{m}\left(wy_{i}\log(\hat{y}_{i})+(1-y_{i})\log(1-\hat{y}_{i})\right), \tag{16}\]
where \(m\) is the number of training samples and \(w\) is the positive-class weight, set to 4. An interaction site is labeled as 1 (\(y_{i}=1\)) and a non-interaction site as 0 (\(y_{i}=0\)), and \(\hat{y}_{i}\) is the predicted probability of sample \(i\) being an interaction site. We use adaptive moment estimation (Adam) as the optimizer with a learning rate of 0.0001, and a batch size of 256. The model is implemented in PyTorch and trained on an NVIDIA GTX 1080 Ti.
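A PyTorch sketch of the weighted loss in Eq. (16); the clamping constant is a numerical guard we add, not part of the equation.

```python
import torch

def weighted_cross_entropy(y_hat: torch.Tensor, y: torch.Tensor,
                           w: float = 4.0) -> torch.Tensor:
    """Eq. (16): cross-entropy with weight w on the positive (interaction) class."""
    eps = 1e-7                              # numerical guard against log(0)
    y_hat = y_hat.clamp(eps, 1.0 - eps)
    return -(w * y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()
```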
Fine-tuning is used in this model. Before training on Dset422 and Dset448, we first train our model on Dset4392 and save the model parameters at the epoch achieving the best performance on Dset4392. When training on Dset422 and Dset448, we load the saved weights into the feature extraction part and freeze them, so that the feature extraction parameters stay unchanged during training; the training and validation data are used only to train the fully connected layers in the prediction layer.
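A sketch of this fine-tuning procedure is given below; `feature_extraction` and `prediction` are hypothetical submodule names used purely for illustration, as is the checkpoint file name.

```python
import torch

def finetune_setup(model: torch.nn.Module,
                   checkpoint: str = "pretrained_dset4392.pt") -> torch.optim.Adam:
    """Load Dset4392-pretrained weights, freeze feature extraction, train prediction only.

    `model.feature_extraction` / `model.prediction` are assumed (hypothetical) names.
    """
    model.feature_extraction.load_state_dict(torch.load(checkpoint))
    for param in model.feature_extraction.parameters():
        param.requires_grad = False         # feature extraction stays unchanged
    # Only the prediction-layer parameters remain trainable.
    return torch.optim.Adam(model.prediction.parameters(), lr=1e-4)
```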
### Comparison with competing methods
To evaluate the predictive performance of our model (CoGANPPIS), we compare it with seven popular sequence-based competing methods (PSIVER, ISIS, SPRINGS, DELPHI, DeepPPISP, ACNN and ensnet_p). Specifically, PSIVER (Murakami and Mizuguchi, 2010) uses a Naive Bayes classifier and kernel density estimation to predict PPIs from sequence features. ISIS (Ofran and Rost, 2007) combines predicted structural features with evolutionary information in a shallow neural network. SPRINGS uses an artificial neural network to generate PPIs predictions (Singh et al., 2014). DELPHI (Li et al., 2021) employs a fine-tuned ensemble model combining several recurrent and convolutional neural networks. DeepPPISP (Zeng et al., 2020) considers both local contextual and global information and applies a convolutional neural network. ACNN (Lu et al., 2021) employs a local attention mechanism. Finally, ensnet_p is an ensemble model combining different neural network models (Stringer et al., 2022). Notably, none of these approaches uses coevolutionary information or a global attention mechanism.
Table 2 presents the experimental results of the seven competing sequence-based PPIs prediction models and our proposed model. CoGANPPIS achieves the best performance on both datasets in terms of all six metrics consistently, which confirms its effectiveness. The ROC and PR curves of CoGANPPIS and the competing methods on Dset422 and Dset448 are shown in Figure 2; both the AUPRC and AUC of CoGANPPIS are higher than those of the competing methods.
### Ablation analysis
In this part, we test the usefulness of introducing coevolutionary information into PPIs prediction. First, we evaluate the performance of the model with coevolutionary information included. Then we remove the coevolutionary information and retrain the model to measure the comparative performance. The model without coevolutionary information is denoted CoGANPPIS\({}^{\ominus}\). Table 3 reports the performance of CoGANPPIS and CoGANPPIS\({}^{\ominus}\). We find that CoGANPPIS outperforms CoGANPPIS\({}^{\ominus}\) on
| Method | AUPRC | ACC | Recall | Precision | F\({}_{1}\) | MCC | AUPRC | ACC | Recall | Precision | F\({}_{1}\) | MCC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSIVER | 0.230 | 0.553 | 0.426 | 0.204 | 0.276 | 0.086 | 0.238 | 0.631 | 0.337 | 0.225 | 0.270 | 0.131 |
| ISIS | 0.284 | 0.671 | 0.532 | 0.249 | 0.339 | 0.198 | 0.316 | 0.624 | 0.514 | 0.293 | 0.373 | 0.255 |
| SPRINGS | 0.279 | 0.547 | 0.772 | 0.189 | 0.303 | 0.120 | 0.305 | 0.601 | 0.546 | 0.190 | 0.282 | 0.134 |
| DELPHI | 0.311 | 0.628 | 0.533 | 0.288 | 0.371 | 0.232 | 0.320 | 0.633 | 0.588 | 0.280 | 0.379 | 0.234 |
| DeepPPISP | 0.320 | 0.655 | 0.577 | 0.303 | 0.397 | 0.206 | 0.351 | 0.661 | 0.588 | 0.307 | 0.404 | 0.276 |
| ACNN | 0.306 | 0.689 | 0.598 | 0.254 | 0.356 | 0.224 | 0.301 | 0.693 | 0.603 | 0.259 | 0.362 | 0.232 |
| ensnet_p | 0.377 | 0.696 | 0.600 | 0.291 | 0.392 | 0.246 | 0.385 | 0.745 | 0.580 | 0.302 | 0.397 | 0.259 |
| CoGANPPIS (not pretrained) | 0.364 | **0.753** | 0.431 | **0.382** | 0.405 | 0.251 | 0.355 | 0.674 | 0.551 | **0.324** | 0.408 | 0.252 |
| CoGANPPIS | **0.392** | 0.702 | **0.625** | 0.323 | **0.425** | **0.304** | **0.393** | **0.746** | **0.630** | **0.324** | **0.428** | **0.307** |

Table 2: Performance comparison (left six metric columns: Dset422; right six: Dset448; best values in bold)
Figure 2: ROC and PR plots of CoGANPPIS and other seven competing models. (a) ROC plot on Dset422, (b) PR plot on Dset422, (c) ROC plot on Dset448, (d) PR plot on Dset448. CoGANPPIS clearly outperforms other models
both datasets, which indicates that introducing coevolutionary information helps improve predictive accuracy and thus validates its effectiveness for PPIs prediction.
### Model performance on proteins of different lengths
Considering that protein sequence length varies greatly across proteins, it is worth studying the predictive performance on proteins of different lengths. To this end, Figure 3 plots the results of CoGANPPIS, CoGANPPIS\({}^{\ominus}\), and ensnet_p in terms of \(F_{1}\) and MCC for proteins of different lengths on the two datasets.
These results lead to the following observations. First, as the protein sequence length increases, the performance of all three models on both datasets shows an overall downward trend. This can be explained by protein structure and function becoming more complex as length increases, making PPIs more difficult to predict. Second, it is interesting that the performance improvement of CoGANPPIS and CoGANPPIS\({}^{\ominus}\) over ensnet_p grows with protein sequence length. Taking \(F_{1}\) on Dset422 in Figure 3 as an example, when the length is less than 100, the \(F_{1}\) of
| Method | AUPRC | ACC | Recall | Precision | F\({}_{1}\) | MCC | AUPRC | ACC | Recall | Precision | F\({}_{1}\) | MCC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CoGANPPIS\({}^{\ominus}\) | 0.383 | 0.647 | 0.613 | 0.316 | 0.417 | 0.293 | 0.388 | 0.670 | **0.637** | 0.316 | 0.422 | 0.300 |
| CoGANPPIS | **0.392** | **0.702** | **0.625** | **0.323** | **0.425** | **0.304** | **0.393** | **0.746** | 0.630 | **0.324** | **0.428** | **0.307** |

Table 3: Ablation analysis (left six metric columns: Dset422; right six: Dset448; best values in bold)
Figure 3: Models’ performance under various protein lengths. (a) \(F_{1}\) on Dset422, (b) \(F_{1}\) on Dset448, (c) MCC on Dset422, (d) MCC on Dset448
CoGANPPIS and CoGANPPIS\({}^{\ominus}\) are 0.450 and 0.443 respectively, which are only 0.023 and 0.016 higher than that of ensnet_p (0.427). When the protein length is between 200 and 300, the improvements increase to 0.029 and 0.019 (0.414 vs 0.385 and 0.404 vs 0.385). When the protein length is greater than 500, the gaps further increase to 0.031 and 0.020 (0.403 vs 0.372 and 0.392 vs 0.372). This clearly shows that the longer the protein sequence, the more PPIs prediction relies on global information extraction, which is better captured by our global attention mechanism (even without coevolutionary information). Third, comparing CoGANPPIS and CoGANPPIS\({}^{\ominus}\), both \(F_{1}\) and MCC of CoGANPPIS are better than those of CoGANPPIS\({}^{\ominus}\) on both datasets, which further verifies the effectiveness of coevolutionary information in PPIs prediction and confirms the conclusion of the ablation analysis.
### Impact of coevolution-enhanced global attention mechanism
We have verified the effectiveness of coevolutionary information and the global attention mechanism. We now study how the coevolution-enhanced global attention mechanism works. First, for each pair of residues, we examine the relationship between its direct coupling degree and its labels. As shown in Figure 4(a), the larger the direct coupling degree, the higher the probability that the pair of residues share the same label. Next, we take protein Q2T3W4 in Dset448 as an example. We extract the attention weights of the first residue of Q2T3W4 during training, plot a scatter diagram, and fit a trend line to the points with attention weights larger than 0.001, as shown in Figure 4(b). The slope of the trend line is positive, which implies that, in general, the larger the direct coupling degree, the higher the attention weight. Hence, the target residue pays more attention to highly correlated residues during training, which is a noticeable advantage of this attention mechanism.
### Visualization
As mentioned above, the attention weights between two residues show a positive correlation with their direct coupling degrees. This prompts us to explore to what extent our attention mechanism improves on allocating weights purely according to DCA. For a protein sequence with \(N\) amino acid residues, we sort residue pairs according to their attention weights and their direct coupling degrees respectively, and keep the \(5N\), \(10N\), and \(20N\) highest-ranking pairs. Figure 5 shows an example for protein 7cei_A from Dset422. The three rows correspond to the \(5N\), \(10N\), and \(20N\) settings in turn (only the corresponding number of selected residue pairs are colored). Pairs with the same labels are indicated in red and pairs with different labels are
Figure 4: Analysis of the coevolution-enhanced global attention mechanism. (a) The probabilities of residue pairs having the same label (orange) and different labels (green) under different direct coupling degrees in our dataset. (b) Taking protein Q2T3W4 as an example, a scatter diagram of its attention weights under different direct coupling degrees. We also plot the trend line of the scattered points with attention weights larger than 0.001, which shows the positive correlation between the two variables.
shown in green. It is evident that our attention mechanism works consistently better than pure DCA, given the clearly higher proportion of red points in all three settings. To be more quantitative, we bin the predicted pairs according to their separation along the protein sequence, as shown in the third column of Figure 5. We observe that our attention mechanism captures residues with the same labels more accurately than pure DCA. Moreover, our attention mechanism attaches more weight to distant residues, whereas pure DCA tends to concentrate on neighboring, locally aggregated residues; this can be attributed to our attention mechanism's consideration of the whole-protein feature representation.
Figure 5: Visualization of attention weights of protein 7cei_A using our attention mechanism (a, d and g) and pure DCA (b, e and h). Residue pairs are sorted according to the attention weights and the direct coupling degrees respectively, keeping the 5N (a-c), 10N (d-f), and 20N (g-i) highest-ranking pairs. Pairs with the same labels are indicated in red and pairs with different labels in green. The right-most panels (c, f and i) bin the weights of residue pairs according to their separation along the protein sequence; the overall bars count all predictions, with the shaded parts marking the TPs. Note that our attention mechanism captures residues with the same labels more accurately than pure DCA, and the residues it attends to are more evenly distributed than those of pure DCA.
## 4 Conclusion
The aim of this paper is to improve PPIs prediction performance based solely on protein sequences, which is important for understanding the biological mechanisms of proteins both experimentally and theoretically. A number of sequence-based PPIs predictors have been developed in recent years. However, most of these works use only the commonly adopted features, without considering coevolutionary information, which provides rich clues about inter-residue relationships. They are also not good at predicting PPIs for long proteins.
Here, we propose a coevolution-enhanced global attention neural network (CoGANPPIS). Specifically, we employ a coevolution-enhanced global attention mechanism both to better capture inter-residue relationships and to improve prediction for long proteins. We further aggregate the local residue features and apply a CNN & pooling layer to the coevolutionary information features as a supplement, and then use several fully connected layers to generate the final prediction. Extensive experiments comparing CoGANPPIS with seven other popular methods on two standard datasets show that our proposed model achieves state-of-the-art performance.
Further experimental analysis shows that: (1) coevolutionary information improves the performance of PPIs prediction; (2) CoGANPPIS brings larger performance improvements over previous methods as the protein sequence becomes longer, implying that CoGANPPIS has a better understanding of whole protein sequences; and (3) compared with allocating attention weights purely according to DCA, the proposed coevolution-enhanced global attention mechanism pays more attention to residues with the same labels and produces more evenly distributed, rather than locally aggregated, attention weights.
Although CoGANPPIS shows advantages over previous methods, it has some limitations. First, CoGANPPIS requires considerable computation time due to its use of multiple sequence alignments and direct coupling analysis to generate coevolutionary information. In addition, DCA's accuracy depends on the number of homologs, which makes it less reliable for proteins with few homologs. In the future, we are committed to finding useful, practical, and time-efficient features to make the prediction faster and more accurate.
## Funding
This study was supported by funding from the National Natural Science Foundation of China (72222009, 71991472 to X.Z), the National Natural Science Foundation of China (3210040426 to Z.H.), the Shanghai Rising-Star Program (21QB1400900 to Z.H.), and was also partly supported by a grant from the major project of Study on Pathogenesis and Epidemic Prevention Technology System (2021YFC2302500) by the Ministry of Science and Technology of China.
|
2303.08968 | A parsimonious neural network approach to solve portfolio optimization
problems without using dynamic programming | We present a parsimonious neural network approach, which does not rely on
dynamic programming techniques, to solve dynamic portfolio optimization
problems subject to multiple investment constraints. The number of parameters
of the (potentially deep) neural network remains independent of the number of
portfolio rebalancing events, and in contrast to, for example, reinforcement
learning, the approach avoids the computation of high-dimensional conditional
expectations. As a result, the approach remains practical even when considering
large numbers of underlying assets, long investment time horizons or very
frequent rebalancing events. We prove convergence of the numerical solution to
the theoretical optimal solution of a large class of problems under fairly
general conditions, and present ground truth analyses for a number of popular
formulations, including mean-variance and mean-conditional value-at-risk
problems. We also show that it is feasible to solve Sortino ratio-inspired
objectives (penalizing only the variance of wealth outcomes below the mean) in
dynamic trading settings with the proposed approach. Using numerical
experiments, we demonstrate that if the investment objective functional is
separable in the sense of dynamic programming, the correct time-consistent
optimal investment strategy is recovered, otherwise we obtain the correct
pre-commitment (time-inconsistent) investment strategy. The proposed approach
remains agnostic as to the underlying data generating assumptions, and results
are illustrated using (i) parametric models for underlying asset returns, (ii)
stationary block bootstrap resampling of empirical returns, and (iii)
generative adversarial network (GAN)-generated synthetic asset returns. | Pieter M. van Staden, Peter A. Forsyth, Yuying Li | 2023-03-15T22:37:33Z | http://arxiv.org/abs/2303.08968v1 | A parsimonious neural network approach to solve portfolio optimization problems without using dynamic programming
###### Abstract
We present a parsimonious neural network approach, which does not rely on dynamic programming techniques, to solve dynamic portfolio optimization problems subject to multiple investment constraints. The number of parameters of the (potentially deep) neural network remains independent of the number of portfolio rebalancing events, and in contrast to, for example, reinforcement learning, the approach avoids the computation of high-dimensional conditional expectations. As a result, the approach remains practical even when considering large numbers of underlying assets, long investment time horizons or very frequent rebalancing events. We prove convergence of the numerical solution to the theoretical optimal solution of a large class of problems under fairly general conditions, and present ground truth analyses for a number of popular formulations, including mean-variance and mean-conditional value-at-risk problems. We also show that it is feasible to solve Sortino ratio-inspired objectives (penalizing only the variance of wealth outcomes below the mean) in dynamic trading settings with the proposed approach. Using numerical experiments, we demonstrate that if the investment objective functional is separable in the sense of dynamic programming, the correct time-consistent optimal investment strategy is recovered, otherwise we obtain the correct pre-commitment (time-inconsistent) investment strategy. The proposed approach remains agnostic as to the underlying data generating assumptions, and results are illustrated using (i) parametric models for underlying asset returns, (ii) stationary block bootstrap resampling of empirical returns, and (iii) generative adversarial network (GAN)-generated synthetic asset returns.
**Keywords**: Asset allocation, portfolio optimization, neural network, dynamic programming
**JEL classification**: G11, C61
## 1 Introduction
We present, and analyze the convergence of, a parsimonious and flexible neural network approach to obtain the numerical solution of a large class of dynamic (i.e. multi-period) portfolio optimization problems that can be expressed in the following form,
\[\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P}\in\mathcal{A}}\left\{\,E_{\mathcal{P} }^{t_{0},w_{0}}\left[F\left(W\left(T\right),\xi\right)+G\left(W\left(T\right), E_{\mathcal{P}}^{t_{0},w_{0}}\left[W\left(T\right)\right],w_{0},\xi\right)\, \right]\,\right\}. \tag{1.1}\]
While rigorous definitions and assumptions are discussed in subsequent sections, here we simply note that in general, \(F:\mathbb{R}^{2}\rightarrow\mathbb{R}\) and \(G:\mathbb{R}^{4}\rightarrow\mathbb{R}\) denote some continuous functions and \(\xi\in\mathbb{R}\) some auxiliary variable, with \(T>0\) denoting the investment time horizon, \(W\left(t\right),t\in[t_{0},T]\), the controlled wealth process, and \(\mathcal{P}\) representing the investment strategy (or control) implemented over \([t_{0},T]\). Typically, \(\mathcal{P}\) specifies the amount or fraction of wealth to invest in each of (a potentially large number of) the underlying assets at each portfolio rebalancing event, which in practice occurs at some discrete subset of rebalancing times in \([t_{0},T]\). \(\mathcal{A}\) denotes the set of admissible investment strategies encoding the (possibly multiple) investment constraints faced by the investor. Finally, \(E_{\mathcal{P}}^{t_{0},w_{0}}\left[\cdot\right]\) denotes the expectation given control \(\mathcal{P}\) and initial wealth \(W\left(t_{0}\right)=w_{0}\).
Although (1.1) is written for objective functions involving the terminal portfolio wealth \(W\left(T\right)\), the approach and convergence analysis could be generalized without difficulty to objective functions that are wealth path-dependent, i.e. functions of \(\left\{W\left(t\right):t\in\mathcal{T}\right\}\) for some subset \(\mathcal{T}\subseteq\left[t_{0},T\right]\) - see Forsyth et al. (2022); Van Staden et al. (2022a) for examples. However, since a sufficiently rich class of problems are of the form (1.1), this will remain the main focus of this paper.
The proposed approach does not rely on the separability of the objective functional in (1.1) in the sense of dynamic programming, remains agnostic as to the underlying data generation assumptions, and is sufficiently flexible such that practical considerations such as multiple investment constraints and discrete portfolio rebalancing can be incorporated without difficulty.
Leaving the more formal treatment for subsequent sections, for introductory purposes we highlight some specific examples of problems of the form (1.1):
* Utility maximization (see for example Vigna (2014)), in which case there is no outer optimization problem and \(G\equiv 0\), while \(w\to U\left(w\right)\) denotes the investor's utility function, so that (1.1) therefore reduces to \[\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[U \left(W\left(T\right)\right)\right]\right\}.\] (1.2)
* Mean-variance (MV) optimization (see e.g. Li and Ng (2000); Zhou and Li (2000)), with \(\rho>0\) denoting the scalarization (or risk aversion) parameter, where the problem \[\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[W \left(T\right)\right]-\rho\cdot Var_{\mathcal{P}}^{t_{0},w_{0}}\left[W\left(T \right)\right]\right\},\] (1.3) can also be written in the general form (1.1).
* Mean-Conditional Value-at-Risk (CVaR) optimization, in which case we have both an inner and an outer optimization problem (see e.g. Forsyth (2020); Miller and Yang (2017)), resulting in a problem of the form \[\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[F\left(W\left(T\right),\xi\right)\right]\right\},\] (1.4) for a particular choice of the function \(F\) (see (2.15) below).
* To illustrate the flexibility and generality of the proposed approach, we also consider a "mean semi-variance" portfolio optimization problem that is inspired by the popular Sortino ratio (Bodie et al. (2014)) in the case of one-period portfolio analysis, where only the variance of downside outcomes relative to the mean is penalized. In the case of dynamic trading strategies, this suggests an objective function of the form \[\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[W \left(T\right)-\rho\cdot\left(\min\left\{W\left(T\right)-E_{\mathcal{P}}^{t_ {0},w_{0}}\left[W\left(T\right)\right],0\right\}\right)^{2}\right]\right\},\] (1.5) where, as in the case of (1.3), the parameter \(\rho>0\) encodes the trade-off between risk and return. Note that (1.5) is not separable in the sense of dynamic programming, and in the absence of embedding results (analogous to those of Li and Ng (2000); Zhou and Li (2000) in the case of MV optimization (1.3)), problem (1.5) cannot be solved using traditional dynamic programming-based methods.
However, we emphasize that (1.2)-(1.5) are only a selection of examples, and the proposed approach and theoretical analysis remains applicable to problems that can be expressed in the general form (1.1).
Portfolio optimization problems of the form (1.1) can give rise to investment strategies that are not time-consistent due to the presence of the (possibly non-linear) function \(G\) (Bjork et al. (2021)), so that the objective in (1.1) is potentially not separable in the sense of dynamic programming (see for example (1.3) or (1.5)). This gives rise to two related problems: (i) since (1.1) cannot be solved using a dynamic programming-based approach, some other solution methodology has to be implemented, or some re-interpretation of the problem or of the concept of "optimality" might be required (see for example Bjork and Murgoci (2014); Vigna (2022)); (ii) if the investment strategies are time-inconsistent, this raises questions as to whether they are feasible to implement as practical investment strategies.
We make the following general observations:
* It may be desirable to avoid using dynamic programming (DP) even if (1.1) _can_ be solved using DP techniques, such as in the special case where \(G\equiv 0\) in (1.1) and the investment strategies are time-consistent. For example, it is well known that DP has an associated "curse of dimensionality", in that as the
number of state variables increases linearly, the computational burden increases exponentially (Fernandez-Villaverde et al. (2020); Han and Weinan (2016)). In addition, since DP techniques necessarily incur estimation errors at each time step, significant error amplification can occur, which is further exacerbated in high-dimensional settings (see for example Li et al. (2020); Tsang and Wong (2020); Wang and Foster (2020)).
However, instead of relying on DP-based techniques and attempting to address the challenges of dimensionality using machine learning techniques (see for example Bachouch et al. (2022); Dixon et al. (2020); Fernandez-Villaverde et al. (2020); Gao et al. (2020); Henry-Labordere (2017); Hure et al. (2021); Lucarelli and Borrotti (2020); Park et al. (2020)), the proposed method fundamentally avoids DP techniques altogether. This is especially relevant in our setting, since we have shown that in the case of portfolio optimization problems specifically, DP can be _unnecessarily_ high-dimensional even in simple settings (see Van Staden et al. (2022b)). This occurs since the objective functional (or performance criterion (Øksendal and Sulem (2019))) is typically high-dimensional, while the optimal investment strategy (the fundamental quantity of concern) remains relatively low-dimensional. The proposed method therefore forms part of the significant recent interest in developing machine learning techniques that solve multi-period portfolio optimization problems while avoiding DP techniques altogether (see for example Li and Forsyth (2019); Ni et al. (2022); Tsang and Wong (2020); Van Staden et al. (2022b)).
* Time-inconsistent problems naturally arise in financial applications (see Bjork et al. (2021) for numerous examples), and their solution is an area of active research due to the unique challenges involved in solving these problems without resorting to DP techniques. Examples include the mean-variance problem, which remained open for decades until its solution using the embedding technique of Li and Ng (2000); Zhou and Li (2000). As a result, being able to obtain a numerical solution to problems of the form (1.1) directly is potentially very valuable for research. The solution of time-inconsistent problems is also of practical interest, since in many cases there exists an induced time-consistent objective function (Forsyth (2020); Strub et al. (2019a,b)). The optimal policy for this induced time-consistent objective function is identical to the pre-commitment policy at time zero. The induced time-consistent strategy is, of course, implementable (Forsyth (2020)), in the sense that the investor has no incentive to deviate at later times from the strategy determined at time zero. An alternative approach to handling time-inconsistent problems is to search for the equilibrium control (Bjork et al. (2021)). A fascinating result obtained in Bjork and Murgoci (2010) is that for every equilibrium control, there exists a standard, time-consistent problem which has the same control under a different objective function. This essentially means that the question of time-consistency is often a matter of perspective, since there may be alternative objective functions which give rise to the same pre-commitment control, yet are time-consistent. In fact, other subtle issues arise in comparing pre-commitment and time-consistent controls; see Vigna (2020, 2022) for further discussion. Furthermore, over very short time horizons, such as those encountered in optimal trade execution, time-consistency or its absence may not be of much concern to the investor or market participant (see for example Forsyth et al. (2011); Tse et al. (2013)). In addition, as noted by Bernard and Vanduffel (2014), if the strategy is realized in an investment product sold to a retail investor, then the optimal policy from the investor's point of view is in fact of pre-commitment type, since the retail client does not herself trade in the underlying assets during the lifetime of the contract.
As a result of these observations, we will consider problem (1.1) in its general form. Our method builds on and formalizes the initial results described in Li and Forsyth (2019) where a shallow NN was applied to a portfolio optimization problem with an objective that is separable in the sense of DP. The contributions of this paper are as follows:
1. We present a flexible neural network (NN) approach to solve problems of the form (1.1) that does not rely on DP techniques. Our approach only requires the solution of a _single_ optimization problem, and therefore avoids the error amplification problems associated with the time-recursion in DP-based techniques, including for example Reinforcement Learning algorithms such as Q-learning (see for example Dixon et al. (2020); Gao et al. (2020); Park et al. (2020)) or other algorithms relying at some level on the DP principle
for a time-stepping backward recursion (see for example Bachouch et al. (2022); Van Heeswijk and Poutre (2019)). Perhaps the best descriptor of our approach is Policy Function Approximation, in the taxonomy in Powell (2023).
We make very limited assumptions regarding the underlying asset dynamics. In particular, if underlying asset (and by extension wealth) dynamics are specified, this can be incorporated as easily as the case where the underlying dynamics can only be observed without any parametric assumptions.
The proposed solution methodology is _parsimonious_, in that the number of parameters does not scale with the number of rebalancing events. This contrasts the proposed methodology with, for example, that of Han and Weinan (2016); Hure et al. (2021); Tsang and Wong (2020), and ensures that our approach remains feasible even for problems with very long time horizons (for example the accumulation phase of a pension fund - see Forsyth et al. (2019)) or with shorter time horizons but frequent trading/rebalancing (for example the trade execution problems encountered in Forsyth et al. (2011)). The solution approach only places very weak requirements on the form of the investment objective in (1.1). In addition, we find that relatively shallow neural networks (at most two hidden layers) achieve very accurate results in ground truth testing, which ensures that the resulting NN is relatively easy and efficient to train, since it is less susceptible to the vanishing or exploding gradient problems associated with very deep neural networks (Goodfellow et al. (2016)).
2. We analyze the convergence of the proposed approach, and show that the theoretical optimal investment strategy of (1.1), provided it exists, can be attained by the numerical solution.
3. Finally, we present ground truth analyses confirming that the proposed approach is very effective in solving portfolio optimization problems of the form (1.1). The results illustrate numerically that if (1.1) is not separable in the sense of DP, our approach recovers the correct pre-commitment (time-inconsistent) optimal control, otherwise it recovers the correct time-consistent optimal control. To emphasize that the approach remains agnostic to the underlying data generation assumptions, results are illustrated using (i) parametric models for asset dynamics, (ii) stationary block bootstrap resampling of empirical asset returns, and (iii) generative adversarial network (GAN)-generated synthetic asset returns.
The remainder of the paper is organized as follows: Section 2 presents the problem formulation, while Section 3 provides a summary of the proposed approach, with additional technical and practical details provided in Appendix A and Appendix B. Section 4 presents the convergence analysis of the proposed approach. Finally, Section 5 provides ground truth analyses, with Section 6 concluding the paper and discussing possible avenues for future research.
## 2 Problem formulation
We start by formulating portfolio optimization problems of the form (1.1) more rigorously in a setting of discrete portfolio rebalancing and multiple investment constraints. Throughout, we work on a filtered probability space \(\left(\Omega,\mathcal{F},\left\{\mathcal{F}\left(t\right)\right\}_{t\in[t_{0},T]},\mathbb{P}\right)\) satisfying the usual conditions, with \(\mathbb{P}\) denoting the actual (and not the risk-neutral) probability measure.
Let \(\mathcal{T}\) denote the set of \(N_{rb}\) discrete portfolio rebalancing times in \(\left[t_{0}=0,T\right]\), which we assume to be equally-spaced to lighten notation,
\[\mathcal{T} = \left\{\left.t_{m}=m\Delta t\right|m=0,...,N_{rb}-1\right\}, \qquad\Delta t=T/N_{rb}, \tag{2.1}\]
where we observe that the last rebalancing event occurs at time \(t_{N_{rb}-1}=T-\Delta t\).
At each rebalancing time \(t_{m}\in\mathcal{T}\), the investor observes the \(\mathcal{F}\left(t_{m}\right)\)-measurable vector \(\boldsymbol{X}\left(t_{m}\right)=\left(X_{i}\left(t_{m}\right):i=1,...,n_{X}\right)\in\mathbb{R}^{n_{X}}\), which can be interpreted informally as the information taken into account by the investor in reaching their asset allocation decision. As a concrete example, we assume below that \(\boldsymbol{X}\left(t_{m}\right)\) includes at least the wealth available for investment, an assumption which can be rigorously justified using analytical results (see for example Van Staden et al. (2022)).
Given \(\boldsymbol{X}\left(t_{m}\right)\), the investor then rebalances a portfolio of \(N_{a}\) assets to new positions given by the vector
\[\boldsymbol{p}_{m}\left(t_{m},\boldsymbol{X}\left(t_{m}\right)\right) = \left(p_{m,i}\left(t_{m},\boldsymbol{X}\left(t_{m}\right)\right) :i=1,..,N_{a}\right)\in\mathbb{R}^{N_{a}}, \tag{2.2}\]
where \(p_{m,i}\left(t_{m},\boldsymbol{X}\left(t_{m}\right)\right)\) denotes the fraction of wealth \(W\left(t_{m}\right)\) invested in the \(i\)th asset at rebalancing time \(t_{m}\). The subscript "\(m\)" in the notation \(\boldsymbol{p}_{m}\) emphasizes that, in general, each rebalancing time \(t_{m}\in\mathcal{T}\) could be associated with a potentially different function \(\boldsymbol{p}_{m}:\mathbb{R}^{n_{X}+1}\rightarrow\mathbb{R}^{N_{a}}\); the subscript is dropped below when we consider a single function that is simply evaluated at different times, in which case we write \(\boldsymbol{p}:\mathbb{R}^{n_{X}+1}\rightarrow\mathbb{R}^{N_{a}}\).
For purposes of concreteness, we assume that the investor is subject to the constraints of (i) no short-selling and (ii) no leverage, although the proposed methodology can be adjusted without difficulty to treat different constraint formulations1. For illustrative purposes, we therefore assume that each allocation (2.2) is only allowed to take values in the \(\left(N_{a}-1\right)\)-dimensional probability simplex \(\mathcal{Z}\),
Footnote 1: As discussed in Section 3 and Appendix A, adjustments to the output layer of the neural network may be required.
\[\mathcal{Z} = \left\{\left(y_{1},...,y_{N_{a}}\right)\in\mathbb{R}^{N_{a}}: \sum_{i=1}^{N_{a}}y_{i}=1\text{ and }y_{i}\geq 0\text{ for all }i=1,...,N_{a}\right\}. \tag{2.3}\]
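When the control is parameterized by a neural network, one common way (assumed here for illustration; the required output-layer adjustments are discussed in Section 3 and Appendix A) to guarantee that the allocation lies in \(\mathcal{Z}\) is a softmax output transformation:

```python
import numpy as np

def to_simplex(z: np.ndarray) -> np.ndarray:
    """Map raw network outputs to the probability simplex Z of Eq. (2.3) via softmax."""
    e = np.exp(z - z.max())    # shift by the max for numerical stability
    return e / e.sum()
```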
In this setting, an investment strategy or control \(\mathcal{P}\) applicable to \(\left[t_{0},T\right]\) is therefore of the form,
\[\mathcal{P} = \left\{\mathbf{p}_{m}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)= \left(p_{m,i}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right):i=1,..,N_{a}\right):t _{m}\in\mathcal{T}\right\}, \tag{2.4}\]
while the set of admissible controls \(\mathcal{A}\) is defined by
\[\mathcal{A} = \left\{\left.\mathcal{P}=\left\{\mathbf{p}_{m}\left(t_{m},\mathbf{X} \left(t_{m}\right)\right):t_{m}\in\mathcal{T}\right\}\right|\mathbf{p}_{m}\left(t _{m},\mathbf{X}\left(t_{m}\right)\right)\in\mathcal{Z},\forall t_{m}\in\mathcal{T }\right\}. \tag{2.5}\]
The randomness in the system is introduced through the returns of the underlying assets. Specifically, let \(R_{i}\left(t_{m}\right)\) denote the \(\mathcal{F}\left(t_{m+1}\right)\)-measurable return observed on asset \(i\) over the interval \(\left[t_{m},t_{m+1}\right]\). We make no assumptions regarding the underlying asset dynamics but, at a minimum, we do require integrability under \(\mathbb{P}\), i.e. \(\mathbb{E}\left|R_{i}\left(t_{m}\right)\right|<\infty\) for all \(i\in\left\{1,...,N_{a}\right\}\) and \(m\in\left\{0,...,N_{rb}-1\right\}\). Informally, we will refer to the set
\[\mathbf{Y} = \left\{\left(Y_{i}\left(t_{m}\right):=1+R_{i}\left(t_{m}\right):i= 1,...,N_{a}\right)^{\top}:m\in\left\{0,...,N_{rb}-1\right\}\right\} \tag{2.6}\]
as the _path_ of (joint) asset returns over the investment time horizon \(\left[t_{0},T\right]\).
To clarify the subsequent notation, for any functional \(\psi\left(t\right),t\in\left[t_{0},T\right]\) we will use the notation \(\psi\left(t^{-}\right)\) and \(\psi\left(t^{+}\right)\) as shorthand for the one-sided limits \(\psi\left(t^{-}\right)=\lim_{\epsilon\downarrow 0}\psi\left(t-\epsilon\right)\) and \(\psi\left(t^{+}\right)=\lim_{\epsilon\downarrow 0}\psi\left(t+\epsilon\right)\), respectively.
Given control \(\mathcal{P}\in\mathcal{A}\), asset returns \(\mathbf{Y}\), initial wealth \(W\left(t_{0}^{-}\right)\coloneqq w_{0}>0\) and a (non-random) cash contribution schedule \(\left\{q\left(t_{m}\right):t_{m}\in\mathcal{T}\right\}\), the portfolio wealth dynamics for \(m=0,...,N_{rb}-1\) are given by the general recursion
\[W\left(t_{m+1}^{-};\mathcal{P},\mathbf{Y}\right) = \left[W\left(t_{m}^{-};\mathcal{P},\mathbf{Y}\right)+q\left(t_{m} \right)\right]\cdot\sum_{i=1}^{N_{a}}p_{m,i}\left(t_{m},\mathbf{X}\left(t_{m} \right)\right)\cdot Y_{i}\left(t_{m}\right). \tag{2.7}\]
Note that we write \(W\left(u\right)=W\left(u;\mathcal{P},\mathbf{Y}\right)\) to emphasize the dependence of wealth on the control \(\mathcal{P}\) and the (random) path of asset returns in \(\mathbf{Y}\) that relates to the time period \(t\in\left[t_{0},u\right]\). In other words, despite using \(\mathbf{Y}\) in the notation for simplicity, \(W\left(u;\mathcal{P},\mathbf{Y}\right)\) is \(\mathcal{F}\left(u\right)\)-measurable. Finally, since there are no contributions or rebalancing at maturity, we simply have \(W\left(t_{N_{rb}}^{-}\right)=W\left(T^{-}\right)=W\left(T\right)=W\left(T; \mathcal{P},\mathbf{Y}\right)\).
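To make the recursion (2.7) concrete, the following vectorized Monte Carlo sketch rolls wealth forward along simulated return paths; for simplicity it assumes a deterministic (glide-path style) allocation that does not depend on \(\boldsymbol{X}\left(t_{m}\right)\), whereas the control in (2.4) is in general state-dependent.

```python
import numpy as np

def terminal_wealth(w0: float, q: np.ndarray, p: np.ndarray,
                    Y: np.ndarray) -> np.ndarray:
    """Simulate Eq. (2.7): q is (N_rb,) contributions, p is (N_rb, N_a) allocation
    fractions, Y is (n_paths, N_rb, N_a) gross returns; returns W(T) per path."""
    n_paths, n_rb, _ = Y.shape
    w = np.full(n_paths, w0, dtype=float)
    for m in range(n_rb):
        w = (w + q[m]) * (Y[:, m, :] @ p[m])   # wealth recursion (2.7)
    return w
```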
### Investment objectives
Given this general investment setting and wealth dynamics (2.7), our goal is to solve dynamic portfolio optimization problems of the general form
\[\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P}\in\mathcal{A}}J\left(\mathcal{P},\xi;t _{0},w_{0}\right), \tag{2.8}\]
where, for some given continuous functions \(F:\mathbb{R}^{2}\rightarrow\mathbb{R}\) and \(G:\mathbb{R}^{3}\rightarrow\mathbb{R}\), the objective functional \(J\) is given by
\[J\left(\mathcal{P},\xi;t_{0},w_{0}\right) = E_{\mathcal{P}}^{t_{0},w_{0}}\left[F\left(W\left(T;\mathcal{P}, \mathbf{Y}\right),\xi\right)+G\left(W\left(T;\mathcal{P},\mathbf{Y}\right),E_{ \mathcal{P}}^{t_{0},w_{0}}\left[W\left(T;\mathcal{P},\mathbf{Y}\right)\right],w_{0 },\xi\right)\right]. \tag{2.9}\]
Note that the expectations \(E^{t_{0},w_{0}}\left[\cdot\right]\) in (2.9) are taken over \(\mathbf{Y}\), given initial wealth \(W\left(t_{0}^{-}\right)=w_{0}\), control \(\mathcal{P}\in\mathcal{A}\) and auxiliary variable \(\xi\in\mathbb{R}\). In addition to the assumption of continuity of \(F\) and \(G\), we make only minimal assumptions regarding the exact properties of \(J\): we require that \(\xi\to F\left(\cdot,\xi\right)\) and \(\xi\to G\left(\cdot,\cdot,w_{0},\xi\right)\) are convex for all admissible controls \(\mathcal{P}\in\mathcal{A}\), and we make the standard assumption (see for example Bjork et al. (2021)) that an optimal control \(\mathcal{P}^{*}\in\mathcal{A}\) exists.
For illustrative and ground truth analysis purposes, we consider a number of examples of problems of the form (2.8)-(2.9).
As noted in the Introduction, the simplest examples of problems of the form (2.8) arise in the special case where \(G\equiv 0\) and there is no outer optimization problem over \(\xi\), such as in the case of standard utility maximization problems. As concrete examples of this class of objective functions, we will consider the quadratic target minimization (or quadratic utility) described in, for example, Vigna (2014); Zhou and Li (2000),
\[\left(DSQ\left(\gamma\right)\right):\qquad\inf_{\mathcal{P}\in\mathcal{A}} \left\{E^{t_{0},w_{0}}\left[\left(W\left(T;\mathcal{P},\boldsymbol{Y}\right)- \gamma\right)^{2}\right]\right\},\qquad\gamma>0, \tag{2.10}\]
as well as the (closely-related) one-sided quadratic loss minimization used in for example Dang and Forsyth (2016); Li and Forsyth (2019),
\[\left(OSQ\left(\gamma\right)\right):\qquad\inf_{\mathcal{P}\in\mathcal{A}} \left\{E^{t_{0},w_{0}}\left[\left(\min\left\{W\left(T;\mathcal{P}, \boldsymbol{Y}\right)-\gamma,0\right\}\right)^{2}-\epsilon\cdot W\left(T; \mathcal{P},\boldsymbol{Y}\right)\right]\right\},\qquad\gamma>0. \tag{2.11}\]
The term \(\epsilon W(\cdot)\) in equation (2.11) ensures that the problem remains well-posed2 in the event that \(W(t)\gg\gamma\). Observe that problems of the form (2.10) or (2.11) are separable in the sense of dynamic programming, so that the resulting optimal control is time-consistent.
Footnote 2: Although this is a mathematical necessity (see e.g. (Li and Forsyth, 2019)), in practice, if we use a very small value of \(\epsilon\), then this has no perceptible effect on the summary statistics. In the numerical results of Section 5, we use \(\epsilon=10^{-6}\); see Appendix B for a discussion.
As a classical example of the case where \(G\) is nonlinear and the objective functional (2.9) is not separable in the sense of dynamic programming, we consider the mean-variance (MV) objective with scalarization or risk-aversion parameter \(\rho>0\) (see for example Bjork et al. (2017)),
\[\left(MV\left(\rho\right)\right): \sup_{\mathcal{P}\in\mathcal{A}}\left\{E^{t_{0},w_{0}}\left[W \left(T;\mathcal{P},\boldsymbol{Y}\right)\right]-\rho\cdot Var^{t_{0},w_{0}} \left[W\left(T;\mathcal{P},\boldsymbol{Y}\right)\right]\right\},\qquad\rho>0. \tag{2.12}\] \[= \sup_{\mathcal{P}\in\mathcal{A}}E_{\mathcal{P}}^{t_{0},w_{0}} \left[W\left(T;\mathcal{P},\boldsymbol{Y}\right)-\rho\cdot\left(W\left(T; \mathcal{P},\boldsymbol{Y}\right)-E_{\mathcal{P}}^{t_{0},w_{0}}\left[W\left(T ;\mathcal{P},\boldsymbol{Y}\right)\right]\right)^{2}\right].\]
Note that issues relating to the time-inconsistency of the optimal control of (2.12) are discussed in Remark 2.1 below, along with the relationship between (2.10) and (2.12).
As an example of a problem involving both the inner and outer optimization in (2.8), we consider the Mean - Conditional Value-at-Risk (or Mean-CVaR) problem, subsequently abbreviated as the MCV problem. First, as a measure of tail risk, the CVaR at level \(\alpha\), or \(\alpha\)-CVaR, is the expected value of the worst \(\alpha\) percent of wealth outcomes, with typical values being \(\alpha\in\{1\%,5\%\}\). As in Forsyth (2020), a _larger_ value of the CVaR is preferable to a smaller value, since our definition of \(\alpha\)-CVaR is formulated in terms of the terminal _wealth_, not in terms of the _loss_. Informally, if the distribution of terminal wealth \(W\left(T\right)\) is continuous with PDF \(\hat{\psi}\), then the \(\alpha\)-CVaR in this case is given by
\[\text{CVAR}_{\alpha} = \frac{1}{\alpha}\int_{-\infty}^{w_{\alpha}^{*}}W\left(T\right) \cdot\hat{\psi}\left(W\left(T\right)\right)\cdot dW\left(T\right), \tag{2.13}\]
where \(w_{\alpha}^{*}\) is the corresponding Value-at-Risk (VaR) at level \(\alpha\) defined such that \(\int_{-\infty}^{w_{\alpha}^{*}}\hat{\psi}\left(W\left(T\right)\right)dW\left(T \right)=\alpha\). We follow for example Forsyth (2020) in defining the MCV problem with scalarization parameter \(\rho>0\) formally as
\[\sup_{\mathcal{P}\in\mathcal{A}}\left\{\rho\cdot E^{t_{0},w_{0}}\left[W\left(T \right)\right]+\text{CVAR}_{\alpha}\right\},\qquad\rho>0. \tag{2.14}\]
However, instead of (2.13), we use the definition of CVaR from Rockafellar and Uryasev (2002) that is applicable to more general terminal wealth distributions, so that the MCV problem definition used subsequently aligns with the definition given in Forsyth (2020); Miller and Yang (2017),
\[\left(MCV\left(\rho\right)\right):\qquad\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P }\in\mathcal{A}}E^{t_{0},w_{0}}\left[-\rho\cdot W\left(T;\mathcal{P}, \boldsymbol{Y}\right)-\xi+\frac{1}{\alpha}\max\left(\xi-W\left(T;\mathcal{P}, \boldsymbol{Y}\right),0\right)\right],\quad\rho>0. \tag{2.15}\]
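To illustrate how (2.15) is evaluated in practice, the sketch below computes the sample version of the MCV objective for given terminal wealth outcomes; it assumes the strategy, and hence the samples `WT`, is held fixed, in which case the inner minimizer over \(\xi\) is the empirical \(\alpha\)-quantile of wealth (the VaR), consistent with Rockafellar and Uryasev (2002). The function names are our own.

```python
import numpy as np

def mcv_objective(WT, xi, rho, alpha=0.05):
    """Sample estimate of the Mean-CVaR objective (2.15) at a candidate xi."""
    return np.mean(-rho * WT - xi + np.maximum(xi - WT, 0.0) / alpha)

def mcv_objective_optimized(WT, rho, alpha=0.05):
    """For a fixed strategy, the minimizing xi is the empirical alpha-quantile
    of terminal wealth (the VaR), so the inner infimum can be evaluated directly."""
    xi_star = np.quantile(WT, alpha)
    return mcv_objective(WT, xi_star, rho, alpha)
```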
Finally, as noted in the Introduction, we apply the ideas underlying the Sortino ratio, where the variance of returns below the mean is penalized, to formulate the following objective function for dynamic trading,
\[\left(MSemiV\left(\rho\right)\right):\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{ \mathcal{P}}^{t_{0},w_{0}}\left[W\left(T;\mathcal{P},\mathbf{Y}\right)-\rho\cdot \left(\min\left\{W\left(T;\mathcal{P},\mathbf{Y}\right)-E_{\mathcal{P}}^{t_{0},w_{0 }}\left[W\left(T;\mathcal{P},\mathbf{Y}\right)\right],0\right\}\right)^{2}\right] \right\}, \tag{2.16}\]
which we refer to as the "Mean- Semi-variance" problem, with scalarization (or risk-aversion) parameter \(\rho>0\).3
Footnote 3: In continuous time, the unconstrained Mean-Semi-variance problem is ill-posed (Jin et al. (2005)). However, we will impose bounded leverage constraints, which is, of course, a realistic condition. This makes problem \(\left(MSemiV\left(\rho\right)\right)\) well posed.
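For concreteness, a sample estimator of the \(MSemiV\left(\rho\right)\) objective (2.16), written as the quantity to be maximized for fixed terminal wealth samples, might look as follows; note that the sample mean appears inside the penalty term, which reflects why the problem is not separable in the sense of dynamic programming.

```python
import numpy as np

def msemiv_objective(WT, rho):
    """Sample estimate of the Mean-Semi-variance objective (2.16), to be maximized."""
    downside = np.minimum(WT - WT.mean(), 0.0)  # only outcomes below the sample mean
    return np.mean(WT - rho * downside ** 2)
```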
The following remark discusses issues relating to the possible time-inconsistency of the optimal controls of (2.12), (2.15) and (2.16).
**Remark 2.1**.: (Time-_in_consistency and induced time-consistency) Formally, the optimal controls for problems \(MV\left(\rho\right)\), \(MCV\left(\rho\right)\) and \(MSemiV\left(\rho\right)\) are not time-consistent, but instead are of the pre-commitment type (see Basak and Chabakauri (2010); Bjork and Murgoci (2014); Forsyth (2020)). However, in many cases, there exists an induced time consistent problem formulation which has the same controls at time zero as the pre-commitment problem (see Forsyth (2020); Strub et al. (2019, 2019)).
As a concrete example of induced time-consistency, the embedding result of Li and Ng (2000); Zhou and Li (2000) establishes that the \(DSQ\left(\gamma\right)\) objective is the induced time-consistent objective function associated with the \(MV\left(\rho\right)\) problem, which is a result that we exploit for ground truth analysis purposes in Section 5.
Similarly, there is an induced time consistent objective function for the Mean-CVAR problem \(MCV\left(\rho\right)\) in (2.15) - see Forsyth (2020).
Consequently, when we refer to a strategy as optimal, for either the Mean-CVAR \(\left(MCV\left(\rho\right)\right)\) or Mean-Variance \(\left(MV\left(\rho\right)\right)\) problems, this will be understood to mean that at any \(t>t_{0}\), the investor follows the associated induced time-consistent strategy rather than a pre-commitment strategy.
In the Mean-Semi-variance \(\left(MSemiV\left(\rho\right)\right)\) case as per (2.16), there is no obvious induced time consistent objective function. In this case, we seek the pre-commitment policy.
For a detailed discussion of the many subtle issues involved in the case of time-inconsistency, induced time-consistency, and equilibrium controls, see for example Bjork et al. (2021); Bjork and Murgoci (2014); Forsyth (2020); Strub et al. (2019, 2019); Vigna (2020, 2022).
## 3 Neural network approach
In this section, we provide an overview of the neural network (NN) approach. Additional technical details and practical considerations are discussed in Appendices A and B, while the theoretical justification via convergence analysis will be discussed in Section 4 (and Appendix B).
Recall from (2.2) that \(\mathbf{X}\left(t_{m}\right)\in\mathbb{R}^{\eta_{X}}\) denotes the information taken into account in determining the investment strategy (2.2) at rebalancing time \(t_{m}\). Using the initial experimental results of Li and Forsyth (2019) and the analytical results of Van Staden et al. (2022b) applied to this setting, we assume that \(\mathbf{X}\left(t_{m}\right)\) includes at least the wealth available for investment at time \(t_{m}\), so that
\[W\left(t_{m}^{+};\mathcal{P},\mathbf{Y}\right)\coloneqq W\left(t_{m}^{-};\mathcal{ P},\mathbf{Y}\right)+q\left(t_{m}\right)\ \ \in\ \ \mathbf{X}\left(t_{m}\right),\qquad\forall t_{m}\in\mathcal{T}. \tag{3.1}\]
However, we emphasize that \(\mathbf{X}\left(t_{m}\right)\) may include additional variables in different settings. For example, in non-Markovian settings or in the case of certain solution approaches involving auxiliary variables, it is natural to "lift the state space" by including additional quantities in \(\mathbf{X}\) such as relevant historical quantities related to market variables, or other auxiliary variables - see for example Forsyth (2020); Miller and Yang (2017); Tsang and Wong (2020).
Let \(\mathcal{D}_{\mathbf{\phi}}\subseteq\mathbb{R}^{\eta_{X}+1}\) be the set such that \(\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)\in\mathcal{D}_{\mathbf{\phi}}\) for all \(t_{m}\in\mathcal{T}\). Let \(C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\) denote the set of all continuous functions from \(\mathcal{D}_{\mathbf{\phi}}\) to \(\mathcal{Z}\subset\mathbb{R}^{N_{a}}\) (see (2.3)). We will use the notation \(\mathbf{X}^{\ast}\) to denote the information taken into account by the optimal control, since in the simplest case implied by (3.1), we simply have \(\mathbf{X}^{\ast}=W^{\ast}\), where \(W^{\ast}\) denotes the wealth under the optimal strategy. We make the following assumption.
**Assumption 3.1**.: _(Properties of the optimal control) Considering the general form of the problem (2.8), we assume that there exists an optimal feedback control \(\mathcal{P}^{\ast}\in\mathcal{A}\). Specifically, we assume that at each rebalancing time \(t_{m}\in\mathcal{T}\), the time \(t_{m}\) itself together with the information vector under optimal behavior \(\mathbf{X}^{\ast}\left(t_{m}\right)\), which includes at least the wealth \(W^{\ast}\left(t_{m}^{+}\right)\) available for investment (see (3.1)), are sufficient to fully determine the optimal asset allocation \(\mathbf{p}_{m}^{\ast}\left(t_{m},\mathbf{X}^{\ast}\left(t_{m}\right)\right)\)._
_Furthermore, we assume that there exists a continuous function \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\) such that \(\mathbf{p}^{*}_{m}\left(t_{m},\mathbf{X}^{*}\left(t_{m}\right)\right)=\mathbf{p}^{*}\left(t _{m},\mathbf{X}^{*}\left(t_{m}\right)\right)\) for all \(t_{m}\in\mathcal{T}\), so that the optimal control \(\mathcal{P}^{*}\) can be expressed as_
\[\mathcal{P}^{*} = \left\{\mathbf{p}^{*}\left(t_{m},\mathbf{X}^{*}\left(t_{m}\right)\right) :\forall t_{m}\in\mathcal{T}\right\},\quad\text{where}\quad\mathbf{p}^{*}\in C \left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right). \tag{3.2}\]
We make the following observations regarding Assumption 3.1:
* Continuity of \(\mathbf{p}^{*}\) in space _and time_: While assuming the optimal control is a continuous map in the state space \(\mathbf{X}\) is fairly standard in the literature, especially in the context of using neural network approximations (see for example Han and Weinan (2016); Hure et al. (2021); Tsang and Wong (2020)), the assumption of continuity in time in (3.2) is therefore worth emphasizing. This assumption enforces the requirement that in the limit of continuous rebalancing (i.e. when \(\Delta t\to 0\)), the control remains a continuous function of time, which is a practical requirement for any reasonable investment policy. In particular, this ensures that the asset allocation retains its smooth behavior as the number of rebalancing events in \(\left[0,T\right]\) is increased, which we consider a fundamental requirement ensuring that the resulting investment strategy is reasonable. In addition, in Section 5 we demonstrate how the known theoretical solution to a problem assuming continuous rebalancing (\(\Delta t\to 0\)) can be approximated very well using \(\Delta t\gg 0\) in the NN approach, even though the resulting NN approximation is only truly optimal in the case of \(\Delta t\gg 0\).
* The control is a _single_ function for _all_ rebalancing times; note that the function \(\mathbf{p}^{*}\) is not subscripted by time. If the portfolio is rebalanced only at discrete time intervals, the investment strategy can be found (as suggested in (3.2)) by evaluating this continuous function at discrete time intervals, i.e. \(\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)\rightarrow\mathbf{p}^{*}\left(t_{m}, \mathbf{X}\left(t_{m}\right)\right)=\left(p_{i}^{*}\left(t_{m},\mathbf{X}\left(t_{m} \right)\right):i=1,...,N_{a}\right)\), for all \(t_{m}\in\mathcal{T}\). We discuss below how we solve for this (single) function directly, without resorting to dynamic programming, which avoids not only the challenge of error propagation due to value iteration over multiple timesteps, but also avoids solving for the high-dimensional conditional expectation (also termed the performance criteria by Oksendal and Sulem (2019)) if we are only interested in the relatively low-dimensional optimal control (see for example Van Staden et al. (2022b)).
These observations ultimately suggest the NN approach discussed below, while the soundness of Assumption 3.1 is experimentally confirmed in the ground truth results presented in Section 5.
Given Assumption 3.1 and in particular (3.2), we therefore limit our consideration to controls of the form
\[\mathcal{P} = \left\{\mathbf{p}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right):\forall t _{m}\in\mathcal{T}\right\},\quad\text{for some}\quad\mathbf{p}\in C\left( \mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right). \tag{3.3}\]
To simplify notation, we identify an arbitrary control \(\mathcal{P}\) of the form (3.3) with its associated function \(\mathbf{p}=\left(p_{i}:i=1,...,N_{a}\right)\in C\left(\mathcal{D}_{\mathbf{\phi}}, \mathcal{Z}\right)\), so that the objective functional (2.9) is written as
\[J\left(\mathbf{p},\xi;t_{0},w_{0}\right) = E^{t_{0},w_{0}}\left[F\left(W\left(T;\mathbf{p},\mathbf{Y}\right),\xi \right)+G\left(W\left(T\right),E^{t_{0},w_{0}}\left[W\left(T;\mathbf{p},\mathbf{Y} \right)\right],w_{0},\xi\right)\right]. \tag{3.4}\]
In (3.4), \(W\left(\cdot;\mathbf{p},\mathbf{Y}\right)\) denotes the controlled wealth process using a control of the form (3.3), so that the wealth dynamics (2.7) for \(t_{m}\in\mathcal{T}\) (recall \(t_{N_{rb}}^{-}=T\)) now becomes
\[W\left(t_{m+1}^{-};\mathbf{p},\mathbf{Y}\right) = \left[W\left(t_{m}^{-};\mathbf{p},\mathbf{Y}\right)+q\left(t_{m}\right) \right]\cdot\sum_{i=1}^{N_{a}}p_{i}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right) \cdot Y_{i}\left(t_{m}\right). \tag{3.5}\]
Therefore, using Assumption 3.1 and (3.4)-(3.5), problem (2.8) is expressed as
\[V\left(t_{0},w_{0}\right) = \inf_{\xi\in\mathbb{R}}\inf_{\mathbf{p}\in C\left(\mathcal{D}_{\mathbf{ \phi}},\mathcal{Z}\right)}J\left(\mathbf{p},\xi;t_{0},w_{0}\right). \tag{3.6}\]
We now provide a brief overview of the proposed methodology to solve problems of the form (3.6). This consists of two steps discussed in the following subsections, namely (i) the NN approximation to the control, and (ii) computational estimate of the optimal control.
### Step 1: NN approximation to control
Let \(n\in\mathbb{N}\). Consider a fully-connected, feedforward NN \(\boldsymbol{f}_{n}\) with parameter vector \(\boldsymbol{\theta}_{n}\in\mathbb{R}^{\nu_{n}}\) and a fixed number \(\mathcal{L}^{h}\geq 1\) of hidden layers, where each hidden layer contains \(\hbar\left(n\right)\in\mathbb{N}\) nodes. The NN has \(\left(\eta_{X}+1\right)\) input nodes, mapping feature (input) vectors of the form \(\boldsymbol{\phi}\left(t\right)=\left(t,\boldsymbol{X}\left(t\right)\right) \in\mathcal{D}_{\boldsymbol{\phi}}\) to \(N_{a}\) output nodes. For a more detailed introduction to neural networks, see for example Goodfellow et al. (2016).
Additional technical and practical details can be found in Appendices A and B. For this discussion, we simply note that the index \(n\in\mathbb{N}\) is used for the purposes of the analytical results and convergence analysis, where we fix a choice of \(\mathcal{L}^{h}\geq 1\) while \(\hbar\left(n\right),n\in\mathbb{N}\) is assumed to be a monotonically increasing sequence such that \(\lim_{n\rightarrow\infty}\hbar\left(n\right)=\infty\) (see Section 4 and Appendix A). However, for practical implementation, a fixed value of \(\hbar\left(n\right)\in\mathbb{N}\) is chosen (along with \(\mathcal{L}^{h}\geq 1\)) to ensure the NN has sufficient depth and complexity to solve the problem under consideration (see Appendix B).
Any NN considered is constructed such that \(\boldsymbol{f}_{n}:\mathcal{D}_{\boldsymbol{\phi}}\rightarrow\mathcal{Z}\subset \mathbb{R}^{N_{a}}\). In other words, the values of the \(N_{a}\) outputs are automatically in the set \(\mathcal{Z}\) defined in (2.3) for any \(\boldsymbol{\phi}\in\mathcal{D}_{\boldsymbol{\phi}}\),
\[\boldsymbol{f}_{n}\left(\boldsymbol{\phi}\left(t\right); \boldsymbol{\theta}_{n}\right) = \left(f_{n,i}\left(\boldsymbol{\phi}\left(t\right);\boldsymbol{ \theta}_{n}\right):i=1,...,N_{a}\right)\in\mathcal{Z}. \tag{3.7}\]
As a result, the outputs of the NN \(\boldsymbol{f}_{n}\) in (3.7) can be interpreted as portfolio weights satisfying the required investment constraints. While a more detailed discussion of the structure can be found in Assumption A.1 in Appendix A, we summarize some key aspects of the NN structure illustrated in Figure 3.1 (a code sketch follows the list below):
1. We emphasize that the rebalancing time is an _input_ into the NN as per the feature vector \(\boldsymbol{\phi}\left(t\right)=\left(t,\boldsymbol{X}\left(t\right)\right) \in\mathcal{D}_{\boldsymbol{\phi}}\), so that the NN parameter vector \(\boldsymbol{\theta}_{n}\) itself does not depend on time.
2. While we assume sigmoid activations for the hidden nodes for concreteness and convenience (see Assumption A.1), any of the commonly-used activation functions can be implemented with only minor modifications to the technical results presented in Section 4.
3. Since we are illustrating the approach using the particular form of \(\mathcal{Z}\) in (2.3) because of its wide applicability (no short-selling and no leverage), a softmax output layer is used to ensure the NN output remains in \(\mathcal{Z}\subset\mathbb{R}^{N_{a}}\) for any \(\boldsymbol{\phi}\left(t\right)\) (see (3.7)). However, different admissible control set formulations can be handled without difficulty4.
Footnote 4: For example, position limits and limited leverage can be introduced using minor modifications to the output layer. Perhaps the only substantial challenge is offered by unrealistic investment scenarios, such as insisting that trading should continue in the event of bankruptcy, in which case consideration should be given to the possibility of wealth being identically zero or negative.
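The following PyTorch sketch illustrates a network with this structure; the class name and layer sizes are our own illustrative choices (Section 5 uses at most two hidden layers of 8 nodes), and the softmax output layer enforces (3.7), i.e. non-negative weights summing to one, by construction.

```python
import torch
import torch.nn as nn

class AllocationNN(nn.Module):
    """Feedforward NN f_n of (3.7): input phi(t) = (t, X(t)), output weights in Z."""

    def __init__(self, n_features, n_assets, n_hidden=8, n_layers=2):
        super().__init__()
        layers, width_in = [], n_features
        for _ in range(n_layers):
            layers += [nn.Linear(width_in, n_hidden), nn.Sigmoid()]  # sigmoid hidden activations
            width_in = n_hidden
        layers.append(nn.Linear(width_in, n_assets))
        self.net = nn.Sequential(*layers)

    def forward(self, phi):                          # phi: (batch, n_features)
        return torch.softmax(self.net(phi), dim=-1)  # rows sum to 1, entries >= 0

# e.g. with the minimal features phi = (t, W): model = AllocationNN(n_features=2, n_assets=2)
```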
For some fixed value of the index \(n\in\mathbb{N}\), let \(\mathcal{N}_{n}\) denote the set of NNs constructed in the same way as \(\boldsymbol{f}_{n}\) for the fixed and given values of \(\mathcal{L}^{h}\) and \(\hbar\left(n\right)\). While a formal definition of the set \(\mathcal{N}_{n}\) is provided in Appendix A, here we simply note that each NN \(\boldsymbol{f}_{n}\left(\cdot;\boldsymbol{\theta}_{n}\right)\in\mathcal{N}_{n}\) only differs in terms of the parameter values constituting its parameter vector \(\boldsymbol{\theta}_{n}\) (i.e. for a fixed \(n\), each \(\boldsymbol{f}_{n}\in\mathcal{N}_{n}\) has the same number of hidden layers \(\mathcal{L}^{h}\), hidden nodes \(\hbar\left(n\right)\), activation functions etc.).
Figure 3.1: Illustration of the structure of the NN as per (3.7). Additional construction and implementation details can be found in Appendix A.
Observing that \(\mathcal{N}_{n}\subset C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\), our first step is to approximate (3.6) by performing the optimization over \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) instead. In other words, we approximate the control \(\mathbf{p}\) by a neural network \(\mathbf{f}_{n}\in\mathcal{N}_{n}\),
\[\mathbf{p}\left(\mathbf{\phi}\left(t\right)\right) \simeq \mathbf{f}_{n}\left(\mathbf{\phi}\left(t\right);\mathbf{\theta}_{n}\right), \qquad\text{where}\quad\mathbf{\phi}\left(t\right)=\left(t,\mathbf{X}\left(t\right) \right),\mathbf{p}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right),\mathbf{f}_{n }\in\mathcal{N}_{n}. \tag{3.8}\]
We identify the NN \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\) with its parameter vector \(\mathbf{\theta}_{n}\), so that the (approximate) objective functional using approximation (3.8) is written as
\[J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right) = E^{t_{0},w_{0}}\left[F\left(W\left(T;\mathbf{\theta}_{n},\mathbf{Y} \right),\xi\right)+G\left(W\left(T;\mathbf{\theta}_{n},\mathbf{Y}\right),E^{t_{0},w_{ 0}}\left[W\left(T;\mathbf{\theta}_{n},\mathbf{Y}\right)\right],w_{0},\xi\right)\, \right]. \tag{3.9}\]
Combining (3.7) and (3.8), the wealth dynamics (3.5) is expressed as
\[W\left(t_{m+1}^{-};\mathbf{\theta}_{n},\mathbf{Y}\right) = \left[W\left(t_{m}^{-};\mathbf{\theta}_{n},\mathbf{Y}\right)+q\left(t_{m }\right)\right]\cdot\sum_{i=1}^{N_{a}}f_{n,i}\left(\mathbf{\phi}\left(t_{m} \right);\mathbf{\theta}_{n}\right)\cdot Y_{i}\left(t_{m}\right),\quad m=0,...,N_{ rb}-1. \tag{3.10}\]
Using (3.8) and (3.9), for fixed and given values of \(\mathcal{L}^{h}\) and \(\hbar\left(n\right)\), we therefore approximate problem (3.6) by
\[V_{n}\left(t_{0},w_{0}\right) = \inf_{\xi\in\mathbb{R}}\inf_{\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n} \right)\in\mathcal{N}_{n}}J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right) \tag{3.11}\] \[= \inf_{\xi\in\mathbb{R}}\inf_{\mathbf{\theta}_{n}\in\mathbb{R}^{\nu_{n}}}J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right)\] (3.12) \[= \inf_{\left(\mathbf{\theta}_{n},\xi\right)\in\mathbb{R}^{\nu_{n}+1}}J_{n} \left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right).\]
We highlight that the optimization in (3.12) is unconstrained since, by construction, each NN \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) always generates outputs in \(\mathcal{Z}\).
The notation \(\left(\mathbf{\theta}_{n}^{*},\xi^{*}\right)\) and the associated NN \(\mathbf{f}_{n}^{*}\left(\cdot;\mathbf{\theta}_{n}^{*}\right)\in\mathcal{N}_{n}\) are subsequently used to denote the values achieving the optimum in (3.12) for given values of \(\mathcal{L}^{h}\) and \(\hbar\left(n\right)\). Note however that we do _not_ assume that the optimal control \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\) satisfying Assumption 3.1 is also a NN in \(\mathcal{N}_{n}\), since by the universal approximation results (see for example Hornik et al. (1989)), we would expect that the error in approximating (3.6) by (3.12) can be made arbitrarily small for sufficiently large \(\hbar\left(n\right)\). These claims are rigorously confirmed in Section 4 below, where we consider a sequence of NNs \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) obtained by letting \(\hbar\left(n\right)\rightarrow\infty\) as \(n\rightarrow\infty\) (for any fixed value of \(\mathcal{L}^{h}\geq 1\)).
### Step 2 : Computational estimate of the optimal control
In order to solve the approximation (3.12) to problem (3.6), we require estimates of the expectations in (3.9). For computational purposes, suppose we take as given a set \(\mathcal{Y}_{n}\in\mathbb{R}^{n\times N_{a}\times N_{rb}}\), consisting of \(n\in\mathbb{N}\) independent realizations of the paths of joint asset returns \(\mathbf{Y}\),
\[\mathcal{Y}_{n} = \left\{\mathbf{Y}^{\left(j\right)}:j=1,...,n\right\}. \tag{3.13}\]
We highlight that each entry \(\mathbf{Y}^{\left(j\right)}\in\mathcal{Y}_{n}\) consists of a _path_ of joint asset returns (see (2.6)); while we assume that the paths are independent, we do _not_ assume that the asset returns constituting each path are independent. In particular, both cross-correlations and autocorrelation structures within each path of returns are permitted.
Constructing the set \(\mathcal{Y}_{n}\) in practical applications is further discussed in Appendix B. In the numerical examples in Section 5, we use examples where \(\mathcal{Y}_{n}\) is generated using (i) Monte Carlo simulation of parametric asset dynamics, (ii) stationary block bootstrap resampling of empirical asset returns (Anarkulova et al. (2022)), and (iii) generative adversarial network (GAN)-generated synthetic asset returns (Yoon et al. (2019)). While we let \(n\rightarrow\infty\) in (3.13) for convergence analysis purposes, in practical applications (e.g. the results of Section 5) we simply choose \(n\) sufficiently large such that we are reasonably confident that reliable numerical estimates of the expectations in (3.9) are obtained.
Given a NN \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) and set \(\mathcal{Y}_{n}\), the wealth dynamics (3.10) along path \(\mathbf{Y}^{\left(j\right)}\in\mathcal{Y}_{n}\) is given by
\[W^{\left(j\right)}\left(t_{m+1}^{-};\mathbf{\theta}_{n},\mathcal{Y}_{n}\right) = \left[W^{\left(j\right)}\left(t_{m}^{-};\mathbf{\theta}_{n},\mathcal{Y} _{n}\right)+q\left(t_{m}\right)\right]\cdot\sum_{i=1}^{N_{a}}f_{n,i}\left(\mathbf{ \phi}^{\left(j\right)}\left(t_{m}\right);\mathbf{\theta}_{n}\right)\cdot Y_{i}^{ \left(j\right)}\left(t_{m}\right), \tag{3.14}\]
for \(m=0,...,N_{rb}-1\). We introduce the superscript \(\left(j\right)\) to emphasize that the quantities are obtained along the \(j\)th entry of (3.13).
The computational estimate of \(J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right)\) in (3.9) is then given by
\[\hat{J}_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0},\mathcal{Y}_{n}\right) = \frac{1}{n}\sum_{j=1}^{n}F\left(W^{\left(j\right)}\left(T;\mathbf{ \theta}_{n},\mathcal{Y}_{n}\right),\xi\right) \tag{3.15}\] \[+\frac{1}{n}\sum_{j=1}^{n}G\left(W^{\left(j\right)}\left(T;\mathbf{ \theta}_{n},\mathcal{Y}_{n}\right),\frac{1}{n}\sum_{k=1}^{n}W^{\left(k\right)} \left(T;\mathbf{\theta}_{n},\mathcal{Y}_{n}\right),w_{0},\xi\right),\]
so that we approximate problem (3.12) by
\[\hat{V}_{n}\left(t_{0},w_{0};\mathcal{Y}_{n}\right) = \inf_{\left(\mathbf{\theta}_{n},\xi\right)\in\mathbb{R}^{\nu_{n}+1}}\hat{ J}_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0},\mathcal{Y}_{n}\right). \tag{3.16}\]
The numerical solution of (3.16) can then proceed using standard (stochastic) gradient descent techniques (see Appendix B). For subsequent reference, let \(\left(\hat{\mathbf{\theta}}_{n}^{*},\hat{\xi}_{n}^{*}\right)\) denote the optimal point of (3.16) relative to the training data set \(\mathcal{Y}_{n}\).
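As a concrete illustration of this step, the sketch below trains the network sketched in Subsection 3.1 by stochastic gradient descent on the estimator (3.15), specialized to the \(DSQ\left(\gamma\right)\) objective (2.10) (so \(G\equiv 0\) and no auxiliary variable \(\xi\) is needed); the hyperparameters, the use of the rebalancing index as the time feature, and the Adam optimizer are our own illustrative choices. For objectives involving \(\xi\), such as \(MCV\left(\rho\right)\), \(\xi\) is simply included as one additional trainable scalar.

```python
import torch

def train_dsq(model, Y, q, w0, gamma, n_iter=2000, batch=2000, lr=1e-3):
    """Minimize the sample objective (3.15)-(3.16) for DSQ(gamma) in (2.10),
    i.e. F(W, xi) = (W - gamma)^2 and G = 0, by stochastic gradient descent.

    Y : tensor (n, N_a, N_rb) of gross returns 1 + R_i(t_m) from the training set.
    q : length-N_rb sequence of cash contributions q(t_m).
    """
    n, N_a, N_rb = Y.shape
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_iter):
        idx = torch.randint(0, n, (batch,))              # mini-batch of paths
        W = torch.full((batch,), float(w0))
        for m in range(N_rb):
            # minimal feature vector phi = (t, W); the rebalancing index m
            # stands in for t_m (an illustrative normalization choice)
            phi = torch.stack([torch.full((batch,), float(m)), W], dim=1)
            p = model(phi)                                   # weights via (3.8)
            W = (W + q[m]) * (p * Y[idx, :, m]).sum(dim=1)   # dynamics (3.14)
        loss = ((W - gamma) ** 2).mean()                     # estimator (3.15)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```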
In the case of sufficiently large datasets (3.13), in other words as \(n\rightarrow\infty\), we would expect that the error in approximating (3.12) by (3.16) can be made arbitrarily small. However, as noted above, as \(n\rightarrow\infty\) and the number of hidden nodes \(\hbar\left(n\right)\rightarrow\infty\) (for any fixed \(\mathcal{L}^{h}\geq 1\)), (3.12) is also expected to approximate (3.6) more accurately. As a result, we obtain the necessary intuition for establishing the convergence of (3.16) to (3.6) under suitable conditions, which is indeed confirmed in the results of Section 4.
Note that since \(\mathcal{Y}_{n}\) is used in (3.16) to obtain the optimal NN parameter vector \(\hat{\mathbf{\theta}}_{n}^{*}\), it is usually referred to as the NN "training" dataset (see for example Goodfellow et al. (2016)). Naturally, we can also construct a "testing" dataset \(\mathcal{Y}_{\hat{n}}^{test}\) of a similar structure to (3.13), but typically based on a different implied distribution of \(\mathbf{Y}\) as a result of different data generation assumptions. For example, \(\mathcal{Y}_{\hat{n}}^{test}\) can be obtained using a different time period of historical data for its construction, or different process parameters if there are parametric asset dynamics specified. The resulting approximation \(\mathbf{f}_{n}^{*}\left(\cdot;\hat{\mathbf{\theta}}_{n}^{*}\right)\in\mathcal{N}_{n}\) to the optimal control \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\) obtained using the training dataset in (3.16) can then be implemented on the testing dataset for out-of-sample testing or scenario analysis. This is discussed in more detail in Appendix B.
**Remark 3.1**.: (Extension to wealth path-dependent objectives) As noted in the Introduction, the NN approach as well as the convergence analysis of Section 4 can be extended to objective functions that depend on the entire wealth path \(\left\{W\left(t\right):t\in\mathcal{T}\right\}\) instead of just the terminal wealth \(W\left(T\right)\). This is achieved by simply modifying (3.15) appropriately and ensuring the wealth is assessed at the desired intervals using (3.14).
### Advantages of the NN approach
The following observations highlight some advantages of the proposed NN approach:
* The NN approach solves problems of the form (2.8) directly, without relying on dynamic programming. In particular, it can be applied to objective functionals that are not separable in the sense of dynamic programming, as we illustrate with the Mean - semi-variance problem (2.16) in Section 5.
* The proposed methodology is parsimonious, in the sense that the NN parameter vector remains independent of the number of rebalancing events. Specifically, the parameter vector \(\mathbf{\theta}_{n}\in\mathbb{R}^{\nu_{n}}\) of the NN does _not_ depend on the rebalancing time \(t_{m}\in\mathcal{T}\) or on the sample path \(j\). This contrasts our approach with the approaches of, for example, Han and Weinan (2016); Tsang and Wong (2020),5 where the number of parameters scales with the number of rebalancing events. As a result, the NN approach presented here can lead to potentially significant computational advantages in the cases of (i) long investment time horizons or (ii) short trading time horizons with a large number of portfolio rebalancing events.
Footnote 5: Tsang and Wong (2020) use a stacked NN approach, with a different NN at each rebalancing time.
A natural question might be whether the NNs in the proposed approach are required to be very deep, thus potentially exposing the training of the NN in (3.16) to the problem of vanishing or exploding gradients (see for example Goodfellow et al. (2016)). However, the ground truth results presented in Section 5 demonstrate that we obtain very accurate results with relatively shallow NNs (at most two hidden layers). We suspect this might be due to the optimal control being relatively low-dimensional compared to the high-dimensional objective functionals in portfolio optimization problems with discrete rebalancing (see Van Staden et al. (2022b) for a rigorous analysis), while in this NN approach the optimal control is obtained directly without requiring the solution of the (high-dimensional) objective functional at rebalancing times.
Note that these advantages also contrast the NN approach with Reinforcement Learning-based algorithms to solve portfolio optimization problems, as the following remark discusses.
**Remark 3.2**.: (Contrast of NN approach to Reinforcement Learning). Reinforcement learning (RL) algorithms (for example, Q-learning) rely fundamentally on the DP principle for the numerical solution of the portfolio optimization problem (see for example Gao et al. (2020); Lucarelli and Borrotti (2020); Park et al. (2020)). This requires, at each value iteration step, the approximation of a (high-dimensional) conditional expectation. As a result, RL is associated with the standard DP-related concerns regarding error amplification and the curse of dimensionality discussed above, and also cannot solve general problems of the form (1.1) without relying on, for example, an embedding approach to obtain an associated problem that can be solved using DP methods.
## 4 Convergence analysis
In this section, we present the theoretical justification of the proposed NN approach as outlined in Section 3. We confirm that the numerical solution of (3.16) can be used to approximate the theoretical solution of (3.6) arbitrarily well (in probability) under suitable conditions. This section only summarizes the key convergence results which are among the main contributions of this paper, while additional technical details and proofs are provided in Appendix A.
We start with Theorem 4.1, which confirms the validity of Step 1 (Subsection 3.1), namely using a NN \(\boldsymbol{f}_{n}\left(\cdot;\boldsymbol{\theta}_{n}\right)\in\mathcal{N}_{n}\) to approximate the control. Note that Theorem 4.1 relies on two assumptions, presented in Appendix A.2. We emphasize that Assumption A.3 is purely made for purposes of convenience, since its requirements can easily be relaxed with only minor modifications to the proofs (as discussed in Remark A.1), but at the cost of significant notational complexity and no additional insights. In contrast, Assumption A.2 is critical to establish the result of Theorem 4.1: it requires that the optimal investment strategy (or control) satisfies Assumption 3.1, places some basic requirements on \(F\) and \(G\), and assumes that the sequence of NNs \(\left\{\boldsymbol{f}_{n}\left(\cdot;\boldsymbol{\theta}_{n}\right),n\in \mathbb{N}\right\}\) is constructed such that the number of nodes in each hidden layer \(\hbar\left(n\right)\rightarrow\infty\) as \(n\rightarrow\infty\) (no assumptions are yet required regarding the exact form of \(n\rightarrow\hbar\left(n\right)\)).
**Theorem 4.1**.: _(Validity of NN approximation) We assume that Assumption A.2 holds, and for ease of exposition, we also assume that Assumption A.3 holds. Then the NN approximation to the control in (3.8) is valid, in the sense that \(V\left(t_{0},w_{0}\right)\) in (3.6) can be approximated arbitrarily well by \(V_{n}\left(t_{0},w_{0}\right)\) in (3.12) for sufficiently large \(n\), since_
\[\lim_{n\rightarrow\infty}\left|V_{n}\left(t_{0},w_{0}\right)-V \left(t_{0},w_{0}\right)\right| = \lim_{n\rightarrow\infty}\left|\inf_{\left(\boldsymbol{\theta}_{n },\xi\right)\in\mathbb{R}^{\nu_{n}+1}}J_{n}\left(\boldsymbol{\theta}_{n},\xi;t_{0},w _{0}\right)-\inf_{\xi\in\mathbb{R}}\inf_{\boldsymbol{p}\in C\left(\mathcal{D}_ {\boldsymbol{\phi}},\mathcal{Z}\right)}J\left(\boldsymbol{p},\xi;t_{0},w_{0} \right)\right| \tag{4.1}\] \[= 0.\]
Proof.: See Appendix A.3.
Having justified Step 1 of the approach, Theorem 4.2 now confirms the validity of Step 2 of the NN approach (see Subsection 3.2), namely using the computational estimate \(\boldsymbol{f}_{n}^{\star}\left(\cdot;\hat{\boldsymbol{\theta}}_{n}^{\star} \right)\in\mathcal{N}_{n}\) from (3.16) as an approximation
of the true optimal control \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\). Note that in addition to the assumptions of Theorem 4.1, Theorem 4.2 also requires Assumption A.4, which by necessity includes computational considerations such as the structure of the training dataset \(\mathcal{Y}_{n}\), the rate of divergence of the number of hidden nodes \(\hbar\left(n\right)\rightarrow\infty\) as \(n\rightarrow\infty\), and assumptions regarding the optimization algorithm used in solving problem (3.16).
**Theorem 4.2**.: _(Validity of computational estimate) We assume that Assumption A.2, Assumption A.3 and Assumption A.4 hold. Then the computational estimate to the optimal control (3.2) obtained using (3.8) and (3.16) is valid, in the sense that the value function \(V\left(t_{0},w_{0}\right)\) in (3.6) can be approximated arbitrarily well in probability by \(\hat{V}_{n}\left(t_{0},w_{0};\mathcal{Y}_{n}\right)\) in (3.16) for sufficiently large \(n\), since_
\[\left|\hat{V}_{n}\left(t_{0},w_{0};\mathcal{Y}_{n}\right)-V\left(t _{0},w_{0}\right)\right| = \left|\inf_{\left(\mathbf{\theta}_{n},\xi\right)\in\mathbb{R}^{\nu_{n}+1} }\hat{J}_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0},\mathcal{Y}_{n}\right)-\inf _{\xi\in\mathbb{R}}\inf_{\mathbf{p}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z} \right)}J\left(\mathbf{p},\xi;t_{0},w_{0}\right)\right| \tag{4.2}\] \[\overset{P}{\longrightarrow} 0,\qquad\text{as }n\rightarrow\infty.\]
Proof.: See Appendix A.3.
Taken together, Theorem 4.1 and Theorem 4.2 establish the theoretical validity of the NN approach to solve problems of the form (1.1).
## 5 Numerical results
In this section, we present numerical results obtained by implementing the NN approach described in Section 3. For illustrative purposes, the examples focus on investment objectives as outlined in Subsection 2.1, and we use three different data generation techniques for obtaining the training data set \(\mathcal{Y}_{n}\) of the NN: (i) parametric models for underlying asset returns, (ii) stationary block bootstrap resampling of empirical returns, and (iii) generative adversarial network (GAN)-generated synthetic asset returns.
### Closed-form solution: \(DSQ\left(\gamma\right)\) with continuous rebalancing
Under certain conditions, some of the optimization problems in Subsection 2.1 can be solved analytically. In this subsection, we demonstrate how a closed-form solution of problem \(DSQ\left(\gamma\right)\) in (2.10), assuming _continuous_ rebalancing (i.e. if we let \(\Delta t\to 0\) in (2.1)), can be approximated very accurately by a very simple NN (1 hidden layer, only 3 hidden nodes) trained with discrete rebalancing, i.e. with \(\Delta t\gg 0\) in (2.1). This simultaneously illustrates how parsimonious the NN approach is, as well as how useful the imposition of time-continuity is in ensuring the smooth behavior of the (approximate) optimal control.
In this subsection as well as in Subsection 5.2, we assume parametric dynamics for the underlying assets. For concreteness, we consider the scenario of two assets, \(N_{a}=2\), with unit values \(S_{i},i=1,2\), evolving according to the following dynamics,
\[\frac{dS_{i}\left(t\right)}{S_{i}\left(t^{-}\right)} = \left(\mu_{i}-\lambda_{i}\kappa_{i}^{\left(1\right)}\right)\cdot dt +\sigma_{i}\cdot dZ_{i}\left(t\right)+d\left(\sum_{k=1}^{\pi_{i}\left(t \right)}\left(\vartheta_{i}^{\left(k\right)}-1\right)\right),\qquad i=1,2. \tag{5.1}\]
Note that (5.1) takes the form of the standard jump diffusion models in finance - see e.g. Kou (2002); Merton (1976) for more information. For each asset \(i\) in (5.1), \(\mu_{i}\) and \(\sigma_{i}\) denote the (actual, not risk-neutral) drift and volatility, respectively, \(Z_{i}\) denotes a standard Brownian motion, \(\pi_{i}\left(t\right)\) denotes a Poisson process with intensity \(\lambda_{i}\geq 0\), and \(\vartheta_{i}^{\left(k\right)}\) are i.i.d. random variables with the same distribution as \(\vartheta_{i}\), which represents the jump multiplier of the \(i\)th risky asset with \(\kappa_{i}^{\left(1\right)}=\mathbb{E}\left[\vartheta_{i}-1\right]\) and \(\kappa_{i}^{\left(2\right)}=\mathbb{E}\left[\left(\vartheta_{i}-1\right)^{2}\right]\). While the Brownian motions can be correlated with \(dZ_{1}\left(t\right)dZ_{2}\left(t\right)=\rho_{1,2}\cdot dt\), we make the standard assumption that the jump components are independent (see for example Forsyth and Vetzal (2022)).
For this subsection only, we treat the first asset (\(i=1\) in (5.1)) as a "risk-free" asset, and set \(\mu_{1}=r>0\) where \(r\) is the risk-free rate, so that we have \(\lambda_{1}=0\), \(\sigma_{1}=0\), and \(Z_{1}\equiv 0\), while the second asset (\(i=2\) in (5.1)) is assumed to be a broad equity market index (the "risky asset"). In this scenario, if problem \(DSQ\left(\gamma\right)\) in (2.10) is solved subject to dynamics (5.1) together with the assumptions of costless continuous trading,
infinite leverage, and uninterrupted trading in the event of insolvency, then the \(DSQ\left(\gamma\right)\)-optimal control can be obtained analytically as
\[\boldsymbol{p}^{*}\left(t,W^{*}\left(t\right)\right)=[1-p_{2}^{*}\left(t,W^{*} \left(t\right)\right),p_{2}^{*}\left(t,W^{*}\left(t\right)\right)]\in\mathbb{R} ^{2}, \tag{5.2}\]
where the fraction of wealth in the broad stock market index (asset \(i=2\)) is given by (Zweng and Li (2011))
\[p_{2}^{*}\left(t,W^{*}\left(t\right)\right)=\frac{\mu_{2}-r}{\sigma_{2}^{2}+ \lambda_{2}\kappa_{2}^{(2)}}\cdot\left[\frac{\gamma e^{-r\left(T-t\right)}-W^{ *}\left(t\right)}{W^{*}\left(t\right)}\right],\qquad w_{0}<\gamma e^{-r\left(T -t\right)}. \tag{5.3}\]
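For later reference, the closed-form control (5.3) is straightforward to evaluate numerically; in the sketch below the argument names are our own, with `kappa2_2` denoting \(\kappa_{2}^{(2)}\).

```python
import numpy as np

def p2_star(t, W, mu2, r, sigma2, lam2, kappa2_2, gamma, T):
    """Closed-form DSQ(gamma)-optimal equity fraction (5.3); valid for
    W < gamma * exp(-r * (T - t)) under the idealized assumptions in the text."""
    target = gamma * np.exp(-r * (T - t))
    return (mu2 - r) / (sigma2 ** 2 + lam2 * kappa2_2) * (target - W) / W
```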
By design, the NN approach is not constructed to solve problems with unrealistic assumptions such as continuous trading, infinite leverage and short-selling, or trading in the event of bankruptcy, all of which are required to derive (5.3). However, if the implicit quadratic wealth target for the DSQ problem (i.e. the value of \(\gamma\), see Vigna (2014)) is not too aggressive, the analytical solution (5.3) does not require significant leverage or lead to a large probability of insolvency. In such a scenario, we can use the NN approach to approximate (5.3).
We select \(w_{0}=100\), \(T=1\) year and \(\gamma=138.33\), and simulate \(n=2.56\times 10^{6}\) paths of the underlying assets using (5.1) and parameters as in Table C.1 (Appendix C). On this set of paths, the true analytical solution (5.3) is implemented using 7,200 time steps. In contrast, for the NN approach, we use only 4 rebalancing events in \([0,T=1]\), and therefore aggregate the simulated returns in quarterly time intervals to construct the training data set \(\mathcal{Y}_{n}\). We consider only a very shallow NN, consisting of a single hidden layer and only 3 hidden nodes.
Figure 5.1 compares the resulting optimal investment strategies by illustrating the optimal proportion of wealth invested in the broad equity market index (asset \(i=2\)) as a function of time and wealth. We emphasize that the NN strategy in Figure 5.1(b) is not expected to be exactly identical to the analytical solution in Figure 5.1(a), since it is based on fundamentally different assumptions such as discrete rebalancing and investment constraints (2.5).
However, by requiring that the NN feature vector includes time, together with a NN parameter vector that does not depend on time, the proposed NN approach guarantees the smooth behavior in time of the NN approximation observed in Figure 5.1(b). As a result, Table 5.1 shows that the shallow NN strategy trained with \(\Delta t\gg 0\) results in a remarkably accurate and parsimonious approximation to the true analytical solution where \(\Delta t\to 0\), since we obtain nearly identical optimal terminal wealth distributions.
### Ground truth: Problem \(MCV\left(\rho\right)\)
In the case of the Mean-CVaR problem \(MCV\left(\rho\right)\) in (2.15), Forsyth and Vetzal (2022) obtain an MCV-optimal investment strategy subject to the same investment constraints as in Section 2 (namely discrete rebalancing, no short-selling or leverage allowed, and no trading in insolvency) using the partial (integro-)differential equation (PDE) approach of Forsyth (2020).
Figure 5.1: Closed-form solution - \(DSQ\left(\gamma\right)\) with continuous rebalancing: Optimal proportion of wealth invested in the broad equity market index as a function of time and wealth. The NN approximation is obtained for a specific initial wealth of \(w_{0}=100\), and only four rebalancing events in \([0,T]\).
For ground truth analysis purposes, we therefore consider the same investment scenario as in Forsyth and Vetzal (2022), where two underlying assets are considered, namely 30-day US T-bills and a broad equity market index (the CRSP VWD index) - see Appendix C for definitions. However, in contrast to the preceding section where one asset was taken as the risk-free asset, both assets are now assumed to evolve according to dynamics of the form (5.1), using the double-exponential Kou (2002) formulation for the jump distributions. The NN training data set is therefore constructed by simulating the same underlying dynamics. While further details regarding the context and motivation for the investment scenario can be found in Forsyth and Vetzal (2022), here we simply note that the scenario involves \(T=5\) years, quarterly rebalancing, a set of admissible strategies satisfying (2.5), and parameters for (5.1) as in Table C.2.
As discussed in Appendix B, the inherently higher complexity of the Mean-CVaR optimal control requires the NN to be deeper than in the case of the problem considered in Subsection 5.1. As a result, we consider approximating NNs with two hidden layers, each with 8 hidden nodes, while relatively large mini-batches of 2,000 paths were used in the stochastic gradient descent algorithm (see Appendix B) to ensure sufficiently accurate sampling of the tail of the returns distribution in selecting the descent direction at each step. Note that despite using a deeper NN, this NN structure is still very parsimonious and relatively shallow compared to the rebalancing time-dependent structures considered in for example Han and Weinan (2016), where a new set of parameters is introduced at each rebalancing event.
Table 5.2 compares the PDE results reported in Forsyth and Vetzal (2022) with the corresponding NN results. Note that the PDE optimal control was determined by solving a Hamilton-Jacobi-Bellman PDE numerically, and the statistics in Table 5.2 for the PDE-generated control were computed using \(n=2.56\times 10^{6}\) Monte Carlo simulations of the joint underlying asset dynamics, while the NN was trained on \(n=2.56\times 10^{6}\) independently simulated paths of the same underlying asset dynamics. While some variability of the results is therefore to be expected due to the underlying samples, the results in Table 5.2 demonstrate the robustness of the proposed NN approach.
### Ground truth: Problems \(MV\left(\rho\right)\) and \(DSQ\left(\gamma\right)\)
In this subsection, we demonstrate that if the investment objective (1.1) is separable in the sense of dynamic programming, the correct time-consistent optimal investment strategy is recovered, otherwise we obtain the correct pre-commitment (time-inconsistent) investment strategy.
To demonstrate this, the theoretical embedding result of Li and Ng (2000); Zhou and Li (2000), which establishes the equivalence of problems \(MV\left(\rho\right)\) and \(DSQ\left(\gamma\right)\) under fairly general conditions, can be exploited for ground truth analysis purposes as follows. Suppose we solved problems \(MV\left(\rho\right)\) and \(DSQ\left(\gamma\right)\) on the same underlying training data set.
| Solution approach | Rebalancing | \(W(T)\) 5th pct. | 20th pct. | 50th pct. | 80th pct. | 95th pct. |
|---|---|---|---|---|---|---|
| Closed-form solution | Continuous, \(\Delta t\to 0\) | 86.81 | 98.02 | 106.35 | 112.82 | 118.15 |
| Shallow NN approximation | Discrete, \(\Delta t=0.25\), total of \(N_{rb}=4\) only | 86.62 | 97.30 | 105.67 | 112.54 | 118.85 |

Table 5.1: Closed-form solution - \(DSQ\left(\gamma\right)\) with continuous rebalancing: Percentiles of the simulated (\(n=2.56\times 10^{6}\)) terminal wealth distributions obtained by implementing the optimal strategies in Figure 5.1. In both cases, a mean terminal wealth of 105 is obtained. Note that the NN approximation was obtained under the assumption of quarterly rebalancing only, no leverage or short-selling, and therefore no trading in insolvency.
| \(\rho\) | 5% CVaR (PDE) | 5% CVaR (NN) | \(E^{t_{0},w_{0}}\left[W\left(T\right)\right]\) (PDE) | \(E^{t_{0},w_{0}}\left[W\left(T\right)\right]\) (NN) | Value function (PDE) | Value function (NN) | % difference |
|---|---|---|---|---|---|---|---|
| 0.10 | 940.60 | 940.55 | 1069.19 | 1062.97 | 1047.52 | 1046.85 | -0.06% |
| 0.25 | 936.23 | 937.39 | 1090.89 | 1081.99 | 1208.95 | 1207.88 | -0.09% |
| 1.00 | 697.56 | 690.11 | 1437.73 | 1444.16 | 2135.29 | 2134.27 | -0.05% |
| 1.50 | 614.92 | 611.65 | 1508.10 | 1510.07 | 2877.07 | 2876.76 | -0.01% |

Table 5.2: Ground truth - problem \(MCV\left(\rho\right)\): The PDE results are obtained from Forsyth and Vetzal (2022) for selected points on the Mean-CVaR “efficient frontier”. The “Value function” column reports the value of the objective function (2.14) under the corresponding optimal control, while “% difference” reports the percentage difference in the reported value functions for the NN solution compared to the PDE solution.
We remind the reader that in the proposed NN approach, problem \(MV\left(\rho\right)\) can indeed be solved directly without difficulty, which is not possible in dynamic programming-based approaches. Then, considering the numerical results, there should be values of parameters \(\rho\equiv\tilde{\rho}\) and \(\gamma\equiv\tilde{\gamma}\) such that the optimal strategy of \(MV\left(\rho\equiv\tilde{\rho}\right)\) corresponds exactly to the optimal strategy of \(DSQ\left(\gamma\equiv\tilde{\gamma}\right)\), with a specific relationship holding between \(\tilde{\rho}\) and \(\tilde{\gamma}\). The NN approach can therefore enable us to numerically demonstrate the embedding result of Li and Ng (2000); Zhou and Li (2000) in a setting where the underlying asset dynamics are not explicitly specified and where multiple investment constraints are present. We start by recalling the embedding result.
**Proposition 5.1**.: _(Embedding result of Li and Ng (2000); Zhou and Li (2000)) Fix a value \(\tilde{\rho}>0\). If \(\mathcal{P}^{*}\in\mathcal{A}\) is the optimal control of problem \(MV\left(\rho\equiv\tilde{\rho}\right)\) in (2.12), then \(\mathcal{P}^{*}\) is also the optimal control for problem \(DSQ\left(\gamma=\tilde{\gamma}\right)\) in (2.10), provided that_
\[\tilde{\gamma} = \frac{1}{2\tilde{\rho}}+E^{t_{0},w_{0}}\left[W^{*}\left(T; \mathcal{P}^{*},\mathbf{Y}\right)\right]. \tag{5.4}\]
Proof.: See Li and Ng (2000); Zhou and Li (2000). We also highlight the alternative proof provided in Dang and Forsyth (2016), which shows that this result is valid for any admissible control set \(\mathcal{A}\).
Since (5.4) is valid for any admissible control set \(\mathcal{A}\), we consider a factor investing scenario where portfolios are constructed using popular long-only investable equity factor indices (Momentum, Value, Low Volatility, Size), a broad equity market index (the CRSP VWD index), 30-day T-bills and 10-year Treasury bonds (see Appendix C for definitions). For illustrative purposes in the case of an investor primarily concerned with long-run factor portfolio performance, we use a horizon of \(T=10\) years, \(w_{0}=120\), annual contributions of \(q\left(t_{m}\right)=12\), and annual rebalancing.
Given historical returns data for the underlying assets, we construct training and testing (out-of-sample) data sets for the NN, \(\mathcal{Y}_{n}\) and \(\mathcal{Y}_{n}^{test}\), respectively, using stationary block bootstrap resampling of empirical historical asset returns (see Appendix C), which is popular with practitioners (Anarkulova et al. (2022); Cavaglia et al. (2022); Cogneau and Zakalmouline (2013); Dichtl et al. (2016); Scott and Cavaglia (2017); Simonian and Martirosyan (2022)) and is designed to handle weakly stationary time series with serial dependence. See Ni et al. (2022) for a discussion concerning the probability of obtaining a repeated path in block bootstrap resampling (which is negligible for any realistic number of samples). Due to availability of historical data we use inflation-adjusted monthly empirical returns from 1963:07 to 2020:12. The training data set (\(n=10^{6}\)) is obtained using an expected block size of 6 months of joint returns from 1963:07 to 2009:12, while the testing data set (\(n=10^{6}\)) uses an expected block size of 3 months and returns from 2010:01 to 2020:12. We consider NNs with two hidden layers, each with only eight hidden nodes.
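A minimal sketch of the stationary block bootstrap used to construct \(\mathcal{Y}_{n}\) and \(\mathcal{Y}_{n}^{test}\) follows; blocks have geometrically distributed lengths with the specified expected block size, rows of joint returns are resampled together (preserving cross-asset dependence), and indexing wraps around the end of the sample. Function and argument names are our own, and details may differ from the exact procedure referenced in Appendix C.

```python
import numpy as np

def stationary_bootstrap(returns, n_paths, path_len, expected_block, seed=None):
    """Stationary block bootstrap: resample joint return rows in blocks whose
    lengths are geometric with mean expected_block (restart prob. p = 1/expected_block).

    returns : array (T_hist, N_a) of historical joint asset returns.
    Returns an array (n_paths, path_len, N_a).
    """
    rng = np.random.default_rng(seed)
    T_hist, N_a = returns.shape
    p = 1.0 / expected_block
    out = np.empty((n_paths, path_len, N_a))
    for j in range(n_paths):
        t = rng.integers(T_hist)
        for m in range(path_len):
            out[j, m] = returns[t]
            # with probability p start a new block; otherwise continue the
            # current block, wrapping around the end of the historical sample
            t = rng.integers(T_hist) if rng.random() < p else (t + 1) % T_hist
    return out
```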
Choosing two values of \(\tilde{\rho}>0\) to illustrate different levels of risk aversion (see Table 5.3), we solve problem \(MV\left(\rho=\tilde{\rho}\right)\) in (2.12) directly using the proposed approach to obtain the optimal investment strategy \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\). Note that since we consider a fixed NN structure in this setting rather than a sequence of NNs, we drop the subscript "\(n\)" in the notation \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\). Using this result together with (5.4), we can approximate the associated value of \(\tilde{\gamma}\) by
\[\tilde{\gamma} \simeq \frac{1}{2\tilde{\rho}}+\frac{1}{n}\sum_{j=1}^{n}W^{*\left(j \right)}\left(T;\hat{\mathbf{\theta}}_{mv}^{*},\mathcal{Y}_{n}\right), \tag{5.5}\]
and solve problem \(DSQ\left(\gamma=\tilde{\gamma}\right)\) independently using the proposed approach on the same training data set \(\mathcal{Y}_{n}\).
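In code, given terminal wealth samples under \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\) on the training data set, the estimate (5.5) is a one-liner (function name ours):

```python
import numpy as np

def gamma_tilde(rho_tilde, WT_mv):
    """Estimate gamma_tilde via (5.5) from terminal wealth samples generated
    under the MV(rho_tilde)-optimal strategy on the training data set."""
    return 1.0 / (2.0 * rho_tilde) + np.mean(WT_mv)
```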
According to Proposition 5.1, the resulting investment strategy \(\mathbf{f}\left(;\hat{\mathbf{\theta}}_{dsq}^{*}\right)\) should be (approximately) identical to the strategy \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\) if the proposed approach works as required. Note that the parameter vectors are expected to be different (i.e. \(\hat{\mathbf{\theta}}_{dsq}^{*}\neq\hat{\mathbf{\theta}}_{mv}^{*}\)) due to a variety of reasons (multiple local minima, optimization using SGD, etc.), but the resulting wealth distributions and asset allocation should agree, i.e. \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{dsq}^{*}\right)\simeq\mathbf{f}\left(\cdot; \hat{\mathbf{\theta}}_{mv}^{*}\right)\).
Figure 5.2 demonstrates the investment strategies \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\) and \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{dsq}^{*}\right)\) obtained by training the NNs on the same training data set using values of \(\tilde{\rho}=0.017\) and \(\tilde{\gamma}=429.647\), respectively. Note that the values \(\tilde{\rho}\) and \(\tilde{\gamma}\) are rounded to three decimal places, and Figure 5.2 corresponds to Results set 1 in Table 5.3. In this example, only four of the underlying candidate assets have non-zero investments, which is to be expected due to the high correlation between long-only equity factor indices.
Table 5.3 confirms that the associated optimal terminal wealth distributions of \(MV\left(\rho=\tilde{\rho}\right)\) and \(DSQ\left(\gamma=\tilde{\gamma}\right)\) indeed correspond, both in-sample (training data set) and out-of-sample (testing data set).
The proposed NN approach therefore clearly works as expected, in that we have demonstrated the result of Proposition 5.1 in a completely model-independent way, in a portfolio optimization setting where no known analytical solutions exist. In particular, we emphasize that no assumptions were made regarding parametric underlying asset dynamics; the results are entirely data-driven. As a result, we can interpret the preceding results as showing that the approach correctly recovers the time-inconsistent (or pre-commitment) strategy without difficulty if the objective is not separable in the sense of dynamic programming, such as in the case of the \(MV\left(\rho\right)\) problem, whereas if the objective is separable in the sense of dynamic programming, such as in the case of the \(DSQ\left(\gamma\right)\) problem, the approach correctly recovers the associated time-consistent strategy.
Figure 5.2: Ground truth - problems \(MV\left(\rho=\tilde{\rho}\right)\) and \(DSQ\left(\gamma=\tilde{\gamma}\right)\): investment strategies \(\boldsymbol{f}\left(\cdot;\hat{\boldsymbol{\theta}}_{mv}^{*}\right)\) and \(\boldsymbol{f}\left(\cdot;\hat{\boldsymbol{\theta}}_{dsq}^{*}\right)\) obtained by training the NNs using values of \(\tilde{\rho}=0.017\) and \(\tilde{\gamma}=429.647\) (rounded to three decimal places), respectively. Each figure shows the proportion of wealth invested in the asset as a function of the minimal NN features, namely time and available wealth. Note that the optimal strategies place zero investment in the broad market index and the Size factor.
\begin{table}
\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \cline{2-9} \multicolumn{1}{c||}{} & \multicolumn{4}{c||}{Results set 1: \(\tilde{\rho}=0.017\), \(\tilde{\gamma}=429.647\)} & \multicolumn{4}{c|}{Results set 2: \(\tilde{\rho}=0.0097\), \(\tilde{\gamma}=493.196\)} \\ \hline \(W\left(T\right)\) & \multicolumn{2}{c|}{Training data} & \multicolumn{2}{c||}{Testing data} & \multicolumn{2}{c|}{Training data} & \multicolumn{2}{c|}{Testing data} \\ \cline{2-9} distribution & MV & DSQ & MV & DSQ & MV & DSQ & MV & DSQ \\ \hline \hline Mean & 400.2 & 400.3 & 391.2 & 391.6 & 441.5 & 441.8 & 441.8 & 441.5 \\ \hline Stdev & 55.4 & 55.4 & 26.2 & 25.7 & 79.6 & 79.7 & 39.4 & 39.5 \\ \hline
5th percentile & 276.5 & 276.4 & 346.6 & 347.5 & 255.2 & 254.6 & 367.8 & 367.1 \\ \hline
25th percentile & 391.8 & 392.3 & 382.4 & 382.8 & 422.4 & 423.6 & 430.9 & 430.7 \\ \hline
50th percentile & 416.1 & 416.3 & 396.5 & 396.8 & 469.8 & 470.1 & 451.3 & 451.2 \\ \hline
75th percentile & 429.9 & 429.8 & 406.4 & 406.7 & 487.7 & 489.6 & 465.0 & 464.8 \\ \hline
95th percentile & 452.1 & 452.1 & 418.9 & 419.0 & 516.1 & 516.5 & 480.9 & 480.2 \\ \hline \end{tabular}
\end{table}
Table 5.3: Ground truth - problems \(MV\left(\rho=\tilde{\rho}\right)\) and \(DSQ\left(\gamma=\tilde{\gamma}\right)\): Terminal wealth results obtained using \(n=10^{6}\) joint paths for the underlying assets. Note that the values of \(\tilde{\rho}\) and \(\tilde{\gamma}\) are rounded to three decimal places.
### Mean - Semi-variance strategies
Having demonstrated the reliability of the results obtained using the proposed NN approach with the preceding ground truth analyses, we now consider the solution of the Mean - Semi-variance problem (2.16). To provide the necessary context to interpret the \(MSemiV\left(\rho\right)\)-optimal results, we compare the results of the optimal solutions of the \(MCV\left(\rho=\rho_{mcv}\right)\), \(MSemiV\left(\rho=\rho_{msv}\right)\), and \(OSQ\left(\gamma=\gamma_{osq}\right)\) problems, where the values of \(\rho_{mcv}\), \(\rho_{msv}\) and \(\gamma_{osq}\) are selected to obtain the same expected value of terminal wealth on the NN training data set. This is done since the MCV- and OSQ-optimal strategies have been analyzed in great detail (Dang and Forsyth (2016); Forsyth (2020)), and are therefore well understood. Note that since all three strategies are related to the maximization of mean terminal wealth while simultaneously minimizing some risk measure (which is implicitly done in the case of the OSQ problem, see Dang and Forsyth (2016)), it is natural to compare the strategies on the basis of equal expectation of terminal wealth.
To highlight the main qualitative features of the \(MSemiV\left(\rho\right)\)-optimal results, we consider a simple investment scenario of two assets, namely 30-day T-bills and a broad equity market index (the VWD index) - see Appendix C for definitions. We choose \(T=\)5 years, \(w_{0}=1000\), and zero contributions to demonstrate a lump sum investment scenario with quarterly rebalancing.
To illustrate the flexibility of the NN approach to underlying data generating assumptions, the NN training data sets are constructed using generative adversarial network (GAN)-generated synthetic asset returns obtained by implementing the TimeGAN algorithm proposed by Yoon et al. (2019). In more detail, using empirical monthly asset returns from 1926:01 to 2019:12 for the underlying assets (data sources are specified in Appendix C), the TimeGAN is trained with default parameters as in Yoon et al. (2019) using block sizes of 6 months to capture both correlation and serial correlation aspects of the (joint) time series.6 Once trained, the TimeGAN is then used to generate a set of \(n=10^{6}\) paths of synthetic asset returns, which is used as the training data set to train the NNs corresponding to the MCV, MSemiV and OSQ-optimal investment strategies.
Footnote 6: It appears that the actual code in Yoon et al. (2019) implements the following steps: (i) takes as input actual price data, (ii) forms rolling blocks of price data and (iii) forms a single synthetic price path (which is the same length as the original path) by randomly sampling (without replacement) from the set of rolling blocks. Step (iii) corresponds to the non-overlapping block bootstrap using a fixed block size. This should be contrasted with stationary block bootstrap resampling of Politis and Romano (1994). Step (i) does not make sense as input to a bootstrap technique, since the data set is about 10 years long, with an initial price of $50 and a final price of $1200. We therefore changed Step (i), so that all data was converted to returns prior to being used as input.
Figure 5.3 illustrates the resulting optimal investment strategies, and we observe that the MSemiV-optimal strategy is fundamentally different from the MCV and OSQ-optimal strategies, while featuring elements of both. Specifically, Figure 5.4, which illustrates the resulting optimal terminal wealth distributions (with the same expectation), demonstrates that the MSemiV strategy, like the MCV strategy, can offer better downside protection than the OSQ strategy, while the MSemiV strategy retains some of the qualitative elements of the OSQ distribution such as the left skew.
Having illustrated that the MSemiV problem can be solved in a dynamic trading setting using the proposed NN approach to obtain investment strategies that offer potentially valuable characteristics, we leave a more in-depth investigation of the properties and applications of MSemiV-optimal strategies for future work.
Figure 5.3: Optimal investment strategies for the \(MCV\left(\rho=\rho_{mcv}\right)\), \(MSemiV\left(\rho=\rho_{msv}\right)\), and \(OSQ\left(\gamma=\gamma_{osq}\right)\) strategies, obtaining identical expectation of terminal wealth on the training data set. Each figure shows the proportion of wealth invested in the broad equity market index as a function of the minimal NN features, namely time and available wealth.
## 6 Conclusion
In this paper, we presented a flexible NN approach, which does not rely on dynamic programming techniques, to solve a large class of dynamic portfolio optimization problems. In the proposed approach, a single optimization problem is solved, issues of instability and error propagation involved in estimating high-dimensional conditional expectations are avoided, and the resulting NN is parsimonious in the sense that the number of parameters does not scale with the number of rebalancing events.
We also presented theoretical convergence analysis results which show that the numerical solution obtained using the proposed approach can recover the optimal investment strategy, provided it exists, regardless of whether the resulting optimal investment strategy is time-consistent or (formally) time-inconsistent.
Numerical results confirmed the advantages of the NN approach, and showed that accurate results can be obtained in ground truth analyses in a variety of settings. The numerical results also highlighted that the approach remains agnostic as to the underlying data generating assumptions, so that for example empirical asset returns or synthetic asset returns can be used without difficulty.
We conclude by noting that the NN approach is not necessarily limited to portfolio optimization problems such as those encountered during the accumulation phase of pension funds, and could be extended to address the significantly more challenging problems encountered during the decumulation phase of defined contribution pension funds (see for example Forsyth (2022)). We leave this extension for future work.
## 7 Declarations
The authors have no competing interests to declare that are relevant to the content of this article. P.A. Forsyth's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2017-03760.
|
2308.16005 | Hybrid Quantum Neural Network Structures for Image Multi-classification | Image classification is a fundamental computer vision problem, and neural
networks offer efficient solutions. With advancing quantum technology, quantum
neural networks have gained attention. However, they work only for
low-dimensional data and demand dimensionality reduction and quantum encoding.
Two recent image classification methods have emerged: one employs PCA
dimensionality reduction and angle encoding, the other integrates QNNs into
CNNs to boost performance. Despite numerous algorithms, comparing PCA reduction
with angle encoding against the latter remains unclear. This study explores
these algorithms' performance in multi-class image classification and proposes
an optimized hybrid quantum neural network suitable for the current
environment. Investigating PCA-based quantum algorithms unveils a barren
plateau issue for QNNs as categories increase, unsuitable for multi-class in
the hybrid setup. Simultaneously, the combined CNN-QNN model partly overcomes
QNN's multi-class training challenges but lags in accuracy to superior
traditional CNN models. Additionally, this work explores transfer learning in
the hybrid quantum neural network model. In conclusion, quantum neural networks
show promise but require further research and optimization, facing challenges
ahead. | Mingrui Shi, Haozhen Situ, Cai Zhang | 2023-08-30T12:48:05Z | http://arxiv.org/abs/2308.16005v1 | # Hybrid Quantum Neural Network Structures for Image Multi-classification
###### Abstract
Image classification is a fundamental computer vision problem, and neural networks offer efficient solutions. With advancing quantum technology, quantum neural networks have gained attention. However, they work only for low-dimensional data and demand dimensionality reduction and quantum encoding. Two recent image classification methods have emerged: one employs PCA dimensionality reduction and angle encoding, the other integrates QNNs into CNNs to boost performance. Despite numerous algorithms, comparing PCA reduction with angle encoding against the latter remains unclear. This study explores these algorithms' performance in multi-class image classification and proposes an optimized hybrid quantum neural network suitable for the current environment. Investigating PCA-based quantum algorithms unveils a barren plateau issue for QNNs as categories increase, unsuitable for multi-class in the hybrid setup. Simultaneously, the combined CNN-QNN model partly overcomes QNN's multi-class training challenges but lags in accuracy to superior traditional CNN models. Additionally, this work explores transfer learning in the hybrid quantum neural network model. In conclusion, quantum neural networks show promise but require further research and optimization, facing challenges ahead.
**Keywords:** Image classification, Quantum algorithms, Quantum neural networks,
Convolutional neural networks
## 1 Introduction
Image classification is a crucial task in the field of machine learning. With the widespread application of digital images and the generation of large-scale datasets, significant breakthroughs have been achieved in image classification using deep learning. Deep learning leverages multi-layered neural network structures to automatically extract features and classify images, enabling the learning of higher-level abstract features from raw pixel-level data, thereby enhancing the accuracy and performance of image classification[1, 2, 3, 4, 5]. The success of deep learning can be attributed to its powerful model representation capabilities and the training on massive datasets. By stacking deep neural networks, deep learning learns multi-level feature representations, including low-level edges, textures, and higher-level semantic information such as shapes and structures. Specially designed structures such as Convolutional Neural Networks (CNNs) and pooling layers enable deep learning to effectively handle spatial locality and translation invariance in images[6, 7, 8, 9].
Quantum Machine Learning (QML) is an advanced research field that combines quantum computing with machine learning, aiming to leverage the characteristics of quantum computation to enhance the processing capabilities of conventional machine learning algorithms[10, 11]. In classical computers, the time complexity for handling large data vectors is typically polynomial or exponential. However, quantum computing offers significantly higher storage availability than classical computing, enabling easy storage and processing of massive data. With the rapid progress of quantum computing technology, there is an urgent need to explore how to maximize its advantages across various application domains. Implementing quantum computational features in machine learning will lead to revolutionary changes in computer vision, particularly in dealing with text, images, videos, and other areas. The applications of quantum machine learning can be categorized into two primary aspects: optimizing traditional machine learning algorithms and addressing situations where classical models are unable to perfectly represent image feature correlations. Quantum machine learning is based on quantum physics principles, integrating quantum computing with machine learning to harness quantum devices' potential for enhancing traditional machine learning algorithm performance. By utilizing quantum computing concepts to improve machine learning algorithms, we can enhance existing classification and clustering algorithms. In quantum machine learning, amplitude encoding can be employed for data preprocessing, transforming classical machine learning data into formats suitable for quantum device processing[12, 13].
The Variational Quantum Algorithm (VQA) is a type of quantum algorithm used in Quantum Machine Learning (QML) to solve optimization problems[14, 15, 16]. As a specific application of VQA, the Parametric Quantum Circuit (PQC), also known as the Variational Quantum Circuit, is a quantum circuit composed of unitary gates with freely adjustable parameters. With the development of Parametric Quantum Circuits, researchers have started exploring their integration with machine learning to address classical problems[11, 16], such as numerical optimization, approximation, classification, and more[17, 18]. Due to the resemblance of Parametric Quantum Circuits to traditional neural networks, they can be optimized by adjusting the parameters to
approximate the target function. Consequently, combining Parametric Quantum Circuits with neural networks has led to the development of Quantum Neural Network (QNN) algorithms[19, 20, 21]. One key difference between QNN and neural networks is that all parameter gates in QNN are reversible. Additionally, PQCs do not employ activation functions for non-linear operations; instead, they use entangling layers to perform entanglement operations on the output quantum states, achieving non-linear computations.
Building upon the foundation of Quantum Neural Networks (QNNs), Cong et al.[22] were inspired by Convolutional Neural Networks (CNNs) and proposed the Quantum Convolutional Neural Networks (QCNN) model. This model combines and enhances multiple quantum techniques, such as multiscale entanglement and quantum error correction, to process input data with fewer network parameters. The QCNN has been demonstrated to be effectively trainable on near-term quantum devices. Furthermore, researchers have explored the integration of quantum circuits and CNNs. YaoChong Li et al.[23] introduced an innovative approach by integrating parametric quantum circuits with CNNs, constructing a recognition model with quantum convolutional layers and quantum classification layers, which achieved remarkable results in experiments. In the domain of molecular modeling, K.T. Schutt et al. [24] employed continuous-filter convolutional layers to model local correlations and applied it to SchNet for modeling quantum interactions within molecules, enabling joint prediction of total energy and interatomic forces. Tomohiro Mano and Tomi Ohtsuki analyzed three-dimensional wave functions using image recognition methods based on multilayer CNNs[25]. They found that training the network at the center of the band allows obtaining a complete phase diagram. Alexey A. Melnikov et al.[26] proposed a graph-based machine learning algorithm, where a designed convolutional neural network learns to recognize graphs with quantum advantages without executing quantum walks or random walk simulations. Zhang et al.[27] introduced a quantum-based subgraph convolutional neural network structure that captures both global topological and local connectivity structures in graphs, effectively characterizing multiscale patterns present in the data. These studies provide novel ideas and approaches for the field of quantum machine learning and demonstrate feasibility and effectiveness through experimental and numerical simulations.
Existing research has shown that in many cases, quantum algorithms outperform classical computing algorithms[28, 29], but it has not been proven that quantum computing can improve performance in all areas. Therefore, a significant amount of research is currently focused on identifying fields where quantum machines' parallel computing capabilities can lead to advancements. In conclusion, quantum machine learning is a challenging yet promising field. Leveraging the power of quantum computing can significantly enhance the performance of applications using quantum machine learning, driving progress in various domains. However, quantum machine learning is still in a rapid development phase, requiring further research and practice to address technical challenges. As hardware devices continue to improve, research in the quantum machine learning field will also advance.
The main focus of this paper is to investigate the strengths and weaknesses of several commonly used Hybrid Quantum Neural Network (HQNN) algorithms[30]. There
are two main approaches studied: one using Principal Component Analysis (PCA) as the dimensionality reduction method with angle-encoded QNN models, and the other combining traditional CNN models with QNN models to form hybrid quantum neural networks. The first approach is primarily applied to image multi-classification problems, followed by a comparison with the second approach. Additionally, we propose a new PQC construction approach by combining new quantum entanglement strategies and better combinations of quantum parameter gates. Furthermore, we study whether the HQNN network model improves in performance after applying transfer learning.
## 2 Related Work
### Amplitude Encoding and Angle Encoding
In machine learning, the representation of data plays a crucial role in determining the final training effectiveness of algorithms. In classical machine learning problems, data comes in various forms, including language, images, and various information components, and is often represented as one-dimensional vectors or multidimensional arrays. Such data representations facilitate appropriate processing by algorithms and help achieve desired algorithmic outcomes. However, in quantum machine learning, the representation of data differs from the traditional approach. For quantum machine learning to handle classical data, we need to find suitable methods to encode it into a quantum system. This process is referred to as quantum data encoding or quantum data embedding. The quality of data encoding directly influences the effectiveness of quantum machine learning. Several quantum data encoding methods have been proposed, such as amplitude encoding and angle encoding[31, 32, 33, 22, 34].
For a classical dataset \(X=\{x^{(1)},...,x^{(i)},...,x^{(m)}\}\) containing \(m\) samples, where each sample is a vector with \(n\) features, \(x^{(i)}=(x^{(i)}_{1},x^{(i)}_{2},...,x^{(i)}_{n})\), the goal of amplitude encoding is to encode the feature data of each sample into the amplitudes of a quantum state. We can use \(N=\log_{2}(n)\) quantum bits (qubits) to encode each data sample as follows:
\[|\psi^{(i)}\rangle=\sum_{j=1}^{n}x^{(i)}_{j}|j\rangle \tag{1}\]
It is important to note that the amplitudes of each computational basis in this quantum state must satisfy the normalization condition, and in Formula 1 it is required that \(\sum\limits_{j=1}^{n}(x^{(i)}_{j})^{2}=1\). After encoding the data into the amplitudes of the quantum state using amplitude encoding, subsequent algorithms will involve computations on these quantum state amplitudes. The advantage of amplitude encoding lies in significantly reducing the number of quantum bits required for data encoding, meeting the stringent requirements of NISQ devices for the number of qubits. However, it is worth mentioning that this encoding method may have lower efficiency in the preparation process, which is the process of preparing the quantum state with the desired amplitudes.
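As an illustration of amplitude encoding, the following is a minimal sketch using PennyLane's `AmplitudeEmbedding` template (the library choice and feature values are our own for illustration; the `normalize=True` flag enforces the normalization condition discussed above):

```python
import numpy as np
import pennylane as qml

n_features = 8                          # log2(8) = 3 qubits suffice
dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def amplitude_encode(x):
    # normalize=True rescales x so that the squared amplitudes sum to 1,
    # matching the normalization condition of Formula 1
    qml.AmplitudeEmbedding(features=x, wires=range(3), normalize=True)
    return qml.state()

state = amplitude_encode(np.arange(1.0, 9.0))   # amplitudes proportional to 1..8
```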
For angle encoding, each feature of the data sample is encoded into the rotation angle of each quantum bit. For the angle encoding of \(x^{(i)}\), we require \(n\) quantum bits to complete the encoding:
\[|\psi^{(i)}\rangle=\bigotimes_{j=1}^{n}R\big(x_{j}^{(i)}\big)|0\rangle^{\otimes n} \tag{2}\]
In angle encoding, \(R\) represents single-qubit rotation gates, which typically include \(R_{x}\), \(R_{y}\), and \(R_{z}\) gates. Angle encoding only uses \(n\) quantum bits and a constant-depth quantum circuit, making it suitable for current quantum hardware. However, for samples with a large number of features, it is not feasible to directly encode the data using angle encoding. To address this issue, a common approach is to use feature dimensionality reduction techniques, which reduce the number of features to meet the requirement of the available number of quantum bits.
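A corresponding sketch of angle encoding, again in PennyLane (an assumed library choice), uses one \(R_{y}\) rotation per feature, so the qubit count equals the feature count:

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def angle_encode(x):
    # one qubit per feature: each feature value becomes an Ry rotation angle
    for j, xj in enumerate(x):
        qml.RY(xj, wires=j)
    return qml.state()

state = angle_encode(np.array([0.1, 0.5, 1.2, 2.0]))
```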
### Quantum Neural Network
As a quantum machine learning algorithm, Quantum Neural Network is a model that combines quantum computing with neural networks. Its main purpose is to leverage the advantages of quantum computing to handle and learn complex data patterns. Similar to classical neural networks, a Quantum Neural Network consists of input, intermediate, and output layers. In QNN, the input layer serves as the quantum data encoding layer, the intermediate layer represents the parametric quantum circuit layer, and the output layer corresponds to the quantum measurement layer.
A parameterized quantum circuit is a quantum computing model in which the parameters of quantum gates can be adjusted based on the problem at hand. These parameterized gates can be optimized according to the input data to perform specific quantum computing tasks. Typically, a parameterized quantum circuit consists of a series of quantum gates with adjustable parameters. These parameters can be real numbers and can be trained using classical optimization algorithms to maximize or minimize specific quantum measurement outcomes. By tuning these parameters, a parameterized quantum circuit can represent a wide range of quantum operations and quantum state transformations. Parameterized quantum circuits find broad applications in quantum machine learning and quantum optimization.
In quantum computing, commonly used measurement bases include the Z-basis, X-basis, Y-basis, etc. The quantum measurement layer in a Quantum Neural Network often selects the Z-basis measurement, which corresponds to measuring the quantum state in the Z-direction and obtaining the expectation value. The expectation value can be understood as the average of measurement outcomes obtained in multiple Z-basis measurements. Therefore, the output of the Quantum Neural Network is the expectation value obtained by measuring the quantum state in the Z-basis. The expectation value of the quantum state, as the output result, possesses some characteristics:
1. Real-valued: The expectation value of a quantum state is a real number that represents the probability distribution in a specific measurement basis.
2. Interpretability: The expectation value reflects the probability distribution of a quantum state in a specific measurement basis, which helps us understand the model's processing result on input data.
3. Comparability: The expectation values of different samples can be compared, enabling tasks such as classification, regression, etc.
Indeed, the output layer of a quantum neural network utilizes a quantum measurement layer, and the output result is represented by the expectation value of the quantum state. By choosing different measurement bases, one can obtain expectation values along different directions, enabling the processing and prediction of input data.
As shown in Figure 1, the role of the Quantum Data Encoding Layer \(E\) is to encode classical data \(x^{(i)}\) into the quantum circuit for further processing by subsequent layers. The PQC (Parameterized Quantum Circuit) layer \(U\) performs complex quantum computations on the input quantum state. Finally, the output is obtained by measuring the quantum state using the Quantum Measurement Layer, and the expectation value is used as the network's output result. When we choose the Z-basis as the measurement basis, the output of the Quantum Neural Network (QNN) can be expressed as:
\[\hat{y}^{(i)}=\langle 0|E^{\dagger}(x^{(i)})U^{\dagger}(\theta)ZU(\theta)E(x^{ (i)})|0\rangle \tag{3}\]
After obtaining the output vector \(\hat{y}\), further processing can be performed on it.
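To make Formula 3 concrete, the following one-qubit sketch (in PennyLane, an assumed library choice) composes the encoding layer \(E(x)\), a single trainable rotation \(U(\theta)\), and a Z-basis expectation value:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def qnn_output(x, theta):
    qml.RY(x, wires=0)                    # E(x): data encoding layer
    qml.RY(theta, wires=0)                # U(theta): trainable layer
    return qml.expval(qml.PauliZ(0))      # <0| E† U† Z U E |0>, as in Formula 3

y_hat = qnn_output(0.4, 0.7)
```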
Quantum Neural Networks are a practical application of variational quantum algorithms in quantum computing.
### Deep learning for image classification
Deep learning has made significant progress in image classification tasks, and Convolutional Neural Networks are among the most commonly used and successful deep learning models. The design of CNNs is inspired by the working principles of the human visual system, allowing them to automatically learn and extract features from images, resulting in high accuracy in image classification.
A Convolutional Neural Network consists of multiple layers, and its key components include the convolutional layer, pooling layer, and fully connected layer. In CNNs,
Figure 1: a diagram of the quantum neural network. The quantum state is initialized as \(|0\rangle\), and \(E(x)\) represents the quantum data encoding layer, which is the implementation to load classical data into the quantum circuit. \(U(\theta)\) denotes the parameterized quantum circuit, where \(\theta\) represents the adjustable parameters in the circuit. The final layer marked with Z represents the Z-basis measurement in the quantum neural network. Finally, we obtain the output \(\hat{y}\) of the QNN.
the convolutional layer plays the role of feature extraction. It performs convolutional operations with a set of learnable convolutional kernels on the input image to capture local features in the image. The convolutional operation slides a window over the image and performs element-wise multiplication and accumulation with the convolutional kernel to generate feature maps. By combining multiple convolutional kernels, the network can learn more abstract and high-level feature representations. Following the convolutional layer, the pooling layer acts as a downsampling method and preserves essential information. The pooling layer aggregates features from local regions (max pooling or average pooling), reducing the size of the feature maps. This helps to reduce the number of parameters, improve computational efficiency, and enhance the model's invariance to translation, scaling, and rotation transformations. The fully connected layer is situated between the convolutional layer and the output layer. In the fully connected layer, each neuron is connected to all neurons in the previous layer. By learning the weight and bias parameters, the fully connected layer transforms the feature maps extracted by the convolutional and pooling layers into the final class predictions. The fully connected layer establishes connections between high-level abstract features and specific classes, enabling classification decisions.
The success of deep learning in image classification relies heavily on large-scale annotated image datasets and powerful computational resources. By feeding image data into deep learning models and optimizing them through backpropagation algorithms, the models can automatically learn and adjust parameters to achieve optimal performance in image classification tasks. The application of deep learning in image classification extends beyond traditional object recognition and encompasses more complex tasks such as scene understanding, facial recognition, image generation[35, 36, 37].
### Hybrid Quantum Neural Network
Hybrid quantum neural network is a machine learning algorithm that combines quantum neural network algorithms with classical algorithms. Its purpose is to leverage the quantum computing properties of QNN in classical algorithms or to use classical algorithms to assist QNN in accomplishing its target tasks[38, 39, 40, 22, 41]. There are common forms of HQNN, such as using QNN as the training model and employing traditional optimization algorithms like stochastic gradient descent and Adam optimization; alternatively, QNN can be integrated as a component in combination with a traditional convolutional neural network to form a new model called CNN-QNN, which is then optimized using classical optimization algorithms[42, 43, 41].
In the first type of HQNN model, the main component is composed of QNN[44]. Therefore, the primary concern is how to encode classical data into the QNN model. This encoding method is known as data encoding, and angle encoding and amplitude encoding are the main choices. Amplitude encoding requires fewer quantum bits for data with a large number of features, making it a preferred choice. On the other hand, angle encoding consumes a significant number of quantum bits, so to avoid resource consumption, dimensionality reduction techniques like linear discriminant analysis or principal component analysis are commonly used. The most crucial step in training the model is to optimize it based on its performance. The optimization process aims
to find the optimal model parameters that minimize the loss function, leading to more accurate predictions on the training data and better generalization ability. Since the main component of the QNN model is the parameterized quantum circuit, we can use traditional optimization techniques to adjust its parameters and ensure the model's convergence.
In the second type of HQNN model, known as the CNN-QNN model, QNN is added as a component to the CNN network. Typically, QNN is added in the middle of the hidden layers, after the convolutional blocks, similar to a fully connected layer, and sometimes as the output layer of the entire model. For this type of HQNN model, the training process also falls into two categories. We divide the model into the CNN part and the QNN part. Without a doubt, the parameters in the QNN part will be adjusted during training optimization. As for the CNN part, the first case involves training and optimizing it together with the QNN part. In the second case, transfer learning is applied to the CNN part, where a pre-trained CNN model is transferred to the HQNN model without requiring additional training, reducing the training cost and improving efficiency to some extent.
## 3 HQNN Model Design for Image Multi-Classification
### HQNN Model Design with PCA Dimensionality Reduction
In the current mixed quantum neural network, a common approach is to incorporate QNN as a component into the neural network or directly use QNN as the target training model. Regardless of the approach, a significant challenge is how to handle the large amount of data with a limited number of quantum bits. Since image data contains numerous features, a common solution is to use PCA dimensionality reduction in combination with angle encoding. This process transforms the high-dimensional data of classical image data into low-dimensional data that can be encoded with a small number of quantum bits. Subsequently, the low-dimensional data is input to the quantum circuit using angle encoding.
As shown in Figure 2, one common construction method for the QNN model involves encoding the low-dimensional data \(x_{low}\) obtained through PCA dimensionality reduction into the quantum circuit. Each value of the low-dimensional data is used as the rotation angle for the Ry gate. The subsequent part is a typical parameterized quantum circuit. The PQC starts with a set of rotation gate operations, consisting of Rx, Ry, and Rz gates, applied to each input qubit. Then, quantum entanglement operations are performed on the qubits. In the entanglement part, we require each qubit to undergo an entanglement operation. This is achieved by using controlled gates to connect different qubits. There are four common entanglement strategies: linear entanglement, cyclic entanglement, star entanglement, and fully connected entanglement[17]. In this design, we use the most commonly used cyclic entanglement strategy, where each qubit is entangled with its next nearest neighbor qubit. After the entanglement part, another set of rotation gate operations is typically applied to each qubit. It is important to note that the parameters of these gates are different from the
previous ones, representing a new set of operations. We consider the first rotation operation, entanglement operation, and second rotation operation together as one layer, and we can repeat this layer to create a multi-layered quantum circuit. It is worth noting that the second rotation operation is not necessary in every layer; it is only needed after the final layer. Finally, we measure the expectation value of each qubit in the Z basis and use the expectation values from each qubit as our output vector \(\hat{o}\).
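A minimal sketch of this PCA+angle-encoding QNN circuit is given below, with \(R_{y}\) encoding, per-qubit \(R_{x}/R_{y}/R_{z}\) rotation blocks, cyclic (ring) CNOT entanglement, a repeated layer structure, and per-qubit Z-expectation readout; the qubit and layer counts are illustrative rather than the exact configuration used in the experiments:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def pca_qnn(x_low, weights):
    # angle encoding: each PCA component is the angle of an Ry gate
    for j in range(n_qubits):
        qml.RY(x_low[j], wires=j)
    for l in range(n_layers):
        # rotation block: Rx, Ry, Rz on every qubit
        for j in range(n_qubits):
            qml.RX(weights[l, j, 0], wires=j)
            qml.RY(weights[l, j, 1], wires=j)
            qml.RZ(weights[l, j, 2], wires=j)
        # cyclic entanglement: each qubit entangled with its next neighbour
        for j in range(n_qubits):
            qml.CNOT(wires=[j, (j + 1) % n_qubits])
    # second rotation block after the final layer
    for j in range(n_qubits):
        qml.RX(weights[n_layers, j, 0], wires=j)
        qml.RY(weights[n_layers, j, 1], wires=j)
        qml.RZ(weights[n_layers, j, 2], wires=j)
    # Z-basis expectation on every qubit forms the output vector o_hat
    return [qml.expval(qml.PauliZ(j)) for j in range(n_qubits)]

weights = np.random.uniform(0, np.pi, size=(n_layers + 1, n_qubits, 3))
o_hat = pca_qnn(np.zeros(n_qubits), weights)
```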
### HQNN model based on CNN
We can interpret the HQNN model based on CNN from two perspectives.
First Perspective - CNN as the Main Model Body with QNN Embedded as a Layer. Convolutional Neural Networks (CNN) are currently the most effective and practical method for processing image data. CNN has given rise to numerous models designed for various image-related tasks in computer vision, such as LeNet, VGG, ResNet for image classification, YOLO for object detection[45], and U-net for semantic segmentation[46]. The goal of the HQNN model based on CNN is to modify the well-designed CNN model by incorporating the Quantum Neural Network (QNN) as a quantum circuit layer, referred to as the "quantum layer," and adding it to the original model, as depicted in Figure 3. There are two common approaches in current practices: one is directly adding the quantum layer to the CNN network, either between fully connected blocks or as a new output layer; the other is to replace one or more fully connected layers, utilizing the quantum layer's output either as the final output or an intermediate layer. In our illustration, we adopt the approach of adding the quantum layer after the convolutional blocks and before the output layer. This method aims to optimize and enhance the original CNN model, resulting in improvements reported in various articles.
Figure 2: Schematic Diagram of HQNN based on PCA Processing.
Second Perspective - QNN as the Main Model Body with CNN as Preprocessing. In this perspective, the QNN serves as the main model body, and CNN is viewed as a preprocessing step for the QNN model. The key structure of CNN is the convolutional layer, which extracts features from input image data to obtain useful and high-level feature representations. Through end-to-end training, CNN automatically learns feature representations, eliminating the need for laborious and subjective manual feature engineering. Moreover, CNN, with convolution and pooling operations, takes advantage of local perception and parameter sharing mechanisms, effectively capturing local patterns and textures in images, which PCA often fails to do. Additionally, CNN's multi-layer convolution and pooling operations enable multi-scale representations, capturing both fine details and overall information in images, while PCA can only extract global information. The introduction of non-linear activation functions in CNN enhances the model's representational capacity, enabling it to better model complex image features and relationships. As a result, CNN provides a robust foundation of features for computer vision tasks. After feature extraction by CNN, we continue to use QNN for the classification task on the obtained feature data. Since QNN's output is constrained, we can add a final fully connected layer to control the output.
In conclusion, the HQNN model based on CNN is a novel approach that combines CNN and QNN, aiming to improve the performance of CNN or serve as a feature extraction step to fulfill QNN's requirements for low-dimensional features. The combination of these two models has shown promising results in various research studies.
Figure 3: HQNN Model based on CNN Architecture. X represents the input data. The red box represents the model’s convolutional block, which includes convolutional layers and pooling layers. The subsequent blue box represents the QNN layer, and the purple box represents the fully connected layer. Finally, \(\hat{y}\) denotes the model’s output
### The Proposed Model
In quantum machine learning, amplitude encoding has some advantages over angle encoding. Firstly, amplitude encoding uses the amplitudes of quantum bits to represent information, which allows it to accommodate more information. In contrast, angle encoding only utilizes the phase information of quantum bits, resulting in limited information capacity. Secondly, amplitude encoding exhibits a certain tolerance to noise and errors. Since amplitude encoding utilizes the amplitudes of quantum bits, it can retain more information even in the presence of some level of noise and errors. On the other hand, angle encoding is more susceptible to noise and errors, which may lead to information loss and misinterpretation[47]. Furthermore, amplitude encoding is more straightforward to implement during computation and operations. It can leverage common quantum gate operations for manipulation and measurement, whereas angle encoding requires more quantum gate operations, making it more complex to implement. Considering the large amount of data in image information, using amplitude encoding is more suitable for the current quantum device environment. It can mitigate the impact of noise to some extent and enable the storage of more information.
In the QNN model, one of the most crucial challenges is the construction of Parameterized Quantum Circuits. Obtaining a PQC with better simulation performance and higher expressive power has been a longstanding problem in the field of QNN. To address this issue, we have incorporated the concept of a single quantum neural network simulating an approximate function of a single variable[48]. This approach
Figure 4: Overall Model Diagram. In the diagram, the blue dashed box represents an example of a PQC with four qubits.
introduces a more expressive parameterized rotation gate U, which is illustrated in Figure 4 and can be expressed concisely as follows:
\[U(\theta)=R_{y}(\theta_{1})R_{z}(\theta_{2})R_{y}(\theta_{3}) \tag{4}\]
The parameter vector \(\theta\) is represented as \(\theta=(\theta_{1},\theta_{2},\theta_{3})\) for gate U. After passing through this combination gate, the initial quantum bits exhibit a broader and more random distribution on the Bloch sphere mapping. As a result, the PQC has a higher expressive power.
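In circuit form, the combined gate of Formula 4 can be applied as three consecutive single-qubit rotations; note that in the operator product \(R_{y}(\theta_{1})R_{z}(\theta_{2})R_{y}(\theta_{3})\) the rightmost factor acts first. A sketch (PennyLane assumed):

```python
import pennylane as qml

def u_gate(theta, wire):
    """Apply U(theta) = Ry(theta_1) Rz(theta_2) Ry(theta_3) from Formula 4.
    The rightmost operator acts first, so the circuit applies Ry(theta_3),
    then Rz(theta_2), then Ry(theta_1)."""
    qml.RY(theta[2], wires=wire)
    qml.RZ(theta[1], wires=wire)
    qml.RY(theta[0], wires=wire)
```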
Furthermore, apart from focusing on innovative ways to design parameter gates, we should also pay attention to the entanglement capability of each quantum bit[17]. Two common quantum entanglement strategies are full connectivity and cyclic connectivity, and full connectivity has some important advantages over cyclic connectivity. Full connectivity enables direct entanglement between all quantum bits, establishing a global entanglement relationship where each bit is directly connected to every other bit. This entanglement strategy exhibits strong entanglement capability, flexibility, and information transfer efficiency. It can construct entanglement networks of different scales, catering to various quantum computing tasks and algorithm requirements. Full connectivity also efficiently transfers and exchanges quantum information, providing faster and more efficient information transfer. Moreover, full connectivity exhibits good scalability, allowing easy expansion to larger quantum systems to meet complex computational demands.
Finally, we continue to use the conventional method of calculating the expectation value of Z-basis measurements as the output of our QNN.
Simultaneously, we embed the QNN into a simple CNN model to leverage the powerful feature extraction capabilities of CNN. After the QNN, we add a fully connected layer to obtain a more flexible model output. The overall model is depicted in Figure 4.
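A minimal sketch of such a hybrid model is shown below, using PennyLane's `TorchLayer` to embed the QNN between a small CNN feature extractor and a final fully connected layer; the layer sizes, the linear reduction to \(2^{4}=16\) amplitudes, and the use of full (all-pairs) CNOT entanglement follow the description above but are illustrative rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_layer(inputs, weights):
    # amplitude-encode 2**n_qubits = 16 features into n_qubits qubits
    qml.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True)
    for j in range(n_qubits):
        # combined Ry-Rz-Ry gate of Formula 4 (rightmost factor acts first)
        qml.RY(weights[j, 2], wires=j)
        qml.RZ(weights[j, 1], wires=j)
        qml.RY(weights[j, 0], wires=j)
    # full connectivity: entangle every pair of qubits
    for a in range(n_qubits):
        for b in range(a + 1, n_qubits):
            qml.CNOT(wires=[a, b])
    return [qml.expval(qml.PauliZ(j)) for j in range(n_qubits)]

class HybridNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(             # CNN feature extractor
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.reduce = nn.Linear(16 * 7 * 7, 2 ** n_qubits)
        self.qnn = qml.qnn.TorchLayer(quantum_layer, {"weights": (n_qubits, 3)})
        self.out = nn.Linear(n_qubits, n_classes)  # flexible output layer

    def forward(self, x):                          # x: (batch, 1, 28, 28)
        h = self.features(x).flatten(1)
        h = self.reduce(h)
        return self.out(self.qnn(h))
```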
## 4 Experiment and Result Analysis
In order to explore the performance of the model in image classification, we used two of the most commonly used datasets in the fields of machine learning and deep learning for image classification: the MNIST dataset and the FashionMNIST dataset. These two datasets are benchmark datasets in the computer vision domain. The MNIST dataset contains handwritten digit images with 10 classes, each representing a digit from 0 to 9. The images are grayscale and have a size of 28x28 pixels, with a total of 60,000 training samples and 10,000 test samples. The MNIST dataset is widely used to validate and compare the performance of various image classification algorithms, and it is considered a relatively simple dataset. On the other hand, the FashionMNIST dataset is a more challenging dataset, consisting of images of 10 categories of fashion items such as shirts, pants, and shoes. Compared to the MNIST dataset, the FashionMNIST dataset contains more complex and diverse image styles, making it more relevant to real-world applications. It also includes 60,000 training samples and 10,000 test samples, with images of size 28x28 pixels. In our experiments, we used conventional training optimization methods for all models. The training process was
set to 50 epochs, and the commonly used cross-entropy loss function was used as the loss function for image classification tasks. We employed the Adam optimizer with a learning rate of 0.01. The main focus of our analysis is to compare the training process and the final test accuracy of the models to evaluate their performance.
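For reference, the stated training configuration (50 epochs, cross-entropy loss, Adam with learning rate 0.01) corresponds to a standard PyTorch loop such as the following sketch:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=0.01):
    """Standard supervised training loop matching the stated setup:
    cross-entropy loss, Adam optimizer, learning rate 0.01, 50 epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
```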
While binary classification has achieved 100% accuracy on many models, we are more interested in multi-class classification problems. Multi-class classification better reflects the complexity and diversity of the real world. Many image datasets involve multiple categories, such as object recognition, face recognition, scene classification, etc., which require more fine-grained classification of images to obtain more accurate results. Multi-class classification provides richer information and better meets the needs of real-world applications. For example, suppose we want to build an image recognition system that can recognize different types of animals. If we simplify the problem to binary classification, such as distinguishing between cats and dogs, then we will not be able to distinguish other animal categories, such as birds, elephants, and lions. Through multi-class classification, however, we can train the model to distinguish more animal categories, giving it a broader range of applications.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{4}{c}{PCA+QNN} & New HQNN \\ \cline{2-6} Class & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{8} & 10 \\ Dimension & 8 & 10 & 8 & 10 & None \\ \hline Loss & 0.3940 & 0.4096 & 1.1819 & 1.1123 & 0.4086 \\ Accuracy & 0.8011 & 0.7954 & 0.2324 & 0.2245 & 0.8439 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of PCA Multi Classification
Figure 5: Comparison between the Model Built with the New PQC Approach and the Traditional PQC Model. The curves ”loss1” and ”accuracy1” represent the loss and accuracy of the new model, while the curves ”loss2” and ”accuracy2” represent the loss and accuracy of the original model. The left plot shows the training results on the MNIST dataset, while the right plot shows the results on the FashionMNIST dataset.
According to the model described in Section 3.1, we conducted training and testing of PCA-generated low-dimensional data at 8 and 10 dimensions for multi-classification tasks, as shown in Table 1. In the experiments, we first compared the use of PCA for dimensionality reduction to obtain different low-dimensional datasets to investigate its impact on the QNN classification problem. To demonstrate the model's multi-classification performance, we conducted not only binary classification comparisons but also 8-class classification comparisons. The results indicate that the HQNN model used for binary classification, based on PCA dimensionality reduction, exhibited moderate convergence, with accuracy hovering around 80% regardless of whether the PCA reduced the dimensions to 8 or 10. However, when faced with the 8-class classification problem, the model encountered a training issue due to the vanishing gradient problem, commonly known as the "barren plateau" problem. The performance on the FashionMNIST dataset mirrored that on the MNIST dataset, as illustrated in Figure 5, which solely presents the loss reduction process and accuracy variation for the MNIST dataset. It is evident that the PCA-based HQNN model has demonstrated significant drawbacks in the context of multi-classification problems.
Subsequently, we trained and tested the new model proposed in Section 3.3 and compared it with the PCA-based approach, as shown in Table 1. Both models were utilized for the 10-class classification task. Leveraging the CNN-based architecture, the new model not only exhibited excellent training performance but also achieved an accuracy of approximately 80%. In Figure 6, we present the training process and accuracy
Figure 6: Training Performance and Comparison of the New Model. The figure illustrates the training process and accuracy variations of the new model on the two datasets. It shows the training performance and accuracy changes of the new model on both the MNIST and FashionMNIST datasets.
variation curves for the new QNN model using CNN dimensionality reduction and amplitude encoding on both the MNIST and FashionMNIST datasets. The model achieved an accuracy of around 85% on the MNIST dataset and about 78.2% on the FashionMNIST dataset. This demonstrates that the new model, employing CNN-based dimensionality reduction and amplitude encoding, is significantly more effective than the PCA-based approach with angle encoding in multi-classification problems. Furthermore, we compared the new model with the widely used PQC-based QNN model on both datasets. The results show that our proposed model not only outperforms the conventional model during the training process but also achieves an accuracy improvement of approximately 5%. Therefore, our proposed model proves to be more practical and efficient for image multi-classification tasks.
In the aforementioned new model, we employed a non-transfer learning approach, while transfer learning might potentially enhance the training process[49]. Transfer learning is an effective method for training new models[50, 51], allowing the utilization of existing models and knowledge to improve performance in new tasks or domains, while also saving training time and resource costs. In transfer learning, a pre-trained model is typically used as the initial model, which has been trained on a large-scale dataset. Subsequently, the weight parameters of this model can be used as a starting point for fine-tuning or further training on the new task. The benefit of this approach is that the pre-trained model has already learned general feature representations, aiding in faster convergence to the optimal solution for the new task. Hence, we initially trained a standalone CNN model, and then employed transfer learning by fixing the parameters of the convolutional blocks and only training the subsequent QNN layers and fully connected layers, maintaining the same model architecture as the new model. The use of the pre-trained and fixed convolutional blocks can be likened to utilizing an effective feature extractor. A comparison between transfer learning and non-transfer learning for the new model is illustrated in Figure 7. Notably, the non-transfer trained model exhibited an approximately 5% higher accuracy compared to the original model.
Figure 7: Comparison between Transfer Learning and Non-Transfer Learning for the New Model.The figure presents a comparison between the training performance of the new model using transfer learning and non-transfer learning approaches on both the MNIST and FashionMNIST datasets. The left graph displays the training results for the MNIST dataset, while the right graph shows the training results for the FashionMNIST dataset.
the transfer learning approach. The underlying cause of this phenomenon may be that, when trained in isolation, the introduced QNN layer adapts less well to the reduced-dimensional data produced by the original convolutional blocks than when the entire model is trained together.
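The transfer learning variant can be expressed, under the hybrid-model sketch above (where `model.features` is our hypothetical name for the convolutional blocks), by freezing the pre-trained convolutional parameters and optimizing only the remaining QNN and fully connected parameters:

```python
import torch

model = HybridNet()                 # from the sketch above (illustrative)

# freeze the pre-trained convolutional blocks; only the QNN layer and the
# final fully connected layer receive gradient updates
for p in model.features.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.01)
```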
## 5 Conclusion
In this paper, our main objective was to explore a more practical and efficient quantum neural network (QNN) algorithm by comparing different HQNN model construction approaches on image multi-class classification problems. Through our research, we found that the HQNN model using PCA as the dimensionality reduction method encountered the vanishing gradient (barren plateau) problem in multi-class classification, while the combination of CNN and QNN effectively avoided this problem. Moreover, our newly proposed QNN construction method, which combines amplitude encoding with a more efficient PQC model, demonstrated better performance in image multi-class classification problems. However, the transfer learning approach applied to the HQNN model that combines CNN and QNN did not prove suitable for this type of problem, suggesting that transfer learning may not be appropriate for such HQNN models. Although our model outperformed traditional HQNN models in terms of performance, its accuracy still fell short of strong traditional CNN models.
In conclusion, quantum machine learning holds great promise and is expected to bring revolutionary changes to the field of machine learning. By optimizing traditional algorithms and handling feature correlations that classical models cannot perfectly represent, quantum machine learning has the potential to improve the performance of various applications. While there are still technical challenges and not all fields may benefit from quantum computing, with more research and advancements, we believe quantum machine learning will unleash tremendous potential for future scientific and technological development.
|
2304.01874 | Incremental Verification of Neural Networks | Complete verification of deep neural networks (DNNs) can exactly determine
whether the DNN satisfies a desired trustworthy property (e.g., robustness,
fairness) on an infinite set of inputs or not. Despite the tremendous progress
to improve the scalability of complete verifiers over the years on individual
DNNs, they are inherently inefficient when a deployed DNN is updated to improve
its inference speed or accuracy. The inefficiency is because the expensive
verifier needs to be run from scratch on the updated DNN. To improve
efficiency, we propose a new, general framework for incremental and complete
DNN verification based on the design of novel theory, data structure, and
algorithms. Our contributions implemented in a tool named IVAN yield an overall
geometric mean speedup of 2.4x for verifying challenging MNIST and CIFAR10
classifiers and a geometric mean speedup of 3.8x for the ACAS-XU classifiers
over the state-of-the-art baselines. | Shubham Ugare, Debangshu Banerjee, Sasa Misailovic, Gagandeep Singh | 2023-04-04T15:28:22Z | http://arxiv.org/abs/2304.01874v2 | # Incremental Verification of Neural Networks
###### Abstract.
Complete verification of deep neural networks (DNNs) can exactly determine whether the DNN satisfies a desired trustworthy property (e.g., robustness, fairness) on an infinite set of inputs or not. Despite the tremendous progress to improve the scalability of complete verifiers over the years on individual DNNs, they are inherently inefficient when a deployed DNN is updated to improve its inference speed or accuracy. The inefficiency is because the expensive verifier needs to be run from scratch on the updated DNN. To improve efficiency, we propose a new, general framework for incremental and complete DNN verification based on the design of novel theory, data structure, and algorithms. Our contributions implemented in a tool named IVAN yield an overall geometric mean speedup of 2.4x for verifying challenging MNIST and CIFAR10 classifiers and a geometric mean speedup of 3.8x for the ACAS-XU classifiers over the state-of-the-art baselines.
Keywords: Theoretical Verification, Robustness, Deep Neural Networks
Existing verifiers can be broadly classified as either complete or incomplete. Incomplete methods are more scalable but may fail to prove or disprove a trustworthiness property [11, 12, 13, 14, 15, 16, 17, 18]. A complete verifier always verifies the property if it holds or otherwise returns a counterexample. Complete verification methods are more desirable, as they are guaranteed to provide an exact answer for the verification task [1, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32].
**Limitation of Existing Works:** Deployed DNNs are routinely modified for reasons such as approximation [1, 14], fine-tuning [15], model repair [19], or transfer learning [20]. Approximations such as quantization and pruning slightly perturb the DNN weights, and the updated DNN is used for the same task [1, 16, 17]. Similarly, fine-tuning can be performed to repair the network on buggy inputs while maintaining the accuracy on the original training inputs [10]. Each time a new DNN is created, expensive complete verification needs to be rerun to check whether it is trustworthy. A fundamental limitation of all existing approaches for complete verification of DNNs is that the verifier must be run from scratch, end-to-end, every time the network is even slightly modified. As a result, developers still rely on test set accuracy as the main metric for measuring the quality of a trained network. This limitation restricts the applicability of existing verifiers as tools for evaluating the trustworthiness of DNNs.
**This Work: Incremental and Complete Verification of DNNs:** In this work, we address the fundamental limitation of existing complete verifiers by presenting IVAN, the first general technique for incremental and complete verification of DNNs. An original network and its updated network have similar behaviors on most inputs; therefore, the proofs of a property on these networks are also related. IVAN accelerates the complete verification of a trustworthy property on the updated network by leveraging the proof of the same property on the original network. IVAN can be built on top of any Branch and Bound (BaB) based method. A BaB verifier recursively partitions the verification problem to gain precision. BaB is currently the dominant technology for constructing complete verifiers [1, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32].
**Challenges:** The main challenge in building an incremental verifier on top of a non-incremental one is to determine which information to pass on and how to reuse this information effectively. Formal methods research has developed numerous techniques for incremental verification of programs that reuse the proof from previous revisions when verifying a new revision of the program [15, 1, 16, 17, 18, 19]. However, program commits are often local changes that affect only a small part of a large program. In contrast, most DNN updates perturb the weights across one or many layers of the network. This poses a different and more difficult challenge than incremental program verification. Additionally, DNN complete verifiers employ distinct heuristics for branching. A key challenge is to develop a generic method that incrementally verifies a network perturbed across multiple layers, is applicable to multiple complete verification methods, and still provides significant performance benefits.
**Our Solution:** IVAN computes a specification tree - a novel tree data structure representing the trace of BaB - from the execution of the complete verifier on the original network. We design new algorithms to refine the specification tree into a more compact tree. At a high level, the refinement reorders the branching decisions so that the decisions that worked well in the original verification are prioritized. In addition, it removes the branching decisions that worked poorly in the original verification by pruning nodes and edges in the specification tree. IVAN also improves the branching strategy in BaB for the updated network based on the observed effectiveness of branching choices when verifying the original DNN. The compact specification tree and the improved branching strategy guide the BaB execution on the updated network to faster verification, compared to non-incremental verification that starts from scratch. IVAN yields up to 43x speedup over the baseline based on state-of-the-art non-incremental verification techniques [14, 15, 16]. It achieves a geometric mean speedup of 2.4x across challenging fully-connected and convolutional networks over the baseline. IVAN is generic and can work with various common BaB branching strategies in the literature (input splitting, ReLU splitting).
**Main Contributions:** The main contributions of this paper are:
* We present a novel, general framework for incremental and complete DNN verification by designing new algorithms and a data structure that allow us to succinctly encode influential branching decisions and perform efficient incremental verification of the updated network.
* We identify a class of network modifications that can be efficiently verified by our framework by providing theoretical bounds on the amount of modifications.
* We implement our approach into a tool named IVAN and show its effectiveness over multiple state-of-the-art complete verification techniques, using distinct branching strategies (ReLU splitting and input splitting), in incrementally verifying both local and global properties of fully-connected and convolutional networks with ReLU activations trained on the popular ACAS-XU, MNIST, and CIFAR10 datasets. Our results show that for MNIST and CIFAR10 classifiers, using the ReLU splitting technique [14] IVAN yields a geometric mean speedup of 2.4x over the state-of-the-art baseline [14, 15]. For ACAS-XU, using the input splitting technique IVAN achieves a geometric mean speedup of 3.8x over RefineZono [15].
IVAN implementation is open-source, publicly available at [https://github.com/uiuc-focal-lab/IVAN](https://github.com/uiuc-focal-lab/IVAN). An extended version of this paper containing all the proofs and additional experiments is available at [https://arxiv.org/abs/2304.01874](https://arxiv.org/abs/2304.01874).
Figure 1: Workflow of IVAN from left to right. _IVAN_ takes the original network \(N\), input specification \(\phi\), and output specification \(\psi\). It is built on top of a BaB-based complete verifier that utilizes an analyzer \(A\) for the bounding and a heuristic \(H\) for the branching. IVAN refines the specification tree \(T_{f}^{N}\), the result of verifying \(N\), to create a compact tree \(T_{0}^{N^{a}}\) and an updated branching heuristic \(H_{\Delta}\). IVAN then performs faster verification of \(N^{a}\) by exploiting both \(T_{0}^{N^{a}}\) and \(H_{\Delta}\).
## 2. Overview
Figure 1 illustrates the high-level idea behind the workings of IVAN. It takes as input the original neural network \(N\), the updated network \(N^{a}\), a local or global input region \(\phi\), and the output property \(\psi\). The goal of IVAN is to check whether for all inputs in \(\phi\), the outputs of networks \(N\) and \(N^{a}\) satisfy \(\psi\). \(N\) and \(N^{a}\) have similar behaviors on the inputs in \(\phi\), therefore the proofs of the property on these networks are also related. IVAN accelerates the complete verification of the property \((\phi,\psi)\) on \(N^{a}\) by leveraging the proof of the same property on \(N\).
**Neural Network Verifier:** Popular verification properties considered in the literature have \(\psi:=C^{T}Y\geq 0\), where \(C\) is a column vector and \(Y=N(X)\), for \(X\in\phi\). Most state-of-the-art complete verifiers use BaB to solve this problem. These techniques use an analyzer that computes a linear approximation of the network output \(Y\) through a convex approximation of the problem domain. This linear approximation of \(Y\) is used in the bounding step to show that the lower bound satisfies \(LB(C^{T}Y)\geq 0\). If the bounding step cannot prove the property, the verification problem is partitioned into subproblems using a branching heuristic \(H\). The partitioning splits the problem space, allowing a more precise convex approximation of the split subproblems. This leads to gains in the precision of the \(LB\) computation. Various choices for the analyzer and the branching strategy exist, representing different trade-offs between precision and speed.
IVAN leverages a specification tree representation and novel algorithms to store and transfer the proof of the property from \(N\) to \(N^{a}\) for accelerating the verification on \(N^{a}\). We show the workings of \(IVAN\) through the following illustrative example.
### Illustrative Example
We consider the two networks \(N\) and \(N^{a}\) with the same architecture as shown in Figure 2. Most practical network updates result in network weight perturbations e.g., quantization, model repair, and fine-tuning. Network \(N^{a}\) is obtained by updating (perturbing the weights) of network \(N\). These networks apply ReLU activation at the end of each affine layer except for the final layer. The weights for the affine layers are shown on the edges. We consider the verification property \((\phi,\psi)\) such that \(\phi=\{(i_{1},i_{2}):i_{1}\in[0,1]\wedge i_{2}\in[0,1]\}\) and \(\psi=(o_{1}+14\geq 0)\). Let \(\mathcal{R}=\{r_{1},r_{2},r_{3},r_{4}\}\) denote the set of ReLUs in the considered architecture. \(\mathcal{R}\) is a function of the architecture of the DNNs and is common for both \(N\) and \(N^{a}\).
**Branch and Bound:** We consider a complete verifier that uses a sound analyzer \(A\) based on the exact encoding of the affine layers and the common triangle linear relaxation (Bunel et al., 2020, 2017; Ehlers, 2017) for over-approximating the non-linear ReLU function. If due to over-approximation of
Figure 2. Example original network \(N\) and its perturbation \(N^{a}\) (blue weights). Each layer consists of a linear function followed by the ReLU activation function. \(\phi\) is the input specification and \(\psi\) is the output specification.
the ReLU function, the analyzer cannot prove or disprove the property, the verifier partitions the problem by splitting the problem domain. The analyzer is more precise if it separately analyzes the split subproblems and merges the results. There are two main strategies for branching considered in the literature, input splitting (Anderson et al., 2020; Wang et al., 2018), and ReLU splitting (Bunel et al., 2020, 2020, 2017; Ferrari et al., 2022; Palma et al., 2021). We show IVAN's effectiveness on both branching strategies in our evaluation (Section 6.1, Section 6.4). However, for this discussion, we focus on ReLU splitting which is scalable for the verification of high-dimensional inputs.
**ReLU splitting:** An unsolved problem is partitioned into two cases, where the cases assume the input \(\hat{x}_{i}\) to ReLU unit \(r_{i}\) satisfies the predicates \(\hat{x}_{i}\geq 0\) and \(\hat{x}_{i}<0\) respectively. Splitting a ReLU \(r_{i}\) eliminates the analyzer imprecision in the approximation of \(r_{i}\). When we split all the ReLUs in \(\mathcal{R}\), the analyzer is exact. Nevertheless, splitting all ReLUs in \(\mathcal{R}\) is expensive, as it requires \(2^{|\mathcal{R}|}\) analyzer bounding calls. The state-of-the-art techniques use the heuristic function \(H\) to find the best ReLU to split at each step, leading to considerably more scalable complete verification.
The branching function \(H\) scores the ReLUs \(\mathcal{R}\) for branching at each unsolved problem to partition the problem. If \(\mathcal{R}^{\prime}\subseteq\mathcal{R}\) denotes the subset of ReLUs that are not split in the current subproblem, then the verifier computes \(r=\arg\max_{\mathcal{R}^{\prime}}H\) to choose the \(r\) for the current split. \(H\) is a function of the exact subproblem that it branches and hence depends on \(\phi\), \(\psi\), the network, and the branching assumptions made for the subproblem. However, for the purpose of this running example, we consider a simple constant branching heuristic \(H\) that ranks \(H(r_{1})>H(r_{3})>H(r_{4})>H(r_{2})\) independent of the subproblem and the network. This assumption is only for the illustration of our idea, we show in the evaluation (Section 6) that IVAN can work with state-of-the-art branching heuristics (Bunel et al., 2020, 2021).
### IVAN Algorithm
**Specification Tree:** IVAN uses a rooted binary tree data structure to store the trace of splitting decisions during BaB execution. A specification split is a finer specification parameterized by the subset of ReLUs in \(\mathcal{R}\). The root node is associated with the specification \((\phi,\psi)\). All other nodes represent the specification splits obtained by splitting the problem domain recursively. Each internal node in the tree has two children, the result of the branching of the associated specification.
The split decision can be represented as a predicate. For a ReLU \(r_{i}\) with input \(\hat{x}_{i}\), let \(r_{i}^{+}\coloneqq(\hat{x}_{i}\geq 0)\) and \(r_{i}^{-}\coloneqq(\hat{x}_{i}<0)\) denote the split decisions. A split of ReLU \(r_{i}\) at node \(n\) creates two children nodes \(n_{l}\) and \(n_{r}\), each encoding a new specification split. Each edge in the specification tree represents the split decision made at the branching step. An edge connects an internal node with its child node, and we label it with the additional predicate that is assumed by the child subproblem. A split of ReLU \(r\) at node \(n\) adds nodes \(n_{l}\) and \(n_{r}\) that are connected with edges labeled with predicates \(r^{+}\) and \(r^{-}\) respectively. If \(\varphi_{n}=(\phi^{\prime},\psi)\) is the specification split at \(n\), then \(\varphi_{n_{l}}=(\phi^{\prime}\wedge r^{+},\psi)\) and \(\varphi_{n_{r}}=(\phi^{\prime}\wedge r^{-},\psi)\). The names of the nodes have no relation to the networks or the property; they are used for referencing a particular specification. However, the edges of the tree are tied to the network architecture through the labels. Although the specification tree is created as a trace of verification of a particular network \(N\), it is only a function of the ReLU units in the architecture of \(N\). This allows us to use the branching decisions in the specification tree for guiding the verification of any updated network \(N^{a}\) that has the same architecture as \(N\). We use \(LB_{N}(n)\) to denote the lower bound \(LB(C^{T}Y)\) obtained by the analyzer \(A\) for the subproblem encoded by \(n\), on the network \(N\).
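To make the data structure concrete, the following is a minimal Python sketch of a specification tree; the names (`Node`, `split`, `path_predicates`) and the string encoding of ReLUs are our own illustration, not IVAN's implementation.

```python
class Node:
    """A node in the specification tree; the root encodes (phi, psi)."""

    def __init__(self, parent=None, predicate=None):
        self.parent = parent        # parent node, None for the root
        self.predicate = predicate  # edge label: ("r_i", "+") or ("r_i", "-")
        self.children = []          # empty for leaves, two children otherwise

    def split(self, relu):
        """Split this leaf on `relu`, adding children reached via r+ and r-."""
        assert not self.children, "only leaf nodes can be split"
        self.children = [Node(self, (relu, "+")), Node(self, (relu, "-"))]
        return self.children

    def path_predicates(self):
        """The predicates conjoined with phi in this node's specification split."""
        preds, node = [], self
        while node.parent is not None:
            preds.append(node.predicate)
            node = node.parent
        return preds

# Illustration: split the root on r1, then split the r1- child on r4.
root = Node()
n1, n2 = root.split("r1")
n5, n6 = n2.split("r4")
print(n5.path_predicates())  # [('r4', '+'), ('r1', '-')]
```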
Figure 3 demonstrates the steps of BaB execution on \(N\). Each node represents the specification refined by BaB. We use function \(LB_{N}(n)\) to denote the \(LB(C^{T}Y)=LB(o_{1}+14)\) value obtained by the analyzer \(A\) at node \(n\). The specification is verified for the subproblem of \(n\) if the \(LB_{N}(n)\geq 0\). If \(LB_{N}(n)<0\), the analyzer returns a counterexample (CE). The CE is a point in the convex
approximation of the problem domain, and it may be spurious, i.e., it may not belong to the concrete problem domain. If the CE is not spurious, the specification is disproved and the proof halts. But if the CE is spurious, then the problem is unsolved and it is further partitioned.
In the first step, for the specification \((\phi,\psi)\) encoded by the root node \(n_{0}\), the analyzer computes \(LB_{N}(n_{0})=-7\), which is insufficient to prove the specification. Further, the CE provided by the analyzer is spurious, and thus the analyzer cannot solve the problem. The root node \(n_{0}\) specification \((\phi,\psi)\) is split by the ReLU split of \(r_{1}\) chosen by the heuristic function \(H\). Accordingly, in the specification tree, the node \(n_{0}\) is split into two nodes \(n_{1}\) and \(n_{2}\), with the specification splits \((\phi\wedge r_{1}^{+},\psi)\) and \((\phi\wedge r_{1}^{-},\psi)\) respectively. This procedure of recursively splitting the problem and correspondingly updating the specification tree continues until either all the specifications of the leaf nodes are verified, or a CE is found. In the final specification tree (\(T_{3}^{N}\) in this case), the leaf nodes are associated with the specifications that the analyzer could solve, and the internal nodes represent the specifications that the analyzer could not solve for network \(N\). For BaB starting from scratch, each node in the specification tree maps to a specification that invoked an analyzer call in the BaB execution. Figure 3 shows that the verifier successfully proves the property with a specification tree containing 9 nodes. Thus, the verification invokes the analyzer 9 times for computing \(LB\) and performs 4 node branchings.
Figure 4a presents the specification tree for \(N^{a}\) at the end of verifying the property \((\phi,\psi)\). Although the \(LB(C^{T}Y)\) computed by the analyzer for each node specification is different for \(N^{a}\) compared to \(N\), the final specification tree is identical for both networks. Our techniques in IVAN are motivated by our observation that the final specification trees for network \(N\) and its updated version \(N^{a}\) have structural similarities. Moreover, we find that for a DNN update that perturbs the network weights within a fixed bound, these trees are identical. We claim that there are two reasons for this: (i) the specifications that are solved by the analyzer for \(N\) are solved by the analyzer for \(N^{a}\) (specifications of the leaf nodes of the specification tree) and (ii) the specifications that are unsolved by the analyzer for \(N\) are unsolved for \(N^{a}\) (specifications of the internal nodes of the specification tree). In Section 4.4, we provide theoretical bounds on the network perturbations such that these
Figure 3: Steps in Branch and Bound algorithm for complete verification of \(N\). The nodes are labeled with a name and the \(LB_{N}(n)\). The nodes in the specification tree are annotated with their specifications. The edges are labeled with the branching predicates. Each step in BaB partitions unsolved specifications in \(T_{i}^{N}\) into specification splits in \(T_{i+1}^{N}\). The proof is complete when all specification splits corresponding to the leaf nodes are solved.
claims hold true (Theorem 4). Nevertheless, for networks obtained by perturbation beyond the theoretical bounds, the specification trees are still similar if not identical. In our evaluation, we observe this similarity for large networks with practical updates e.g., quantization (Section 6).
**Reuse:** We first introduce our concept of specification tree reuse, which uses \(T_{f}^{N}\), the final tree after verifying \(N\), as the starting tree \(T_{0}^{N^{a}}\) for the verification of \(N^{a}\). In contrast, standard BaB verification starts with a single-node tree that represents the unpartitioned initial specification \((\phi,\psi)\). In the reuse technique, IVAN starts the BaB verification of \(N^{a}\) from the leaves of \(T_{0}^{N^{a}}=T_{f}^{N}\). For our running example, analyzer \(A\) successfully verifies the \(N^{a}\) specifications for all the leaf nodes of the specification tree \(T_{0}^{N^{a}}\) (Figure 4b). We show that for any specification tree (created on the same network architecture), verifying the subproblem property on all the leaves of the specification tree is equivalent to verifying the main property \((\phi,\psi)\) (Lemma 1). Verifying the property on \(N^{a}\) from scratch requires 9 analyzer calls and 4 node branchings. However, with the reuse technique, we can prove the property with 5 analyzer calls corresponding to the leaves of \(T_{0}^{N^{a}}\) and without any node branching. Theorem 4 guarantees that the specifications of the leaf nodes are verified on \(N^{a}\) by the analyzer if the network perturbations are lower than a fixed bound. Although for larger perturbations we may have to split leaves of \(T_{0}^{N^{a}}\) further for complete verification, we empirically observe that the reuse technique is still effective for gaining speedup under most practical network perturbations.
**Reorder:** A split is more effective if it leads to fewer further subproblems that the verifier has to solve to prove the property. Finding the optimal split is expensive. Hence, the heuristic \(H\) is used to estimate the effectiveness of a split and to choose the split with the highest estimated effectiveness. Often the estimates are imprecise and lead to ineffective splits. We use \(LB_{N}(n)\) to approximately quantify the effectiveness of a split. We discuss the exact formulation of the observed effectiveness scores \(H_{obs}\) in Section 4.3. Our second concept in IVAN is based on our
Figure 4: BaB specification tree for various techniques proposed for incremental verification.
insight that if a particular branching decision is effective for verifying \(N\), then it should be effective for verifying \(N^{a}\). Likewise, if a particular branching decision is ineffective in the verification of \(N\), it should be ineffective in verifying \(N^{a}\). Based on this insight, we use the observed effectiveness scores of splits in verifying \(N\) to modify the original branching heuristic \(H\) into an improved heuristic \(H_{\Delta}\). \(H_{\Delta}\) takes a weighted sum of the original branching heuristic \(H\) and the effectiveness scores observed on \(N\), denoted by \(H_{obs}\). We formulate the effectiveness of a split and \(H_{\Delta}\) in Section 4.3. For simplicity, in the running example, we rerank the ReLUs based on the observed effectiveness of the splits as \(H_{\Delta}(r_{4})>H_{\Delta}(r_{3})>H_{\Delta}(r_{2})>H_{\Delta}(r_{1})\). Figure 4c presents the specification tree for verifying \(N^{a}\) with the updated branching heuristic \(H_{\Delta}\); it requires 5 analyzer calls and 2 node branchings. The reorder technique starts from scratch with a different branching order \(H_{\Delta}\), and it is in theory incomparable to the reuse technique. In Section 6.2, we observe that reorder works better in most experiments.
**Bringing All Together:** Our main algorithm combines our novel concepts of specification tree reuse and reorder, yielding larger speedups than possible with only reuse or reorder. Specification tree reuse and reorder are not completely orthogonal, and thus combining them is not straightforward. Since in reuse we start verifying \(N^{a}\) with the final specification tree \(T_{f}^{N}\), the splits are already performed with the original order (\(r_{1},r_{4},r_{3},r_{2}\) in our example). Our augmented heuristic function \(H_{\Delta}\) will have a limited effect if we reuse \(T_{0}^{N^{a}}=T_{f}^{N}\), since the existing tree branches may already be sufficient to prove the property.
**Constructing a Pruned Specification Tree:** It is difficult to predict the structure of the tree with augmented order. For instance, in our example, \(N\) is verified with \(r_{1},r_{4},r_{3},r_{2}\) order and we have \(T_{f}^{N}\) branched in that order. However, we cannot predict the final structure of the specification tree if branched with our augmented order \(r_{4},r_{3},r_{2},r_{1}\) without actually performing those splits from scratch (as it was done in Figure 4c).
We solve this problem with our novel pruning operation that removes ineffective splits from \(T_{f}^{N}\) and constructs a new compact tree \(T_{P}\). Figure 5 shows the construction of the pruned tree \(T_{P}\) for our running example. We remove the split \(r_{1}\) at \(n_{0}\) as it is less effective. Removing \(r_{1}\) from \(T_{3}^{N}\) also eliminates the nodes \(n_{1}\) and \(n_{2}\). The subtrees rooted at \(n_{1}\) and \(n_{2}\) are the result of split \(r_{1}\). If we undo the split \(r_{1}\) at node \(n_{0}\), then \(n_{0}\) should follow the branching decisions taken by one of its children. For this, we can choose the subtree of either \(n_{1}\) or \(n_{2}\), and attach it to \(n_{0}\). We describe the exact method of choosing which subtree to keep in Section 4.3. For this example, our approach chooses to keep the subtree of node \(n_{2}\) and eliminates the subtree at node \(n_{1}\). The pruning procedure discards entire subtrees, creating a tree with fewer leaf nodes (leaf nodes \(n_{3}\), \(n_{4}\) are deleted in the example along with internal nodes \(n_{1}\), \(n_{2}\)). Consequently, we obtain a more compact tree with only influential splits in the specification tree.
We start the verification of \(N^{a}\) from the leaf nodes of the pruned tree, i.e., \(T_{0}^{N^{a}}=T_{P}\). For our running example, the specification splits of all leaf nodes of \(T_{P}\) are verified by the analyzer and no further splitting is needed. Figure 4d presents the final specification tree in case we initialize the
Figure 5. IVAN removes the ineffective split \(r_{1}\) at \(n_{0}\) and constructs a new specification tree \(T_{P}\).
proof with the compact tree obtained from the IVAN algorithm. We show the time complexity of incremental verification in Section 4.2. For the running example, the incremental proof requires only 3 analyzer calls and no branching calls, a significant reduction compared to the 9 analyzer calls and 4 node branchings performed by the baseline starting from scratch.
## 3. Preliminaries
In this section, we provide the necessary background on complete neural network verification.
### Neural Network Verification
**Neural Networks** Neural networks are functions \(N:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{l}}\). In this work, we focus on layered neural networks obtained by a sequential composition of \(l\) layers \(N_{1}:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{1}},\ldots,N_{l}:\mathbb{R}^{n_{l-1}}\rightarrow\mathbb{R}^{n_{l}}\). Each layer \(N_{i}\) applies an _affine function_ (convolution or linear function) followed by a non-linear activation function to its input. Common choices for the non-linear activation function are ReLU, sigmoid, and tanh; \(ReLU(x)=\max(0,x)\) is the most commonly used. In Section 4, we focus on the most common BaB verifiers that partition the problems using ReLU splitting in ReLU networks. The \(i\)-th layer \(N_{i}:\mathbb{R}^{n_{i-1}}\rightarrow\mathbb{R}^{n_{i}}\) is then defined as \(N_{i}(X)=ReLU(A_{i}X+B_{i})\) for \(i\in[l]\).
At a high level, neural network verification involves proving that all network outputs corresponding to a chosen set of inputs satisfying the input specification \(\phi\) satisfy a given logical property \(\psi\). We first define the input and output specifications that we consider in this work:
Definition 1 (Input specification).: _For a neural network \(N:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{I}}\), \(\phi_{t}\) is a connected region and \(\phi_{t}\subseteq\mathbb{R}^{n_{0}}\). **Input specification \(\phi:\mathbb{R}^{n_{0}}\rightarrow\{true,false\}\)** is a predicate over the input region \(\phi_{t}\)._
Definition 2 (Output specification).: _For a neural network with \(n_{l}\) neurons in the output layer, an **output specification \(\psi:\mathbb{R}^{n_{l}}\rightarrow\{true,false\}\)** is a predicate over the output region._
The output property \(\psi\) could be any logical statement taking a truth value true or false. In our paper, we focus on properties that can be expressed as Boolean expressions over linear forms. Most DNN verification works consider such properties.
\[\psi(Y)=(C^{T}Y\geq 0) \tag{1}\]
We next define the verification problem solved by the verifiers:
Definition 3 (Verification Problem).: _The **neural network verification** problem for a neural network \(N\), an input specification \(\phi\) and a logical property \(\psi\) is to prove whether \(\forall X\in\phi_{t}\). \(\psi(N(X))=true\) or provide a counterexample otherwise._
A complete verifier always verifies the property if it holds or returns a counterexample otherwise. Formally, it can be defined as:
Definition 4 (Complete Verifier).: _A **complete verifier \(V\)** for an input specification \(\phi\), a neural network \(N\), an output property \(\psi\) satisfies the following property:_
\[V(\phi,\psi,N)=\text{Verified}\Longleftrightarrow\forall X\in\phi_{t}.\psi(N (X))=true\]
### Branch and Bound for Verification
In this section, we discuss branch and bound techniques for the complete verification of DNNs. The BaB approach uses a divide-and-conquer algorithm to compute \(LB(C^{T}Y)\) for proving \(C^{T}Y\geq 0\) (Eq. 1). We next discuss the bounding and branching steps in BaB techniques.
**Bounding:** The bounding step uses an analyzer to find a lower bound \(LB(C^{T}Y)\). In complete verifiers, the analyzers are exact for linear functions (e.g., DeepZ [Singh et al.2018], DeepPoly [Singh et al.2019b]). However, they over-approximate the non-linear activation function through a convex over-approximation. We define these sound analyzers as:
**Definition 5** (Sound Analyzer): _A **sound analyzer**\(A\) on an input specification \(\phi\), a DNN \(N\), an output property \(\psi\) returns Verified, Unknown, or Counterexample. It satisfies the following properties:_
\[A(\phi,\psi,N) =\text{Verified}\implies\forall X\in\phi_{t}.\psi(N(X))=\text{ true}\] \[A(\phi,\psi,N) =\text{Counterexample}\implies\exists X\in\phi_{t}.\psi(N(X))= \text{false}\]
**Branching:** If the analyzer cannot prove a property, the BaB verifier partitions the problem into easier subproblems to improve the analyzer's precision. Algorithm 1 presents the pseudocode for BaB verification. The algorithm maintains an \(Unsolved\) list of problems that are currently neither proved nor disproved. It initializes the list with the main verification problem. Line 5 performs the bounding step using the analyzer \(A\). For simplicity, we abuse notation and write \(A(prob,N)\) for the analyzer output instead of \(A(\phi,\psi,N)\); here, \(prob\) encapsulates the input and output specifications \(\phi,\psi\). Line 13 partitions an unsolved problem into subproblems. The algorithm halts when either \(A\) finds a counterexample on one of the subproblems or the list of unsolved problems is empty. There are two common branching strategies for BaB verification, input splitting and ReLU splitting, which we describe next.
```
1:  function BaB(N, problem)
2:    Unsolved ← [problem]
3:    while Unsolved is not empty do
4:      for prob ∈ Unsolved do
5:        status[prob] = A(prob, N)              ▷ Bounding step
6:      for prob ∈ Unsolved do
7:        if status[prob] = Verified then
8:          Unsolved.remove(prob)                ▷ Remove verified subproblems
9:        if status[prob] = Counterexample then
10:         return Counterexample for prob       ▷ Return if a counterexample is found
11:        if status[prob] = Unknown then
12:          Unsolved.remove(prob)
13:          [subp₁, subp₂] ← split(prob)         ▷ Branching step
14:          Unsolved.insert(subp₁, subp₂)
15:    return Verified
```
**Algorithm 1** Branch and Bound
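For readers who prefer executable code, a direct Python transcription of Algorithm 1 could look as follows; `analyze` and `split` stand in for the analyzer \(A\) and the branching step, and are assumptions of this sketch rather than IVAN's actual interface.

```python
VERIFIED, COUNTEREXAMPLE, UNKNOWN = "Verified", "Counterexample", "Unknown"

def bab(network, problem, analyze, split):
    """Branch and bound: analyze(prob, net) returns a status (bounding step);
    split(prob) returns two subproblems (branching step)."""
    unsolved = [problem]
    while unsolved:
        # Bounding step: run the analyzer on every open subproblem.
        statuses = [(prob, analyze(prob, network)) for prob in unsolved]
        next_round = []
        for prob, status in statuses:
            if status == VERIFIED:
                continue  # subproblem proved; drop it from the list
            if status == COUNTEREXAMPLE:
                return COUNTEREXAMPLE, prob  # genuine counterexample found
            # Unknown: the branching step partitions the subproblem.
            next_round.extend(split(prob))
        unsolved = next_round
    return VERIFIED, None
```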
**Input Splitting:** In input splitting, the input region \(\phi_{t}\) for verification is partitioned. The typical choice is to cut a selected input dimension in half while the rest of the dimensions are unchanged. The dimension to cut is decided by the branching strategy used. This technique is known to be \(\delta\)-complete for any activation function [Anderson et al.2019], but does not scale to high-dimensional input spaces. In many computer vision tasks, the input is an image with thousands of pixels; a high-dimensional perturbation region on such an input cannot be branched efficiently for fast verification.
**ReLU Splitting:** State-of-the-art techniques that focus on verifying DNNs with high-dimensional inputs and ReLU activations use ReLU splitting. We denote a ReLU unit at the \(i\)-th layer and \(j\)-th index as a function \(x_{i,j}=\max(\hat{x}_{i,j},0)\), where \(\hat{x}_{i,j}\) and \(x_{i,j}\) are the pre-activation and post-activation values respectively. The analyzer computes lower bounds \(lb\) and upper bounds \(ub\) for each intermediate
variable in the DNN. If \(lb(\hat{x}_{i,j})\geq 0\), then the ReLU unit simply acts as the identity function \(x_{i,j}=\hat{x}_{i,j}\). If \(ub(\hat{x}_{i,j})\leq 0\), then the ReLU unit operates as a constant function \(x_{i,j}=0\). In both of these cases, the ReLU unit is a linear function. However, if \(lb(\hat{x}_{i,j})<0<ub(\hat{x}_{i,j})\), we cannot linearize the ReLU function exactly. We call such ReLU units ambiguous ReLUs. In ReLU splitting, the unsolved problem is partitioned into two subproblems such that one subproblem assumes \(\hat{x}_{i,j}<0\) and the other assumes \(\hat{x}_{i,j}\geq 0\). This partition allows us to linearize the ReLU unit in both subproblems, leading to a boost in the overall precision of the analyzer. The heuristic used for selecting which ReLU to split significantly impacts the verifier's speed.
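As a small illustration of this case analysis, the sketch below classifies a ReLU unit from its pre-activation bounds; the function name and the bound representation are ours.

```python
def relu_status(lb, ub):
    """Classify a ReLU x = max(xhat, 0) given pre-activation bounds [lb, ub]."""
    if lb >= 0:
        return "active"     # behaves as the identity: x = xhat
    if ub <= 0:
        return "inactive"   # behaves as the constant function: x = 0
    return "ambiguous"      # needs a convex relaxation, or a ReLU split

# Only ambiguous units are candidates for ReLU splitting:
bounds = {"r1": (-1.0, 2.0), "r2": (0.5, 3.0), "r3": (-2.0, -0.1)}
candidates = [r for r, (lb, ub) in bounds.items()
              if relu_status(lb, ub) == "ambiguous"]
print(candidates)  # ['r1']
```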
**BaB for Other Activation Functions:** BaB-based verification can work with the most commonly used activation functions (\(\tanh\), sigmoid, leaky ReLU).
1. For piecewise linear activation functions such as leaky ReLU, activation splitting approaches (e.g., ReLU splitting) can be used for complete verification.
2. For other activation functions (\(\tanh\), sigmoid), BaB with activation splitting cannot yield complete verification but can be used to improve the precision of sound and incomplete verification (Dutta et al., 2017; Muller et al., 2021).
3. Although input splitting is less efficient in the aforementioned cases for high dimensional DNN inputs, it can be applied with any activation function (\(\tanh\), sigmoid, ReLU, leaky ReLU).
## 4. Incremental Verification
In this section, we describe our main technical contributions and the IVAN algorithm. We first formally define the specification tree structure used for incremental verification (Section 4.1). Next, we formulate the problem of incremental verification (Section 4.2). In Section 4.3, we illustrate the techniques used in our algorithm. We characterize the effectiveness of our technique by computing a class of networks for which our incremental verification is efficiently applicable in Section 4.4.
### Specification Tree for BaB
IVAN uses the specification tree to store the trace of splitting decisions that the BaB verifier makes on its execution. A specification tree can be used for any BaB branching method (e.g, input splitting), but without loss of generality, our discussion focuses on ReLU splitting. Let \(\mathcal{N}\) denote the class of networks with the same architecture, and let \(\mathcal{R}\) denote the set of ReLUs in this architecture. The specification tree captures the ReLU splitting decisions and the split specifications in the execution of BaB for a property \((\phi,\psi)\), where we define \((\phi,\psi)\coloneqq\phi\rightarrow\psi\).
For a ReLU \(r_{i}\) with input \(\hat{x}_{i}\), let \(r^{+}_{i}\coloneqq(\hat{x}_{i}\geq 0)\) and \(r^{-}_{i}\coloneqq(\hat{x}_{i}<0)\). We define a split decision as:
Definition 6 (Split Decision).: _For a ReLU \(r\in\mathcal{R}\), a split decision is \(r^{?}\in\{r^{+},r^{-}\}\) where \(r^{?}\) is assigned the predicate \(r^{+}\) or \(r^{-}\)._
A specification split of \((\phi,\psi)\) is a specification stronger than \((\phi,\psi)\) parameterized by the subset of ReLUs in \(\mathcal{R}\) and the corresponding split decisions. Formally,
Definition 7 (Specification Split).: _For a set of ReLUs \(\mathcal{R}^{\prime}=\{r_{1},r_{2}\ldots r_{k}\}\subseteq\mathcal{R}\), and ReLU split decision \(r^{?}_{i}\in\{r^{-}_{i},r^{+}_{i}\}\) for each \(r_{i}\), the corresponding specification split of \((\phi,\psi)\) is \((\phi\wedge r^{?}_{1}\wedge r^{?}_{2}\wedge\ldots r^{?}_{k},\psi)\)._
Since \(\emptyset\subseteq\mathcal{R}\), \((\phi,\psi)\) is a split specification of itself. Let \(\mathcal{S}\) denote the set of specification splits that can be obtained from \((\phi,\psi)\). Each node \(n\) in the tree encodes a specification split in \(\mathcal{S}\). Each edge in the specification tree is labeled with a ReLU split decision \(r^{?}\). Let \(\textit{Nodes}(T)\) denote the nodes of the tree \(T\) and \(\textit{Leaves}(T)\) denote the leaves of the tree \(T\).
**Mapping Nodes to Specification Splits:** The specification associated with the root node is \((\phi,\psi)\). The function \(\textit{Children}(n)\) maps a node \(n\) to either the pair of its children or \(\emptyset\) if \(n\) has no children. If \(n_{l}\) and \(n_{r}\) are the children of node \(n\) and \(\varphi_{n}=(\phi^{\prime},\psi)\) is the specification split at \(n\), then \(\varphi_{n_{l}}=(\phi^{\prime}\wedge r^{+},\psi)\)
and \(\varphi_{n_{r}}=(\phi^{\prime}\wedge r^{-},\psi)\). For the specifications \(\varphi_{n},\varphi_{n_{l}},\varphi_{n_{r}}\) the following statement holds:

\[(\varphi_{n_{l}}\wedge\varphi_{n_{r}})\Longleftrightarrow\varphi_{n} \tag{2}\]
This relationship implies that verifying the parent node specification is equivalent to verifying the two children node's specifications. Formally, we can now define the specification tree as:
Definition 8 (Specification tree).: _Given a set of ReLUs \(\mathcal{R}\), a rooted full binary tree \(T\) is a **specification tree** if, for every internal node \(n\in\mathit{Nodes}(T)\) with children \(n_{l},n_{r}\in\mathit{Children}(n)\), the edge \((n,n_{l})\) is labeled with predicate \(r^{+}\) and the edge \((n,n_{r})\) is labeled with predicate \(r^{-}\), for some \(r\in\mathcal{R}\)._
```
1: function Split(T, n, r)
2:   Input: specification tree T, a leaf node n ∈ Leaves(T), a ReLU r ∈ R to split on
3:   Output: the newly added nodes
4:   n_l ← Add_Child(n, r⁺)
5:   n_r ← Add_Child(n, r⁻)
6:   return n_l, n_r
```
**Algorithm 2** _Split_ operation
BaB uses a branching function \(H\) for choosing the ReLU to split. We define this branching function in terms of the node \(n\) of the specification tree as:
Definition 9 (Branching Heuristic).: _Given a set of ReLUs \(\mathcal{R}\), a network \(N\), and a node \(n\) in the specification tree, let \(\mathcal{P}\subseteq\mathcal{R}\) denote the set of ReLUs split on the path from the root to \(n\). A **branching heuristic** \(H(N,n,r)\) assigns a score to each unsplit ReLU \(r\in\mathcal{R}\setminus\mathcal{P}\), estimating the effectiveness of splitting \(r\) at \(n\)._
We next state the split operation on a specification tree. Algorithm 2 presents the steps in the split operation.
\(\bullet\)_Split Operation_: Every ReLU split adds two nodes to the specification tree at a given leaf node \(n\). The BaB algorithm chooses the ReLU \(\arg\max_{r\in\mathcal{R}/\mathcal{P}}H(N,n,r)\) to split at node \(n\) using the heuristic function.
### Incremental Verification: Problem Formulation
Given a set of networks \(\mathcal{N}\) with the same architecture and a set of ReLUs \(\mathcal{R}\), let \(\mathcal{T}_{\mathcal{N}}\) be the set of all specification trees defined over \(\mathcal{R}\). There exists a partial order (\(<\)) on \(\mathcal{T}_{\mathcal{N}}\) through the standard subgraph relation. BaB execution on a network \(N\in\mathcal{N}\) traces a sequence of trees \(T_{0},T_{1},\ldots,T_{f}\in\mathcal{T}_{\mathcal{N}}\) such that \(T_{i}<T_{i+1}\). It halts with the final tree \(T_{f}\) when it either verifies the property or finds a counterexample. The construction of \(T_{i+1}\) from \(T_{i}\) depends on the branching function \(H\) (Definition 9).
**Incremental Verification:** The incremental verification problem is to efficiently reuse the information from the execution of verification of network \(N\) for the faster verification of its updated version \(N^{a}\). Standard BaB for verification of \(N^{a}\) starts with a single node tree while the incremental verifier starts with a tree \(T_{0}^{N^{a}}\in\mathcal{T}_{\mathcal{N}}\) that is not restricted to be a tree with a single node. We modify the final specification tree \(T_{f}^{N}\) from the verification of \(N\) to construct \(T_{0}^{N^{a}}\). The branching heuristic \(H_{\Delta}\) for incremental verification is derived from the branching heuristic \(H\) based on the efficacy of various branching decisions made during the proof for \(N\). Formally, the complete incremental verifier we propose is defined as:
Definition 10 (Complete and Incremental Verifier).: _A **complete and incremental verifier** \(V_{\Delta}\) takes a neural network \(N^{a}\), an input specification \(\phi\), an output property \(\psi\), an analyzer \(A\), the branching heuristic \(H_{\Delta}\), and the initial tree \(T_{0}^{N^{a}}\). \(V_{\Delta}(N^{a},\phi,\psi,T_{0}^{N^{a}},H_{\Delta})\) returns Verified if \(N^{a}\) satisfies the property \((\phi,\psi)\); otherwise, it returns a Counterexample._
Algorithm 3 presents the incremental verifier algorithm for verifying the perturbed network. It takes \(H_{\Delta}\) and \(T_{0}^{N^{a}}\) as input. It maintains a list of active nodes which are the nodes corresponding to the specifications that are yet to be checked by the analyzer. It initializes the list of active nodes with leaves of tree \(T_{0}^{N^{a}}\) (line \(2\)). The main loop runs until the active list is empty (line \(3\)) or it
discovers a counterexample (line 9). At each iteration, it runs the analyzer on each node in the active list (line 5). The nodes that are _Verified_ are removed from the list (line 8), whereas the nodes that result in _Unknown_ are split. The new children are added to the active list (line 12).
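A minimal Python rendering of this loop, reusing the `Node` sketch shown earlier and the `analyze` interface of the BaB sketch, might look as follows; it is an illustration under those assumptions, not IVAN's code.

```python
def leaves(node):
    """Yield the leaves of the specification tree rooted at `node`."""
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from leaves(child)

def incremental_bab(network_a, tree0_root, analyze, choose_relu):
    """Verify N^a starting from the leaves of T_0^{N^a} (Algorithm 3, sketched).
    choose_relu(node) plays the role of arg max over H_delta."""
    active = list(leaves(tree0_root))  # specifications still to be checked
    while active:
        next_active = []
        for node in active:
            status = analyze(node, network_a)
            if status == "Verified":
                continue  # remove from the active list
            if status == "Counterexample":
                return "Counterexample", node
            # Unknown: split the leaf; both children are checked next round.
            next_active.extend(node.split(choose_relu(node)))
        active = next_active
    return "Verified", None
```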
**Optimal Incremental Verification:** We define the partial function \(\mathit{Time}_{\Delta}:\mathcal{T}_{\mathcal{N}}\times\mathcal{T}_{\mathcal{N}}\rightarrow\mathbb{R}\), where \(\mathit{Time}_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})\), for a fixed complete incremental verifier \(V_{\Delta}\), is the time taken by \(V_{\Delta}\) when it starts from \(T_{0}^{N^{a}}\) and halts with the final tree \(T_{f}^{N^{a}}\). \(\mathit{Time}_{h}(H,H_{\Delta})\) and \(\mathit{Time}_{t}(T_{f}^{N},T_{0}^{N^{a}})\) are the times for constructing \(H_{\Delta}\) from \(H\), and \(T_{0}^{N^{a}}\) from \(T_{f}^{N}\), respectively. We pose the optimal incremental verification problem as an optimization problem of finding the best \(H_{\Delta},T_{0}^{N^{a}}\) such that the time of incremental verification is minimized. Formally, we state the problem as:

\[\operatorname*{arg\,min}_{H_{\Delta},T_{0}^{N^{a}}}\left[\mathit{Time}_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})+\mathit{Time}_{h}(H,H_{\Delta})+\mathit{Time}_{t}(T_{f}^{N},T_{0}^{N^{a}})\right] \tag{3}\]

The search space for \(T_{0}^{N^{a}}\) is exponential in terms of \(\mathcal{R}\), and the search space for \(H_{\Delta}\) is infinite. Further, \(\mathit{Time}_{\Delta}\) is a complicated function of \(H_{\Delta},T_{0}^{N^{a}}\) that does not have a closed-form formulation. As a result, it is not possible to find an optimal solution.
**Simplifying Assumptions:** To make the problem tractable, we make the simplifying assumption that for all networks with the same architecture, each branching and bounding step takes constant time \(t_{H}\) and \(t_{A}\) respectively on each invocation. We can now compute \(\mathit{Time}_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})\) as:

Theorem 4.1: _(\(\mathit{Time}_{\Delta}\) for incremental verification). If the incremental verifier \(V_{\Delta}\) halts with the final tree \(T_{f}^{N^{a}}\), then \(\mathit{Time}_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})=(t_{A}+t_{H})\cdot\left(|\mathit{Nodes}(T_{f}^{N^{a}})|+\frac{1-|\mathit{Nodes}(T_{0}^{N^{a}})|}{2}\right)-t_{H}\cdot|\mathit{Leaves}(T_{f}^{N^{a}})|\)._
The proof of the theorem is in Appendix 9.2.
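As a sanity check on the running example: with reuse and no further splits, \(T_{0}^{N^{a}}=T_{f}^{N}\) and \(T_{f}^{N^{a}}\) both have 9 nodes, and \(T_{f}^{N^{a}}\) has 5 leaves, so the theorem above gives

\[\mathit{Time}_{\Delta}=(t_{A}+t_{H})\cdot\left(9+\frac{1-9}{2}\right)-t_{H}\cdot 5=5\,t_{A},\]

which matches the 5 analyzer calls and zero branchings observed for reuse in Section 2.2.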
In this work, we focus on a class of algorithms for which the preprocessing times \(\mathit{Time}_{h}(H,H_{\Delta})\) and \(\mathit{Time}_{t}(T_{f}^{N},T_{0}^{N^{a}})\) are \(\ll\mathit{Time}_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})\). Furthermore, we also focus on branching heuristics used in practice, where \(t_{H}\ll t_{A}\). Equation 3 then simplifies to finding \(H_{\Delta}\) and \(T_{0}^{N^{a}}\) such that the expression \(\mathit{Time}_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})=t_{A}\cdot\left(|\mathit{Nodes}(T_{f}^{N^{a}})|+\frac{1-|\mathit{Nodes}(T_{0}^{N^{a}})|}{2}\right)\) is minimized. Rewriting and ignoring the constant term, we get
\[Time_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})=t_{A}\cdot\left(\frac{|Nodes(T_{f}^{N ^{a}})|-|Nodes(T_{0}^{N^{a}})|}{2}+\frac{|Nodes(T_{f}^{N^{a}})|}{2}\right) \tag{4}\]
### IVAN Algorithm for Incremental Verification
We describe the novel components of our algorithm and present the full workflow in Algorithm 5. Our first technique, reuse, focuses on minimizing \(|\mathit{Nodes}(T_{f}^{N^{a}})|-|\mathit{Nodes}(T_{0}^{N^{a}})|\) in Equation 4. Our second technique, reorder, focuses on minimizing \(|\mathit{Nodes}(T_{f}^{N^{a}})|\). The \(H_{\Delta},T_{0}^{N^{a}}\) obtained by reuse and reorder are distinct. The IVAN algorithm combines these distinct solutions to reduce \(\mathit{Time}_{\Delta}(T_{0}^{N^{a}},T_{f}^{N^{a}})\).
**Reuse:** This technique is based on the observation that the BaB specification trees should be similar for small perturbations in the network. Accordingly, in the method, we use the final specification tree for \(N\) as the initial tree for the verification of \(N^{a}\) i.e. \(T_{0}^{N^{a}}=T_{f}^{N}\), and keep the \(H_{\Delta}=H\) unchanged. We formally characterize a set of networks obtained by small perturbation for which \(T_{0}^{N^{a}}=T_{f}^{N}\) is sufficient for verifying \(N^{a}\) without any further splitting in Section 4.4.
**Reorder:** The reorder technique focuses on improving the branching heuristic \(H\) such that it reduces \(|\mathit{Nodes}(T_{f}^{N^{a}})|\); here, \(T_{0}^{N^{a}}\) is a single-node tree with \(n_{0}\) encoding the specification \((\phi,\psi)\). If we start with \(T_{0}^{N^{a}}=T_{f}^{N}\), then \(|\mathit{Nodes}(T_{f}^{N^{a}})|\) is at least \(|\mathit{Nodes}(T_{f}^{N})|\); thus, we start \(T_{0}^{N^{a}}\) from scratch, allowing the technique to minimize \(|\mathit{Nodes}(T_{f}^{N^{a}})|\). We create a branching function \(H_{\Delta}\) from \(H\) with the following two changes: (i) the splits that worked effectively for the verification of \(N\) should be prioritized; (ii) the splits that were not effective should be deprioritized. To formalize the effectiveness of splits, we define \(LB_{N}(n)\) as the lower bound computed by the analyzer \(A\) on the network \(N\) for proving the property \(\varphi_{n}\) encoded by the node \(n\). Further, using the function \(LB_{N}\), we define an improvement function \(I_{N}\) that represents the effectiveness of a ReLU split at a specific node as:
\[I_{N}(n,r)=\min(LB_{N}(n_{r})-LB_{N}(n),LB_{N}(n_{l})-LB_{N}(n)) \tag{5}\]
where \(n_{l},n_{r}\in\mathit{Children}(n)\) in the specification tree \(T_{f}^{N}\). We use \(I_{N}\) to define the observed effectiveness \(H_{obs}(r)\) of a split \(r\) over the entire specification tree for \(N\). It is defined as the mean of the improvement over each node where split \(r\) was made. Let \(Q\subset\mathit{Nodes}(T_{f}^{N})\) denote the set of nodes where split \(r\) was made. Then,
\[H_{obs}(r)=\frac{\sum_{n\in Q}I_{N}(n,r)}{|Q|}. \tag{6}\]
Using the \(H_{obs}(r)\) score we update the existing branching function as:
\[H_{\Delta}(n,r)=\alpha\cdot H(n,r)+(1-\alpha)\cdot(H_{obs}(r)-\theta). \tag{7}\]
Here, we introduce two hyperparameters \(\alpha\) and \(\theta\). The hyperparameter \(\alpha\in[0,1]\) controls the relative importance of the original heuristic score and the improvement observed during the verification of \(N\). If \(\alpha=1\), then \(H_{\Delta}\) depends only on the original branching heuristic score; if \(\alpha=0\), it relies fully on the observed split scores. The hyperparameter \(\theta\) ensures that \(H_{\Delta}\) increases the score of splits \(r\) with \(H_{obs}(r)>\theta\) and decreases the score of splits with \(H_{obs}(r)<\theta\).
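To make Equations 6 and 7 concrete, the following minimal Python sketch (the data layout and names are ours, not IVAN's actual interface) computes \(H_{obs}\) from a recorded specification tree and builds the updated heuristic; splits never observed on \(N\) fall back to the original heuristic score.

```python
from collections import defaultdict

def observed_effectiveness(split_at, improvement):
    """H_obs(r) (Equation 6): mean improvement I_N(n, r) over all nodes n
    where r was split. `split_at` maps each internal node n to its split r;
    `improvement` maps (n, r) to I_N(n, r) from Equation 5."""
    per_split = defaultdict(list)
    for n, r in split_at.items():
        per_split[r].append(improvement[(n, r)])
    return {r: sum(v) / len(v) for r, v in per_split.items()}

def updated_heuristic(H, H_obs, alpha, theta):
    """H_Delta(n, r) (Equation 7). For a split r never observed on N, the
    observed term defaults to 0, so H_Delta reduces to alpha * H(n, r)."""
    def H_delta(n, r):
        return alpha * H(n, r) + (1 - alpha) * (H_obs.get(r, theta) - theta)
    return H_delta
```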
**Constructing a Pruned Specification Tree:** The two reordering goals of prioritizing effective splits and deprioritizing ineffective ones are difficult to combine with reuse. However, instead of starting from scratch, we can construct a specification tree \(T_{P}\) from \(T_{f}^{N}\) that excludes the ineffective splits. For \(n\in\textit{Nodes}(T_{f}^{N})\), where ReLU \(r\) splits \(n\), we denote the set of bad splits as the set \(\mathcal{B}(T_{f}^{N})\) of pairs \((n,r)\) such that the improvement score \(I_{N}(n,r)\leq\theta\). For \((n,r)\in\mathcal{B}(T_{f}^{N})\), while constructing the pruned tree, our algorithm chooses a child \(n_{k}\) of \(n\). If a ReLU \(r_{k}\) is split at \(n_{k}\) in \(T_{f}^{N}\), it performs the split \(r_{k}\) at the corresponding node in \(T_{P}\), skipping over the bad split \(r\). The subtree corresponding to the other child \(n_{k^{\prime}}\) is eliminated and not added to the pruned tree. We choose \(n_{k}\) such that:
\[n_{k}=\operatorname*{arg\,min}_{n_{u}\in\textit{Children}(n)}LB_{N}(n_{u}) -LB_{N}(n) \tag{8}\]
We choose \(n_{k}\) over \(n_{k^{\prime}}\) since \(LB_{N}(n)\) is closer to \(LB_{N}(n_{k})\) than to \(LB_{N}(n_{k^{\prime}})\). Further, combining Equations 5 and 8, we can show \((LB_{N}(n_{k})-LB_{N}(n))<\theta\), i.e., their difference is bounded. We anticipate that, upon omission of the split \(r\), the subtree corresponding to \(n_{k}\) is a better match for the necessary branching decisions following \(n\) than the one corresponding to \(n_{k^{\prime}}\).
Algorithm 4 presents the top-down construction of \(T_{P}\) (a compact Python sketch of this construction is given after the pseudocode). The algorithm starts from the root of \(T_{f}^{N}\) and recursively traverses through the children, constructing \(T_{P}\). It maintains a queue \(Q\) of nodes yet to be explored and a map \(M\) that maps nodes from the tree \(T_{f}^{N}\) to the corresponding new nodes in \(T_{P}\). At a node \(n\), if \((n,r)\) is not a bad split, it performs the split \(r\) at the corresponding mapping \(\hat{n}\). Otherwise, if \(r_{k}\) is the split at \(n_{k}\), it skips over \(r\) and performs the split \(r_{k}\) at \(\hat{n}\). The newly created children from a split of \(\hat{n}\) are associated with the children of \(n_{k}\) using \(M\). The children of \(n_{k}\) are added to \(Q\), and they are recursively processed in the next iteration for further constructing \(T_{P}\). \(T_{P}\) is still a specification tree satisfying Definition 4.1 by construction. The specification \(\varphi_{n}\) of a node \(n\) in \(T_{P}\) can be constructed using the path from the root to \(n\).
```
1:Input: specification tree \(T_{f}^{N}\), hyperparameter \(\theta\)
2:Output: pruned tree \(T_{P}\)
3:\(n_{root}\leftarrow\) root of \(T_{f}^{N}\), \(\hat{n}_{root}\leftarrow\) copy of \(n_{root}\)
4:\(T_{P}\leftarrow\) Initialize a new tree with \(\hat{n}_{root}\)
5:\(Q\leftarrow\) Initialize list with \(n_{root}\)
6:\(M\leftarrow\) Initialize an empty map
7:\(M[n_{root}]\leftarrow\hat{n}_{root}\)
8:while \(Q\) is not empty do
9:  \(n\leftarrow Q.pop()\); \(r\leftarrow\) split at node \(n\); \(\hat{n}\leftarrow M[n]\)
10:  if \(I_{N}(n,r)\leq\theta\) then
11:    \(n_{k}\leftarrow\operatorname*{arg\,min}_{n_{u}\in\textit{Children}(n)}LB_{N}(n_{u})-LB_{N}(n)\)
12:    \(r_{k}\leftarrow\) split at node \(n_{k}\)
13:    \(n_{l},n_{r}\leftarrow n_{k}.children\); \(\hat{n}_{l},\hat{n}_{r}\leftarrow Split(T_{P},\hat{n},r_{k})\)
14:    \(M[n_{l}]\leftarrow\hat{n}_{l};M[n_{r}]\leftarrow\hat{n}_{r}\)
15:    \(Q.push(n_{l});Q.push(n_{r})\)
16:  else
17:    \(n_{l},n_{r}\leftarrow n.children\); \(\hat{n}_{l},\hat{n}_{r}\leftarrow Split(T_{P},\hat{n},r)\)
18:    \(M[n_{l}]\leftarrow\hat{n}_{l};M[n_{r}]\leftarrow\hat{n}_{r}\)
19:    \(Q.push(n_{l});Q.push(n_{r})\)
20:return \(T_{P}\)
```
**Algorithm 4** Creating a Pruned Tree
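For concreteness, here is a compact Python sketch of this construction (the `Node` class and the `improvement`/`lb` dictionaries are our assumed data structures, not IVAN's actual API); it mirrors the queue-based traversal of Algorithm 4 and skips one bad split at a time.

```python
from collections import deque

class Node:
    def __init__(self):
        self.split = None    # ReLU split performed at this node (None = leaf)
        self.children = []   # [left child, right child]

def pruned_tree(root, improvement, lb, theta):
    """Top-down copy of T_f^N that skips a bad split (improvement <= theta)
    by descending into the child whose lower bound is closest to its
    parent's (Equation 8)."""
    root_hat = Node()
    mapping = {root: root_hat}          # M: nodes of T_f^N -> nodes of T_P
    queue = deque([root])               # Q: nodes of T_f^N left to process
    while queue:
        n = queue.popleft()
        if n.split is None:             # leaf of T_f^N: nothing to copy
            continue
        n_hat = mapping[n]
        if improvement[(n, n.split)] <= theta:
            # Bad split: pick n_k per Equation 8 and skip over split r.
            n_k = min(n.children, key=lambda c: lb[c] - lb[n])
            if n_k.split is None:       # nothing below n_k to transplant
                continue
            src, r = n_k, n_k.split
        else:
            src, r = n, n.split
        n_hat.split = r                 # perform the split at the copy
        n_hat.children = [Node(), Node()]
        for child, child_hat in zip(src.children, n_hat.children):
            mapping[child] = child_hat
            queue.append(child)
    return root_hat
```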
**Input:** Original network \(N\),
Perturbed network \(N^{a}\),
property (\(\phi\), \(\psi\)),
analyzer \(A\),
branching heuristic \(H\),
hyperparameters
\(\alpha\) and \(\theta\),
incremental verifier \(V_{\Delta}\)
**Output:** Verification result for \(N\) and \(N^{a}\)
```
1:resultN, \(T_{f}^{N}\gets V(N,\phi,\psi,H)\)
2:\(T_{0}^{N^{a}}\leftarrow\) PrunedTree(\(T_{f}^{N},\theta\))
3:\(H_{\Delta}\leftarrow\) UpdateH(\(H,T_{f}^{N},\theta,\alpha\))
4:\(resultN^{a}\leftarrow V_{\Delta}(N^{a},\phi,\psi,T_{0}^{N^{a}},H_{\Delta})\) \(\triangleright\) Incremental verification step, calls Algorithm 3
5:return \(resultN\), \(resultN^{a}\)
```
**Algorithm 5** Incremental Verification Algorithm
**Main algorithm:** Algorithm 5 presents IVAN's main algorithm for incremental verification, which combines all the aforementioned techniques. It takes as inputs the original network \(N\), a perturbed network \(N^{a}\), an input specification \(\phi\), and an output property \(\psi\). It prunes the final tree \(T_{f}^{N}\) obtained in the verification of \(N\) to construct \(T_{0}^{N^{a}}\) (line 2). It computes the updated branching heuristic \(H_{\Delta}\) using Equation 7 (line 3). It then uses \(T_{0}^{N^{a}}\) and \(H_{\Delta}\) to perform fast incremental verification of the network \(N^{a}\) (line 4).
We next state a lemma establishing that verifying the property \((\phi,\psi)\) is equivalent to verifying the specifications encoded by all the leaves.
Lemma 1 ().: _The specifications encoded by the leaf nodes of a specification tree \(T\) maintain the following invariance._
\[\left(\bigwedge_{n\in Leaves(T)}\varphi_{n}\right)\Longleftrightarrow(\phi\rightarrow\psi)\]
We next use the lemma to prove the soundness and completeness of our algorithm. All the proofs are in Appendix 9.2.
Theorem 2 (Soundness of Verification Algorithm).: _If Algorithm 5 verifies the property \((\phi,\psi)\) for the network \(N^{a}\), then the property must hold._
Theorem 3 (Completeness of Verification Algorithm).: _If for the network \(N^{a}\), the property \((\phi,\psi)\) holds then Algorithm 5 always terminates and produces Verified as output._
**Scope of IVAN:** IVAN utilizes the specification tree to store the trace of the BaB proof. The IVAN algorithm enhances this tree by reusing and refining it to enable faster BaB proof of updated networks. Our paper focuses on using IVAN to verify ReLU networks with BaB that implements ReLU splitting. However, we expect that IVAN's principles can be extended to networks with other activation functions (tanh, sigmoid, leaky ReLU) for which BaB has been applied for verification.
### Network Perturbation Bounds
In this section, we formally characterize a class of perturbations on a network \(N\) for which our proposed "Reuse" technique attains the maximum possible speedup. Specifically, we focus on modifications affecting only the last layer, which represent many practical network perturbations (e.g., transfer learning, fine-tuning). The last-layer modification assumption is only for our theoretical results in this section; our experiments make no such assumption and consider perturbations applied across the original network.
We leave the derivation of perturbation bounds corresponding to the full IVAN to future work as it requires theoretically modeling the effect of arbitrary network perturbations on DNN output as well as complex interactions between "Reuse" and "Reorder" techniques. Given a specification tree \(T\) and network architecture \(\mathcal{N}\), we identify a set of neural networks \(\mathbf{C}_{T}(\mathcal{N})\) such that any network \(N^{a}\in\mathbf{C}_{T}(\mathcal{N})\) can be verified by reusing \(T\).
We assume the weights are changed by the weight perturbation matrix \(\mathcal{E}\). If \(N_{l}=ReLU(A_{l}\cdot X+B_{l})\), then the last layer of \(N^{a}\) is \(N_{l}^{a}=ReLU((A_{l}+\mathcal{E})\cdot X+B_{l})\).
Definition 11 (Last Layer Perturbed Network).: _Given a network \(N\) with architecture \(\mathcal{N}\), the set of last layer perturbed networks is \(\mathcal{M}(N,\delta)\subseteq\mathcal{N}\), such that if \(N^{a}\in\mathcal{M}(N,\delta)\) then \((\forall i\in[l-1])\cdot N_{i}=N_{i}^{a}\), \(N_{l}=ReLU(A_{l}\cdot X+B_{l})\), \(N_{l}^{a}=ReLU((A_{l}+\mathcal{E})\cdot X+B_{l})\) and \(\|\mathcal{E}\|_{F}\leq\delta\). 1_
Footnote 1: \(\|\cdot\|_{F}\) denotes the Frobenius norm of a matrix
We next compute the upper bound on \(\delta\) for which, if the property can be proved/disproved using the specification tree \(T\) on \(N\), then the same property can be proved/disproved on \(N^{a}\) using the same \(T\). Therefore, once we have the proof tree \(T\) that verifies the property on \(N\), we can reuse \(T\) for verifying any perturbed network \(N^{a}\in\mathcal{M}(N,\delta)\). Assuming the property \((\phi,\psi)\) and the analyzer \(A\) are the same for any perturbed network \(N^{a}\in\mathcal{M}(N,\delta)\), the upper bound on \(\delta\) depends only on \(N\) and \(T\).
We next introduce some useful notation that helps us explicitly compute the upper bound on \(\delta\). Given \(T\), let \(\mathcal{F}(N_{i},T)\) be the over-approximated region computed by the analyzer \(A\) that contains all feasible outputs of the \(i\)-th layer \(N_{i}\) of the original network. Note that \(\mathcal{F}(N_{i},T)\) depends on \(\phi\) and the analyzer \(A\), but we omit them to simplify the notation. Let \(V_{\mathcal{T}}(N,T)\) denote whether the property \((\phi,\psi)\) can be verified on network \(N\) with \(T\).
\[LB(\mathcal{F}(N_{l},T))=\min_{Y\in\mathcal{F}(N_{l},T)}C^{T}Y \tag{9}\]
\[V_{\mathcal{T}}(N,T)=(LB(\mathcal{F}(N_{l},T))\geq 0) \tag{10}\]
\[\eta(N,T)=\max_{Y\in\mathcal{F}(N_{l-1},T)}\|Y\|_{2} \tag{11}\]
Theorem 3.1: _If \(\delta\leq\frac{|LB(\mathcal{F}(N_{l},T))|}{\|C\|\cdot\eta(N,T)}\), then for any perturbed network \(N^{a}\in\mathcal{M}(N,\delta)\), \(V_{\mathcal{T}}(N,T)\iff V_{\mathcal{T}}(N^{a},T)\)._
The proof of the theorem is in Appendix 9.3.
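The bound yields a cheap sufficient check for when reuse alone suffices; the sketch below (the function name and input layout are ours) tests the condition of Theorem 3.1 given the precomputed lower bound, \(\eta(N,T)\), and the perturbation matrix \(\mathcal{E}\).

```python
import numpy as np

def reuse_is_sufficient(C, lb_last, eta, E):
    """True if ||E||_F <= |LB(F(N_l, T))| / (||C|| * eta(N, T)), i.e. the
    specification tree T can be reused verbatim for N^a (Theorem 3.1)."""
    delta_max = abs(lb_last) / (np.linalg.norm(C) * eta)
    return np.linalg.norm(E, ord="fro") <= delta_max

# Toy numbers: property coefficients C, LB = -0.8, eta = 12.5.
C = np.array([1.0, -1.0, 0.0])
E = 0.001 * np.ones((3, 5))   # last-layer weight perturbation
print(reuse_is_sufficient(C, lb_last=-0.8, eta=12.5, E=E))
```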
## 5. Methodology
**Networks and Properties.** We evaluate IVAN on models with various architectures that are trained with different training methods. Similar to most of the previous literature, we verify \(L_{\infty}\)-based local robustness properties for MNIST and CIFAR10 networks and choose standard \(\epsilon\) values used for evaluating complete verifiers. For the verification of global properties in Section 6.4 we use the standard set of ACAS-XU properties that are part of the VNN-COMP benchmarks (Bak et al., 2021). Table 1 presents the evaluated models and the choice of \(\epsilon\) for the local robustness properties.
**Network Perturbation.** Similar to previous works (Paulsen et al., 2020; Ugare et al., 2022), we use quantization to generate perturbed networks. Specifically, we use int8 and int16 post-training quantization. The quantization scheme has the form (TFLite, 2017): \(r=s(q-zp)\). Here, \(q\) is the quantized value and \(r\) is the real value; the scale \(s\) and the zero point \(zp\) are the parameters of the quantization. Our experiments use symmetric quantization with \(zp=0\).
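The sketch below (ours) illustrates a per-tensor symmetric post-training quantization of this form with \(zp=0\); the rounding and clipping details of the actual tooling may differ.

```python
import numpy as np

def symmetric_quantize(weights, num_bits=8):
    """Quantize-dequantize r = s * q with zero point zp = 0, returning the
    perturbed real-valued weights used for incremental verification."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8, 32767 for int16
    s = np.max(np.abs(weights)) / qmax        # per-tensor scale
    q = np.clip(np.round(weights / s), -qmax, qmax)
    return (s * q).astype(weights.dtype)

W = np.random.randn(256, 256).astype(np.float32)
print(np.max(np.abs(W - symmetric_quantize(W, num_bits=8))))  # perturbation size
```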
**Baseline.** We use the following baseline BaB verifiers:
* For proving the local robustness properties, we use LP-based triangle relaxation for bounding (Bunel et al., 2020; Ehlers, 2017), and we use the estimation based on coefficients of the zonotopes for choosing the ReLU splitting (Henriksen and Lomuscio, 2021).
* For the verification of ACAS-XU global properties, we use RefineZono (Singh et al., 2019). RefineZono uses DeepZ (Singh et al., 2018) analyzer with input splitting. This baseline is used only for experiments in Section 6.4.
**Experimental Setup.** We use 64 cores of an AMD Ryzen Threadripper CPU with 128 GB of main memory, running the Linux operating system. The code for our tool is written in Python. We use the GUROBI (Gurobi Optimization, LLC, 2018) solver for our LP-based analyzer.
| Model | Architecture | Dataset | #Neurons | Training Method | \(\epsilon\) |
| --- | --- | --- | --- | --- | --- |
| ACAS-XU Networks | \(6\times 50\) linear layers | ACAS-XU | 300 | Standard (Julian et al., 2019) | - |
| FCN-MNIST | \(2\times 256\) linear layers | MNIST | 512 | Standard | 0.02 |
| CONV-MNIST | 2 Conv, 2 linear layers | MNIST | 9508 | Certified Robust (Balunovic and Vechev, 2020) | 0.1 |
| CONV-CIFAR | 2 Conv, 2 linear layers | CIFAR10 | 4852 | Empirical Robust (Dong et al., 2018) | \(\frac{2}{255}\) |
| CONV-CIFAR-WIDE | 2 Conv, 2 linear layers | CIFAR10 | 6244 | Certified Robust (Wong and Kolter, 2018) | \(\frac{2}{255}\) |
| CONV-CIFAR-DEEP | 4 Conv, 2 linear layers | CIFAR10 | 6756 | Certified Robust (Wong and Kolter, 2018) | \(\frac{2}{255}\) |

Table 1. Models and the perturbation \(\epsilon\) used for the evaluation of incremental verification.
**Hyperparameters.** We use Optuna tuner (Akiba et al., 2019) for tuning the hyperparameters. We present more details and sensitivity analysis of the hyperparameters in Section 6.3.
## 6. Experimental Evaluation
We evaluate the effectiveness of IVAN in verifying the local robustness properties of the quantized networks. We then analyze how various tool components contribute to the overall result. We further show the sensitivity of speedup obtained by IVAN to the hyperparameters. We also stress-test IVAN on large random perturbation to the network. Finally, we evaluate the effectiveness of IVAN on global property verification with input splitting.
### Effectiveness of IVAN
Figure 6 presents the speedup obtained by IVAN on FCN-MNIST. The x-axis displays the time taken by the baseline verifier for the verification, in seconds. The y-axis denotes the speedup obtained by IVAN over the baseline on a specific verification instance. Each cross in the plot shows results for a specific verification property. The vertical line denotes the timeout for the experiment, and the dashed line separates instances that have a speedup greater than 1x.
We observe that IVAN gets higher speedup on hard instances that take more time for verification with the baseline. IVAN has a small overhead for storing the specification tree compared to the baseline. For hard specifications that result in large specification trees, this overhead is insignificant compared to the improvement in verification time. Our techniques that reuse and refine the tree focus on speeding up such hard specifications. However, for specifications that are easy to prove with small specification trees, we see a slight slowdown in verification time. Since these easy specifications are verified quickly by both IVAN and the baseline, they contribute little to the overall verification time across all specifications. For instance, the box labeled \(c_{2}\) in Figure 6(a) contains all of the 83 cases with low (\(<1.2x\)) speedup on the int16 quantized network. Despite the low speedup, all of them together take 16.27s to verify with IVAN. In contrast, the case labeled \(c_{1}\) alone takes 75.54s with the baseline and 1.73s with IVAN, leading to a 43x speedup - caused by reducing the BaB tree size from 345 nodes to 28 nodes after pruning, out of which only 14 leaf nodes are active and lead to analyzer calls.
We observe a similar pattern for the int8 quantized network in Figure 6(b). It shows that the cases confined in box \(c_{3}\), despite having lower speedup, take relatively little time, while the cases included in box \(c_{4}\) have a much higher impact on the overall verification time. Box \(c_{3}\) includes the majority of the 83 low-speedup cases, which take a total of 18.44s for verification with IVAN. The 5 cases in box \(c_{4}\) with higher speedups take 401 analyzer calls with the baseline and 118 analyzer calls with IVAN. Accordingly, solving them takes 130.6s with the baseline and 40.26s with IVAN, leading to a 3.3x speedup.
Figure 6. IVAN speedup for the verification of local robustness properties on FCN-MNIST.
Figure 7 presents speedups for several other networks. IVAN is notably more effective on hard-to-verify specifications that take more than 10s to verify using the baseline, achieving a 3.1x geomean speedup on such cases. In many cases, we see more cases solved by IVAN than by the baseline. For instance, the box \(c_{5}\) in Figure 7(a) contains 2 cases that the baseline does not solve within the timeout of 100s, but IVAN solves them in 90.6s and 95.8s. We show speedup vs. time plots for the other networks (CONV-CIFAR, CONV-CIFAR-DEEP) and more statistics of our evaluation in Appendix 9.1.
### Overall Speedup
We observe no cases where the baseline verifies the property and IVAN exceeds the timeout. We cannot compute the speedup for the cases where the baseline exceeds the timeout. Therefore, we compute the overall speedup over the set \(S\) of all cases that are solved by the baseline within the time limit. If \(t_{B}(c)\) and \(t_{IVAN}(c)\) denote the time taken by the baseline and IVAN on case \(c\), respectively, then we compute the overall speedup as \(Sp=\frac{\sum_{c\in S}t_{B}(c)}{\sum_{c\in S}t_{IVAN}(c)}\).
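This is a one-line computation; the inputs below are toy values, not reported results.

```python
def overall_speedup(t_baseline, t_ivan):
    """Sp = sum of baseline times / sum of IVAN times over the cases in S."""
    return sum(t_baseline) / sum(t_ivan)

print(overall_speedup([75.5, 16.3, 130.6], [1.7, 16.3, 40.3]))  # ~3.8x
```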
Table 2 presents the contribution of each technique used in IVAN for each model. Column _+Solved_ displays the number of extra verification problems solved by the technique in comparison to the baseline. Columns under IVAN[Reuse] present results when using only the reuse technique; columns under IVAN[Reorder] show results when using only the reorder technique; columns under IVAN present the results when using all techniques from Section 4. Column \(Sp\) for each technique shows the overall speedup obtained compared to the baseline. We observe that in most cases the combination of all techniques performs better than reuse or reorder alone. We see that reorder performs better than reuse except in one case (FCN-MNIST on int8).
Figure 7. IVAN speedup for the verification of local robustness properties.
### Hyperparameter Sensitivity Analysis
Figure 8 plots the heatmap of IVAN speedup for various hyperparameter values. The x-axis shows the hyperparameter \(\alpha\) and the y-axis shows \(\theta\). Each point in the grid is annotated with the observed \(Sp\) when setting the corresponding hyperparameter values. Figure 8(a) presents the plot for IVAN with only the reorder technique; \((\alpha,\theta)=(0.25,0.01)\) is the highest-speedup point. Choosing \(\theta=0\) implies that we are not deprioritizing the splitting decisions that did not work well; in that case, we observe no speedup with reordering, showing the necessity of \(\theta\) in our \(H_{\Delta}\) formulation. Figure 8(b) presents the same plot for our main algorithm, which also reuses the pruned tree. We observe that the speedup is less sensitive to hyperparameter changes in this plot. This is expected since reordering starts from a single-node \(T_{0}^{N^{a}}\) and relies purely on the \(H_{\Delta}\) formulation for the speedup, while our main technique also reuses the tree; even when \(\theta=0\), it can get \(\sim\)2.5x speedup.
### Global Properties with Input Splitting
We show that IVAN is effective in speeding up the state-of-the-art verifier RefineZono (Singh et al., 2019) when verifying global properties. This baseline employs input splitting based on a strong branching strategy. Figure 9 presents the speedup achieved by IVAN over this baseline. Overall, IVAN achieves a 9.5x speedup in the int16 quantization case and a 3.1x speedup in the int8 quantization case. Previous work has observed that ACAS-XU properties take a large number of splits with most analyzers. For the int16 case, the average value of \(|T_{f}^{N^{a}}|\) with our baseline is 285.4.
| Model | Approx. | IVAN[Reuse] \(Sp\) | IVAN[Reuse] \(+Solved\) | IVAN[Reorder] \(Sp\) | IVAN[Reorder] \(+Solved\) | IVAN \(Sp\) | IVAN \(+Solved\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FCN-MNIST | int16 | 2.51x | 0 | 2.71x | 0 | **4.43x** | 0 |
| FCN-MNIST | int8 | 1.07x | 0 | 1.64x | 0 | **2.02x** | 0 |
| CONV-MNIST | int16 | 1.62x | 0 | 2.15x | 0 | **3.09x** | 2 |
| CONV-MNIST | int8 | 1.27x | 2 | 1.34x | 3 | **1.71x** | 4 |
| CONV-CIFAR | int16 | 1.02x | 0 | 1.57x | 2 | **2.52x** | 2 |
| CONV-CIFAR | int8 | 1.08x | 0 | 1.53x | 0 | **1.78x** | 0 |
| CONV-CIFAR-WIDE | int16 | 1.43x | 1 | 1.51x | 0 | **1.87x** | 2 |
| CONV-CIFAR-WIDE | int8 | 0.75x | 0 | **1.62x** | 1 | 1.53x | **2** |
| CONV-CIFAR-DEEP | int16 | 1.64x | 0 | 2.29x | 0 | **3.21x** | 0 |
| CONV-CIFAR-DEEP | int8 | 1.15x | 0 | 1.13x | 1 | **1.25x** | 1 |

Table 2. Ablation study of the overall speedup across all properties for the different techniques in IVAN.
Figure 8. Speedup for the combination of hyperparameter values on FCN-MNIST with int16 quantization.
The baseline takes a total of 305s to verify the cases that have large trees (\(|T_{f}^{N^{a}}|>5\)). IVAN verifies those properties in 32s.
### Random Weight Perturbations
In this experiment, we stress-test IVAN for incremental verification by applying uniform random perturbation on the DNN weights. Here, we perturbed each weight in the network by 2%, 5%, and 10%. Even the smallest of these perturbations (2%) to each of the weights already induces larger overall changes in the network than those caused by practical methods such as quantization, pruning, and fine-tuning that often non-uniformly affect specific layers of the network. For each network and perturbation, we run IVAN and the baseline to verify 100 properties and compute the average speedup of IVAN over the baseline.
Table 3 presents the average speedups obtained by IVAN. Each row shows the IVAN speedup under various weight perturbations for a particular network. We see that in most cases the IVAN speedup reduces as the perturbations to the weights increase. This is because the specification tree for the perturbed network is no longer similar to the one for the original network. If IVAN is used in such cases, it uses suboptimal splits, leading to higher verification time.
## 7. Related Work
**Neural Network Verification:** Recent works introduced several techniques for verifying properties of neural networks (Anderson et al., 2019, 2020; Bunel et al., 2020; Ehlers, 2017; Kabaha and Drachsler-Cohen, 2022; Katz et al., 2017; Laurel et al., 2022; Tjeng et al., 2017; Wang et al., 2018, 2021; Yang et al., 2022). For BaB-based complete verification, previous works used distinct strategies for ReLU splitting. Ehlers (2017) and Katz et al. (2017) used random ReLU selection for splitting. Wang et al. (2018) compute scores based on gradient information to rank ambiguous ReLU nodes. Similarly, Bunel et al. (2020) compute scores using a formula derived from the estimation equations in (Wong and Kolter, 2018). Henriksen and Lomuscio (2021) use coefficients of zonotopes for these scores.
**Incremental Neural Network Verification:** Fischer et al. (2022) presented the concept of sharing certificates between specifications. They reuse the proof for an \(L_{\infty}\) specification, computed with abstract interpretation-based analyzers and based on the notion of proof templates, for faster verification of patch and geometric perturbations. Ugare et al. (2022) showed that such reuse of proofs is possible
| Model | 2% | 5% | 10% |
| --- | --- | --- | --- |
| FCN-MNIST | 1.65x | 1.57x | 0.87x |
| CONV-MNIST | 1.97x | 0.57x | 0.57x |
| CONV-CIFAR | 1.29x | 1.09x | 0.69x |
| CONV-CIFAR-WIDE | 1.42x | 1.08x | 0.96x |
| CONV-CIFAR-DEEP | 1.32x | 1.06x | 1.05x |

Table 3. IVAN speedup under uniform random weight perturbations of 2%, 5%, and 10%.
Figure 9. IVAN speedup for the verification of global ACAS-XU properties.
between networks. It uses a similar concept of network-adaptable proof templates. It is limited to certain properties (patch, geometric, \(L_{0}\)) and works with abstract interpretation-based incomplete verifiers. Wei and Liu (2021) consider incremental incomplete verification of relatively small DNNs with last-layer perturbation. None of these works can handle incremental and complete verification of diverse specifications, which is the focus of our work.
**Differential Neural Network Verification:** ReluDiff (Paulsen et al., 2020) presented the concept of differential neural network verification, and the follow-up work of Paulsen et al. (2020) made it more scalable. ReluDiff can be used for bounding the difference in the output of an original network and a perturbed network corresponding to an input region. ReluDiff uses input splitting to perform complete differential verification. Our method is complementary to ReluDiff and can be used to speed up complete differential verification with multiple perturbed networks, performing it incrementally. Cheng and Yan (2020) reuse previous interval analysis results for the verification of fully-connected networks where the specifications are only defined over the last linear layer of an updated network. In contrast, IVAN performs end-to-end verification and operates on a more general class of networks, specifications, and perturbations.
**Warm Starting Mixed Integer Linear Programming (MILP) Solvers:** State-of-the-art MILP solvers such as GUROBI (Gurobi Optimization, LLC, 2018) and CPLEX (Cplex, 2009) support warm starting, which can accelerate optimization performance. MILP solvers can warm start from initial solutions that are close to the optimal solution. This allows them to avoid exploring paths that do not improve on the provided initial solution and can help the solver converge faster. The exact implementation details of these closed-source commercial solvers are unavailable. Regardless, our experiments with MILP warm starting of GUROBI for incremental DNN verification showed insignificant speedup.
**Incremental Program Verification:** Incremental verification has improved the scalability of traditional program verification to an industrial scale (Johnson et al., 2013; Lakhnech et al., 2001; O'Hearn, 2018; Stein et al., 2021). Incremental program analysis tasks reuse partial results (Yang et al., 2009), constraints (Visser et al., 2012), and precision information (Beyer et al., 2013) from previous runs for faster analysis of individual commits. Frequently, the changes made to a program are limited to a small portion of the overall program (and their analysis requires significant attention to the impact on control flow), whereas DNN updates typically alter the weights of multiple layers throughout the network (but with no impact on control flow). Therefore, incremental DNN verification presents a distinct challenge compared to the incremental verification of programs.
**Incremental SMT Solvers:** Modern SMT solvers such as Z3 (De Moura and Bjorner, 2008) and CVC5 (Barbosa et al., 2022) learn lemmas during constraint solving, which are later reused to solve similar problems. The incrementality of these solvers is restricted to the addition or deletion of constraints. They do not consider reuse in cases where the constraints are perturbed, as in our case.
## 8. Conclusion
Current complete approaches for DNN verification re-run the verification every time the network is modified. In this paper, we presented IVAN, the first general, incremental, and complete DNN verifier. IVAN captures the trace of BaB-based complete verification through the specification tree. We evaluated IVAN on combinations of networks, properties, and updates. IVAN achieves up to 43x speedup and a geometric mean speedup of 2.4x in verifying DNN properties.
###### Acknowledgements.
We thank the anonymous reviewers for their comments. This research was supported in part by NSF Grants No. CCF-1846354, CCF-1956374, CCF-2008883, CCF-2217144, CCF-2238079, CNS-2148583, USDA NIFA Grant No. NIFA-2024827 and Qualcomm innovation fellowship. |
2306.08783 | HOSSnet: an Efficient Physics-Guided Neural Network for Simulating Crack
Propagation | Hybrid Optimization Software Suite (HOSS), which is a combined
finite-discrete element method (FDEM), is one of the advanced approaches to
simulating high-fidelity fracture and fragmentation processes but the
application of pure HOSS simulation is computationally expensive. At the same
time, machine learning methods, shown tremendous success in several scientific
problems, are increasingly being considered promising alternatives to
physics-based models in the scientific domains. Thus, our goal in this work is
to build a new data-driven methodology to reconstruct the crack fracture
accurately in the spatial and temporal fields. We leverage physical constraints
to regularize the fracture propagation in the long-term reconstruction. In
addition, we introduce perceptual loss and several extra pure machine learning
optimization approaches to improve the reconstruction performance of fracture
data further. We demonstrate the effectiveness of our proposed method through
both extrapolation and interpolation experiments. The results confirm that our
proposed method can reconstruct high-fidelity fracture data over space and time
in terms of pixel-wise reconstruction error and structural similarity. Visual
comparisons also show promising results in long-term | Shengyu Chen, Shihang Feng, Yao Huang, Zhou Lei, Xiaowei Jia, Youzuo Lin, Estaben Rougier | 2023-06-14T23:39:37Z | http://arxiv.org/abs/2306.08783v1 | # HOSSnet: an Efficient Physics-Guided Neural Network for Simulating Crack Propagation
###### Abstract
Hybrid Optimization Software Suite (HOSS), which is a combined finite-discrete element method (FDEM), is one of the advanced approaches to simulating high-fidelity fracture and fragmentation processes but the application of pure HOSS simulation is computationally expensive. At the same time, machine learning methods, shown tremendous success in several scientific problems, are increasingly being considered promising alternatives to physics-based models in the scientific domains. Thus, our goal in this work is to build a new data-driven methodology to reconstruct the crack fracture accurately in the spatial and temporal fields. We leverage physical constraints to regularize the fracture propagation in the long-term reconstruction. In addition, we introduce perceptual loss and several extra pure machine learning optimization approaches to improve the reconstruction performance of fracture data further. We demonstrate the effectiveness of our proposed method through both extrapolation and interpolation experiments. The results confirm that our proposed method can reconstruct high-fidelity fracture data over space and time in terms of pixel-wise reconstruction error and structural similarity. Visual comparisons also show promising results in long-term reconstruction.
## 1 Introduction
Knowledge of brittle materials failure is significant since they have been widely applied in a variety of areas, including structural engineering, geotechnical engineering, mechanical engineering, geothermal engineering, and the oil & gas industry. The mechanism of brittle material failure is strongly affected by the microstructure of the material, especially the pre-existing micro cracks [1; 2]. The dynamic propagation and interaction behavior of micro-cracks is
essential to estimate the failure of brittle material [3; 4]. The growth and coalescence of micro-cracks will result in a complex stress state near the crack tips, leading to the catastrophic failure of brittle material in macro-scale [5; 6].
Many approaches have been proposed in the literature to analyze the crack initiation and propagation behavior in brittle materials, including theoretical constitutive models and numerical methods [7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. The finite-discrete element method (FDEM) is one of the advanced approaches [17]. Based on the FDEM, the Hybrid Optimization Software Suite (HOSS) [18] was developed as a fracture simulator. This high-fidelity model simulates fracture and fragmentation processes or material deformation in both 2D and 3D complex systems, providing accurate predictions of fracture growth and material failure. However, the application of pure HOSS simulation has computational limitations since it resolves all the individual cracks with highly resolved meshes and small time steps at large scales. Hence, a new approach that can provide key quantities of interest more efficiently while retaining reasonable accuracy is in demand.
Machine learning (ML) models are gaining significant attention as a promising alternative to physical simulations in recent years due to their remarkable success and high efficiency in various applications, such as image segmentation [19; 20; 21] and image reconstruction [22; 23; 24]. Encoder-decoder networks [25] have been employed to simulate the intricate two-dimensional subsurface fluid dynamics occurring in porous media [26]. Fourier Neural Operator [27], which combines Fourier analysis and neural networks, has been utilized to solve complex physical problems, such as wave propagation [28] and multiphase flow dynamics [29]. Temporal models, such as the long short-term memory (LSTM) model, have been extensively utilized to capture long-term dependencies in temporal propagation. For instance, in the hydrology domain, the LSTM model has been broadly used to model temporal patterns of water dynamics [30; 31].
To speed up the simulation of HOSS, ML approaches have been utilized to emulate the high-fidelity model efficiently. The micro-cracks are represented by a feature vector, and then a variety of ML methods, such as decision trees, random forests, and artificial neural networks, are utilized to predict the time and the location of fracture coalescence [32; 33; 34]. But the fracture damage, which is the key input needed for upscaling micro-scale information to macro-scale models, is not estimated. A graph convolutional network (GCN), coupled with a recurrent neural network (RNN), was used to model the evolution of those features from the reduced graph representation of large fracture networks [35], where the sum of the fracture damage in each finite element mesh is represented as the total damage. However, these works only consider using fracture features to predict fracture propagation.
In this paper, we develop a new data-driven method, termed Hybrid Optimization Software Suite Network (HOSSnet), to improve the fidelity of micro-crack fracture reconstruction from Cauchy stress and fracture damage in the spatial and temporal fields. We also leverage the underlying physical constraints to further regularize the model learning of generalizable spatial and temporal patterns in the reconstruction process. In particular, our proposed method consists of two components: a high-fidelity reconstruction unit (HRU) and physics-guided regularization. HRU is designed based on a Unet-based [36] encoder-decoder structure to improve the reconstruction performance in the spatial field. HRU also uses an LSTM layer to capture long-term temporal dependencies. In addition, the physics-guided regularization method adjusts the reconstructed data over time by enforcing consistency with known physical constraints such as optical flow [37] and positive direction.
Our evaluations of the HOSS dataset [18] have shown the promising performance of the proposed HOSSnet over space and time in both interpolation and extrapolation experiments. We also demonstrate the effectiveness of each component of our design by showing the improvement both qualitatively and quantitatively.
## 2 Related Work
### Machine Learning Approaches in Scientific Domain
Machine learning (ML) models, given their tremendous success in several commercial applications (e.g., computer vision, natural language processing, etc.), are increasingly being considered promising alternatives to physics-based models in the scientific domains. For example, in the hydrology domain, researchers have used graph neural networks (GNN) in modeling spatial dependencies [38; 39] of river networks. In streamflow problems, Moshe et al. [38] proposed the HydroNets model, which uses ML models to integrate the information from river segments and their upstream segments to improve the streamflow predictions. Chen et al. [39] proposed a Heterogeneous Stream-reservoir Graph Network (HRGN) model to represent underlying stream-reservoir networks and improve streamflow temperature prediction in all river segments within a river network.
Convolutional neural network (CNN) based and generative adversarial network (GAN) based models have been widely used in high-resolution turbulent flow simulation. Fukami et al. [40] propose an improved CNN-based hybrid downsampled skip-connection/multi-scale (DSC/MS) model that extracts patterns from multiple scales. This method has been shown to produce good performance in reconstructing turbulent velocity and vorticity fields from extremely low-resolution input data. Similarly, Liu et al. [41] propose another CNN-based multiple temporal paths convolutional neural network (MTPC) to handle spatial and temporal information in turbulent flow simultaneously and fully capture features in different time ranges. Deng et al. [42] demonstrate that both super-resolution generative adversarial networks (SRGAN) and enhanced super-resolution generative adversarial networks (ESRGAN) [43] can produce a reasonable reconstruction of high-resolution turbulent flow in their datasets.
In the micro-crack problem, Perera [44] proposed a graph neural network to represent the relationships among different cracks and, for the first time, simulated crack propagation for a wide range of initial micro-crack configurations. This method only considers crack propagation from fracture features to fracture features and cannot be applied to model mappings from Cauchy features to fracture features.
### Physics-based Loss Function
There are still several challenges faced by existing machine learning methods. Standard machine learning models can fail to capture complex relationships amongst physical variables. Results are even worse when only limited observation data are available. This is one reason for their failure to generalize to scenarios not encountered in training data. Hence, researchers have begun to incorporate physical knowledge into loss functions to help machine learning models capture generalizable dynamic patterns consistent with established physical relationships. In recent studies [45; 46], the use of physics-based loss functions has already shown promising results in a variety of scientific disciplines. For example, Karpatne et al. [47] propose an additional physics-based penalty based on a known monotonic physical relationship to guarantee that the density of water at a lower depth is always greater than the density of water at any depth above. Kahana et al. [48] apply an additional loss function to ensure physical consistency in the time evolution of waves, improving the prediction results and making the model more robust.
## 3 Problem Definition
The objective of our proposed work is to reconstruct the missing fracture image \(\mathbf{Y}\) for each micro-crack \(m\in\{1,...,M\}\) at each time step \(t\in\{1,...,T\}\), given input physical variables that drive the dynamics of the physical system. In detail, we use \(\mathbf{X}_{c}=\{\mathbf{x}_{m,c}^{t}\}\) to represent the dynamic Cauchy input features for each micro-crack \(m\) at a specific time step \(t\); \(\mathbf{X}_{c}\) includes three Cauchy features of all micro-cracks: Cauchy1 \(C_{x}\), Cauchy2 \(C_{y}\), and Cauchy12 \(C_{xy}\). In the other case, we regard \(\mathbf{X}_{f}=\{\mathbf{x}_{m,f}^{t}\}\) as the dynamic fracture input feature for each micro-crack at each time step. We then use these two different groups of input features, \(\mathbf{X}_{c}\) and \(\mathbf{X}_{f}\), to reconstruct the missing target variable (i.e., fracture) \(\mathbf{Y}=\{y_{m}^{t}\}\) for a certain micro-crack at a certain time step, respectively. When \(\mathbf{X}_{c}\) is used as the input, we denote this scenario as Cauchy \(\rightarrow\) Fracture; conversely, the scenario is Fracture \(\rightarrow\) Fracture if \(\mathbf{X}_{f}\) is the input.
## 4 Methods
The purpose of our proposed HOSSnet model is to achieve an end-to-end reconstruction mechanism from the input feature \(\mathbf{X}\) (Cauchy feature \(\mathbf{X}_{c}\) or fracture feature \(\mathbf{X}_{f}\)) to the missing target variable (i.e., fracture) \(\mathbf{Y}\) over time and over samples. Figure 1 shows the overall structure of the proposed HOSSnet method. Its components are described below, in order:
### HOSSnet Architecture
This proposed HOSSnet aims to learn the model mapping from \(\mathbf{X}\) to \(\mathbf{Y}\), containing two main components: HRU and physics-guided regularization.
HRU is designed based on the benchmark Unet-based [36] encoder-decoder structure. It contains a contracting path (encoder), a recurrent transition layer (RTL), and an expansive path (decoder). The contracting path consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) [49] and a 2x2 max pooling operation with stride 2 for downsampling. Additionally, we introduce multiple residual blocks between every two convolutions; each residual block contains convolutional layers [50], batch normalization layers [51], and parametric ReLUs, following previous literature [52]. We feed the input feature \(\mathbf{X}\) into this encoder and encode \(\mathbf{X}\) into a series of hidden representations \(\mathbf{H}=\{h_{m}^{t}\}\) for each micro-crack at each time step. After extracting the hidden representations \(\mathbf{H}\), we feed them into the recurrent transition layer (RTL), which is built on the LSTM structure [53], and obtain the new hidden representations \(\tilde{\mathbf{H}}=\{\tilde{h}_{m}^{t}\}\). Lastly, we pass the obtained \(\tilde{\mathbf{H}}\) to the expansive path (decoder) to output the reconstructed fracture data \(\mathbf{Y}\). In particular, the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution ("up-convolution") that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU activation. The overall framework details are shown in Table 1.
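To make the architecture concrete, below is a minimal, illustrative Keras sketch of an HRU-style model. It is a simplification under our own assumptions, not the authors' implementation: a `ConvLSTM2D` stands in for the LSTM-based recurrent transition layer, the layer counts and shapes are placeholders rather than the exact configuration of Table 1, and the final fully-connected layer is replaced by a 1-channel convolution.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

def residual_block(filters=64):
    """Conv-BN-PReLU residual block (our reading of the block in the text)."""
    x_in = Input((None, None, filters))
    y = layers.Conv2D(filters, 3, padding="same")(x_in)
    y = layers.BatchNormalization()(y)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    return Model(x_in, layers.Add()([x_in, y]))

def build_hru(time_steps=5, h=64, w=64, in_ch=3):
    inp = Input((time_steps, h, w, in_ch))
    x = layers.TimeDistributed(layers.Conv2D(64, 3, padding="same"))(inp)
    skip = x                                   # encoder feature for the skip
    for _ in range(3):
        x = layers.TimeDistributed(residual_block())(x)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    # Recurrent transition layer over the encoded hidden representations.
    x = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=True)(x)
    x = layers.TimeDistributed(layers.UpSampling2D(2))(x)
    x = layers.Add()([x, skip])                # "Add" step from Table 1
    for _ in range(3):
        x = layers.TimeDistributed(residual_block())(x)
    out = layers.TimeDistributed(
        layers.Conv2D(1, 3, padding="same", activation="sigmoid"))(x)
    return Model(inp, out)

model = build_hru()
model.compile(optimizer=tf.keras.optimizers.Adam(5e-4), loss="mse")
```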
The HOSSnet model outputs reconstructed fracture data \(\mathbf{Y}\); the model is optimized to reduce the difference between the obtained \(\mathbf{Y}=\{y_{m}^{t}\}\) and the provided ground truth fracture data \(\hat{\mathbf{Y}}=\{\hat{y}_{m}^{t}\}\) for a certain micro-crack \(m\) at a certain time step \(t\). Such a difference is represented as a loss \(\mathcal{L}_{\text{MSE}}(\mathbf{Y},\hat{\mathbf{Y}})\), which can be implemented as the mean squared error (MSE) loss that measures the difference between the two sets of data. The loss function can be represented as
\[\mathcal{L}_{\text{MSE}}(\mathbf{Y},\hat{\mathbf{Y}})=\frac{1}{|\mathbf{Y}|}\sum_{\{(m,t):\,y_{m}^{t}\in\mathbf{Y}\}}(y_{m}^{t}-\hat{y}_{m}^{t})^{2} \tag{1}\]
### Physics-Guided Regularization
We further regularize our proposed model by leveraging physical knowledge, namely optical flow [54] and positive direction, to optimize the reconstruction performance through the approaches described below.
#### 4.2.1 Optical Flow
Optical flow has been used as a method to describe the movement of an object between consecutive frames of images [54], and we adopt it to represent the development of the fracture \(\mathbf{Y}\). At each pixel location, the optical flow vector represents the motion of the pixel from one frame to another. Specifically, we consider each pixel in the fracture image \(\mathbf{Y}(x,y,t)\) moving to \(\mathbf{Y}(x+\Delta x,y+\Delta y,t+\Delta t)\) after the time interval \(\Delta t\). If the movement is small, \(\mathbf{Y}(x+\Delta x,y+\Delta y,t+\Delta t)\) can be expanded using a Taylor series:
\[\mathbf{Y}(x+\Delta x,y+\Delta y,t+\Delta t)=\mathbf{Y}(x,y,t)+\mathbf{Y}_{x} \Delta x+\mathbf{Y}_{y}\Delta y+\mathbf{Y}_{t}\Delta t+\ldots, \tag{2}\]
Figure 1: The architecture of the proposed HOSSnet model and different components in the loss function.
where \(\mathbf{Y}_{x}\), \(\mathbf{Y}_{y}\), and \(\mathbf{Y}_{t}\) are the partial derivatives with respect to \(x\), \(y\), and \(t\). If we consider the target pixel to be the same before and after the movement, we have \(\mathbf{Y}(x+\Delta x,y+\Delta y,t+\Delta t)=\mathbf{Y}(x,y,t)\). Combining this with Eq. (2) and ignoring the higher-order terms (i.e., "..."), we have
\[\mathbf{Y}_{x}u+\mathbf{Y}_{y}v+\mathbf{Y}_{t}=0, \tag{3}\]
where \(u\) and \(v\) are the \(x\) and \(y\) components of the motion vector \(\mathbf{m}=(u,v)\), which is defined as
\[u=\frac{\Delta x}{\Delta t},v=\frac{\Delta y}{\Delta t}. \tag{4}\]
The motion vector \(\mathbf{m}=(u,v)\) can be obtained by minimizing the objective function with a smooth regularization,
\[\zeta^{\text{op}}=\iint\left(\left(\mathbf{Y}_{x}u+\mathbf{Y}_{y}v+\mathbf{Y }_{t}\right)^{2}+\lambda^{2}M^{2}\right)dxdy, \tag{5}\]
where \(\lambda\) is the weight for the smoothing regularization, and \(M\) is the magnitude of the flow gradient given by
\[M(u,v)=\sqrt{\|\nabla u\|^{2}+\|\nabla v\|^{2}}. \tag{6}\]
In this paper, we calculate the observed optical flow vector \(\mathbf{m}_{1}=(u_{1},v_{1})\) from the observed fractures \(\hat{\mathbf{Y}}\) and the predicted optical flow vector \(\mathbf{m}_{2}=(u_{2},v_{2})\) from the predicted fractures \(\mathbf{Y}\). Then we use the included angle between these two vectors as the regularization in the optical loss function \(\mathcal{L}_{\text{op}}\), which can be described as
\[\mathcal{L}_{\text{op}}=\sum_{n}\left\|\arccos(r_{n}(u_{1},u_{2},v_{1},v_{2})) \right\|^{2}, \tag{7}\]
where the operator \(\arccos(\cdot)\) is the inverse of the cosine function. The function of \(r_{n}(u_{1},u_{2},v_{1},v_{2})\) is defined as
\[r_{n}(u_{1},u_{2},v_{1},v_{2})=\frac{u_{1}u_{2}+v_{1}v_{2}}{\sqrt{u_{1}^{2}+v _{1}^{2}}\sqrt{u_{2}^{2}+v_{2}^{2}}}, \tag{8}\]
Thus, the overall loss function for the extrapolation network with optical flow regularization can be posed as
\[\mathcal{L}_{\text{MSE}_{\text{op}}}=\mathcal{L}_{\text{MSE}}+\alpha_{op} \mathcal{L}_{\text{op}}, \tag{9}\]
where the terms \(\mathcal{L}_{\text{MSE}}\) and \(\mathcal{L}_{\text{op}}\) are provided in Eqs. (1) and (7), respectively, and \(\alpha_{op}\) is the weight for the regularization term. The regularization term constrains the direction of the fracture development.
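The direction-consistency term follows directly from Equations 7 and 8. Below is a minimal NumPy sketch (ours); the flow fields \((u,v)\) are assumed to have been estimated beforehand, e.g., by minimizing the Horn-Schunck-style objective in Equation 5, and the clipping of \(r\) is a numerical safeguard we add.

```python
import numpy as np

def flow_angle_loss(u1, v1, u2, v2, eps=1e-8):
    """L_op (Equation 7): sum of squared angles between observed (u1, v1)
    and predicted (u2, v2) optical-flow vectors at each pixel."""
    dot = u1 * u2 + v1 * v2
    norms = np.sqrt(u1**2 + v1**2) * np.sqrt(u2**2 + v2**2) + eps
    r = np.clip(dot / norms, -1.0, 1.0)   # Equation 8, clipped for safety
    return np.sum(np.arccos(r) ** 2)

# Toy example: the predicted flow is rotated by 0.1 rad against the
# observed flow, so the loss is 64 pixels * 0.1**2 = 0.64.
u1, v1 = np.ones((8, 8)), np.zeros((8, 8))
u2, v2 = np.cos(0.1) * u1, np.sin(0.1) * np.ones((8, 8))
print(flow_angle_loss(u1, v1, u2, v2))
```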
#### 4.2.2 Positive Direction
Ideally, the fracture development should always be positive as time progresses. Without awareness of such development patterns, the neural network model can still output negative changes. In order to force the fracture to develop in a positive direction, we eliminate the negative change points during the prediction stage and set these points to the same values as at the previous time step.
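Since resetting every decreasing pixel to its previous value, applied step by step, is equivalent to taking a running maximum over time, the correction reduces to one line, as the sketch below (ours) shows.

```python
import numpy as np

def enforce_positive_direction(frames):
    """Force the predicted fracture field to be non-decreasing in time by
    taking the running maximum along the time axis."""
    return np.maximum.accumulate(frames, axis=0)

preds = np.random.rand(10, 64, 64)             # (time, height, width)
monotone = enforce_positive_direction(preds)
assert np.all(np.diff(monotone, axis=0) >= 0)  # no negative changes remain
```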
### Machine Learning Optimization Approaches
#### 4.3.1 Perceptual Loss
In order to further improve the reconstruction performance, we use not only the content loss based on MSE but also a perceptual loss calculated on the feature maps of the VGG network [55], which sums the squared errors between corresponding feature-map entries and takes the mean. This is in contrast to a pixel-wise loss function (e.g., MSE), which is computed directly on the image pixels. This perceptual loss \(\mathcal{L}_{\text{Perceptual}}\) can be represented as:
\[\mathcal{L}_{\text{Perceptual}}(\mathbf{E},\hat{\mathbf{E}})=\frac{1}{|\mathbf{E}|}\sum_{\{(m,t):\,e_{m}^{t}\in\mathbf{E}\}}(e_{m}^{t}-\hat{e}_{m}^{t})^{2} \tag{10}\]
where \(\mathbf{E}\) and \(\hat{\mathbf{E}}\) represent the feature maps extracted from the VGG network for \(\mathbf{Y}\) and \(\hat{\mathbf{Y}}\), respectively. Hence, the reconstruction loss \(\mathcal{L}_{\text{recon}}\) becomes
\[\mathcal{L}_{\text{recon}}(\mathbf{Y},\hat{\mathbf{Y}})=\mathcal{L}_{\text{ MSE}}(\mathbf{Y},\hat{\mathbf{Y}})+\alpha_{Perc}\mathcal{L}_{\text{ Perceptual}}(\mathbf{Y},\hat{\mathbf{Y}}), \tag{11}\]
where \(\alpha_{Perc}\) is the weight of perceptual loss. Overall, after adding all the regularization and physical information, the loss function for the optimization will be
\[\mathcal{L}_{\text{all}}(\mathbf{Y},\hat{\mathbf{Y}})=\mathcal{L}_{\text{MSE}}( \mathbf{Y},\hat{\mathbf{Y}})+\alpha_{Perc}\mathcal{L}_{\text{Perceptual}}( \mathbf{Y},\hat{\mathbf{Y}})+\alpha_{op}\mathcal{L}_{\text{op}}(\mathbf{Y}, \hat{\mathbf{Y}}). \tag{12}\]
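A sketch of how Equations 10-12 could be implemented in TensorFlow follows. The paper does not specify the VGG variant or layer, so `VGG16` with `block3_conv3` features (ImageNet weights, downloaded on first use) is our assumption, as is the tiling of single-channel fracture images to three channels.

```python
import tensorflow as tf

_vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
_features = tf.keras.Model(_vgg.input, _vgg.get_layer("block3_conv3").output)
_features.trainable = False

def perceptual_loss(y_true, y_pred):
    """L_Perceptual (Equation 10): MSE between VGG feature maps of the
    ground-truth and reconstructed fracture images (values in [0, 1])."""
    t = tf.keras.applications.vgg16.preprocess_input(
        tf.image.grayscale_to_rgb(y_true) * 255.0)
    p = tf.keras.applications.vgg16.preprocess_input(
        tf.image.grayscale_to_rgb(y_pred) * 255.0)
    return tf.reduce_mean(tf.square(_features(t) - _features(p)))

def total_loss(alpha_perc=0.1, alpha_op=0.1, op_loss=None):
    """L_all (Equation 12); the optical-flow term is passed in separately
    since it is computed on flow fields rather than raw pixels. The alpha
    weights here are placeholders, not the paper's tuned values."""
    def loss(y_true, y_pred):
        l = tf.reduce_mean(tf.square(y_true - y_pred))
        l += alpha_perc * perceptual_loss(y_true, y_pred)
        if op_loss is not None:
            l += alpha_op * op_loss(y_true, y_pred)
        return l
    return loss
```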
#### 4.3.2 Sub-region
In addition, when we perform the extrapolation and interpolation tasks, we know the location of the fracture from the known data. Instead of training with the whole image, we train the network with the sub-region of the images \(\mathbf{Y}_{sub}\) containing the fracture dynamics. This is because most of the whole image has no change, which makes the HOSSnet model less sensitive to the sub-region containing the fracture dynamics. Hence, after adding this optimization, the loss function becomes:
\[\mathcal{L}_{\text{all}}(\mathbf{Y}_{sub},\hat{\mathbf{Y}}_{sub})=\mathcal{L}_ {\text{MSE}}(\mathbf{Y}_{sub},\hat{\mathbf{Y}}_{sub})+\alpha_{\text{{Perc}}} \mathcal{L}_{\text{Perceptual}}(\mathbf{Y}_{sub},\hat{\mathbf{Y}}_{sub})+ \alpha_{op}\mathcal{L}_{\text{op}}(\mathbf{Y}_{sub},\hat{\mathbf{Y}}_{sub}). \tag{13}\]
## 5 Experiment
This section will first describe the HOSS dataset and the experiment settings. Then we evaluate the performance of our proposed methods for fracture reconstruction.
### Experiment Setting
We consider two experiments: extrapolation and interpolation. The former is designed to verify the ability of the proposed method to reconstruct micro-crack fractures in future states. We conduct experiments over samples, in which the test is performed on samples different from the training set. In addition, experiments over time are performed, wherein predictions are made on the same sample but across time periods different from the training set. The latter is designed to verify the ability of the proposed method to reconstruct micro-crack fractures in an intermediate time period.
#### 5.1.1 Dataset
The high-fidelity HOSS simulator has been applied to simulate the fracture growth and material failure of a concrete sample under uniaxial tensile loading conditions. The generated simulation data are then converted to images by ParaView for the training of HOSSnet. The problem is considered a 2D problem, and the model setup is shown in Figure 2. The domain is a rectangular sample 2m wide and 3m high. The bottom edge of the sample is fixed. The top edge of the sample moves upwards with a constant velocity of 0.3m/s, which results in a strain rate of 0.1s\({}^{-1}\). The material is assumed to be elastically isotropic for all cases, with the elastic material properties: 2500kg/m\({}^{3}\) for density, 22.6 GPa for Young's modulus, and 0.242 for Poisson's ratio.
| Layers | Filter Size | State Size |
| --- | --- | --- |
| Convolutional layer | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Maxpooling | None | None |
| Convolutional layer | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Convolutional layer | (3, 3) | 64 |
| LSTM Layer | None | 64 |
| Convolutional layer | (3, 3) | 64 |
| Upsampling | None | None |
| Add | None | None |
| Residual Block | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Residual Block | (3, 3) | 64 |
| Convolutional layer | (3, 3) | 64 |
| Fully-Connection layer | None | None |

Table 1: The detailed parameters of the overall HOSSnet framework, including the encoder (upper), the recurrent transition layer (middle), and the decoder (lower).
#### 5.1.2 Evaluation Metrics
We evaluate the performance of micro-crack reconstruction using three different metrics: root mean squared error (RMSE), structural similarity index measure (SSIM) [56], and weighted fracture error (WFE). We use RMSE to measure the difference (error) between the reconstructed data and the target fracture data; a lower RMSE indicates better reconstruction performance. SSIM is used to appraise the similarity between the reconstructed data and the target fracture in three aspects: luminance, contrast, and overall structure; a higher SSIM indicates better reconstruction performance. Lastly, we also create a new evaluation metric, the weighted fracture error (WFE), which measures the error while giving a higher weight to the local region with dynamic change of the micro-crack fracture. This evaluation metric can be represented as:
\[\text{WFE}=10\text{RMSE}(\mathbf{Y}_{dyn}-\mathbf{\hat{Y}}_{dyn})+\text{RMSE }(\mathbf{Y}_{fix}-\mathbf{\hat{Y}}_{fix}), \tag{14}\]
where \(\mathbf{Y}_{dyn}\) and \(\mathbf{Y}_{fix}\) denote the dynamic region and the fixed region, respectively. The local crack prediction performance is more effectively represented by the WFE metric than by RMSE due to its emphasis on the local region. Similar to RMSE, a lower value of WFE indicates better reconstruction performance.
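A sketch of the three metrics (ours); scikit-image provides one plausible SSIM implementation of [56], and the dynamic-region mask is an assumed input.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def wfe(y, y_hat, dyn_mask):
    """Weighted fracture error (Equation 14): the RMSE on the dynamic
    fracture region is weighted 10x relative to the static remainder."""
    return (10 * rmse(y[dyn_mask], y_hat[dyn_mask])
            + rmse(y[~dyn_mask], y_hat[~dyn_mask]))

y = np.random.rand(64, 64)
y_hat = np.clip(y + 0.01 * np.random.randn(64, 64), 0.0, 1.0)
mask = np.zeros((64, 64), dtype=bool); mask[20:40, 20:40] = True
print(rmse(y, y_hat), structural_similarity(y, y_hat, data_range=1.0),
      wfe(y, y_hat, mask))
```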
#### 5.1.3 Baselines
We compare the performance of the proposed HOSSnet method against several existing methods that have been widely used for data reconstruction. Specifically, we implement HRU and CNN-LSTM as baselines. To better verify the effectiveness of each component in our proposed method, we further compare HOSSnet with its variant HOSSnet-F, which removes the perceptual loss from the complete model and only includes physical regularization. By comparing HOSSnet-F and CNN-LSTM, we show the improvement from incorporating physical regularization. We further verify the effectiveness of the perceptual loss by comparing HOSSnet-F and HOSSnet.
#### 5.1.4 Experimental Design
We evaluate the performance of our proposed method in two different scenarios.
First, we conduct the extrapolation experiment, which includes over-time and over-sample tests, to study how the proposed HOSSnet method helps simulate fracture change in both situations. Specifically, in the over-time test, we select 6 different cracks. We use 5 complete crack samples and the first 150 time steps of the sixth crack sample for training, and reconstruct the crack data in the remaining 150 time steps. In the
Figure 2: Model setup for the 2D uni-axial tensile failure problem with 20 pre-existing cracks.
over-sample test, we also use 6 different crack samples, each containing 300 time steps of crack data. We use 5 different crack samples for training and reconstruct the complete sequence of the remaining crack sample. Both over-time and over-sample tests are challenging since the micro-crack fracture changes over time following complex non-linear patterns (driven by a partial differential equation). Hence, in the over-time test, the model trained on the available data may not generalize to future data that look very different from the training data. Furthermore, it is more difficult to achieve complete-sequence reconstruction in the over-sample test because the model cannot capture any pattern of the testing data.
Second, we consider the case in which the intermediate time period of data is missing. For example, the fracture data is available at the beginning and end time periods but missing in the intermediate time steps. We can use the model trained using available data (beginning and ending) to reconstruct the fracture data in the missing time steps.
We refer to this test as the interpolation experiment. In this test, we use a single crack sample containing 300 available time steps. In the first case, we divide the data into 3 parts, each with 100 time steps of data. We use the first and last 100 time steps for training and the remaining intermediate 100 time steps for testing. In the other case, we use only 10% of the time steps as the training set by including every 10th time step; the remaining time steps form the test set.
#### 5.1.5 Training Settings
Data normalization is performed on both training and testing datasets to scale input variables to the range [0, 1]. The model is then trained with the ADAM optimizer [57]; the initial learning rate is set to 0.0005 and training runs for 500 epochs. We implement our models in TensorFlow 2.6 and Keras on a V100 GPU.
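A minimal Keras sketch of this setup is shown below. `model`, `x_train`, and `y_train` are placeholders for our network and data, and the batch size is our own assumption (the text does not state one); only the normalization range, optimizer, learning rate, and epoch count come from the description above.

```python
import tensorflow as tf

# Min-max normalization to [0, 1]; statistics are taken from the training
# split and reused for the test split (an assumption of this sketch).
x_min, x_max = x_train.min(), x_train.max()
x_train_n = (x_train - x_min) / (x_max - x_min)
x_test_n = (x_test - x_min) / (x_max - x_min)

# ADAM optimizer with the stated initial learning rate, trained for 500 epochs.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
              loss="mse")
model.fit(x_train_n, y_train, epochs=500, batch_size=8)
```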
### Cauchy \(\sigma_{11},\sigma_{12},\sigma_{22}\) \(\rightarrow\) Fracture
This section presents the experimental results for Cauchy stress features \(\rightarrow\) Fracture, covering the extrapolation over sample, extrapolation over time, and interpolation experiments.
#### 5.2.1 Extrapolation Over Sample
**Quantitative Results.** The upper section of Table 2 reports quantitative comparisons among all methods, namely the average RMSE, SSIM, and WFE over the first 50 time steps of the testing phase. Our proposed HOSSnet performs best under all three metrics: it obtains the lowest RMSE and WFE values and the highest SSIM values. Comparing HRU and CNN-LSTM shows the improvement from using the RTL structure. Furthermore, the comparison among CNN-LSTM, HOSSnet-F, and HOSSnet shows the effectiveness of incorporating the physics-guided regularization method (optical flow and perceptual loss). In particular, the perceptual loss brings the most significant performance improvement in terms of RMSE, SSIM, and WFE values.
**Temporal Analysis.** In the temporal analysis, we show how performance changes as we reconstruct fracture data over 60 time steps beyond the training data. We report the long-term prediction performance in terms of RMSE and WFE in Figure 3. Several observations stand out: (1) With larger time intervals between training data
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Experiment & Method & RMSE\(\downarrow\) & SSIM\(\uparrow\) & WFE\(\downarrow\) \\ \hline \multirow{4}{*}{Over Sample} & HRU & 0.057 & 0.965 & 0.091 \\ & CNN-LSTM & 0.046 & 0.971 & 0.083 \\ & HOSSnet-F & 0.038 & 0.979 & 0.077 \\ & HOSSnet & **0.028** & **0.982** & **0.060** \\ \hline \multirow{4}{*}{Over Time} & HRU & 0.023 & 0.985 & 0.061 \\ & CNN-LSTM & 0.018 & 0.985 & 0.052 \\ & HOSSnet-F & 0.011 & 0.991 & 0.034 \\ & HOSSnet & **0.009** & **0.995** & **0.024** \\ \hline \multirow{4}{*}{Interpolation} & HRU & 0.025 & 0.985 & 0.057 \\ & CNN-LSTM & 0.020 & 0.987 & 0.046 \\ \cline{1-1} & HOSSnet-F & 0.018 & 0.988 & 0.049 \\ \cline{1-1} & HOSSnet & **0.014** & **0.991** & **0.036** \\ \hline \end{tabular}
\end{table}
Table 2: Average reconstruction performance (measured by RMSE, SSIM, and WFE) over the first 50 time steps in the Cauchy \(\rightarrow\) Fracture experiment, for extrapolation over sample (upper), extrapolation over time (middle), and interpolation (lower), respectively.
and prediction data, performance degrades; in general, our proposed HOSSnet method outperforms the other baselines. (2) Comparing HRU and CNN-LSTM leads to a similar conclusion: the proposed RTL component brings significant improvement in long-term propagation. (3) The comparison among CNN-LSTM, HOSSnet-F, and HOSSnet again confirms the effectiveness of the proposed physics-guided regularization method and the additional perceptual loss.
**Visual Results.** Figure 4 shows the reconstructed fracture at multiple time steps (the 50th, 60th, 70th, 80th, and 90th time steps) after the training period. For each time step, we show the ground truth and the HOSSnet reconstruction. In the first few time steps, our proposed HOSSnet obtains near-ideal visual results, very similar to the ground truth. After several time steps (e.g., the 70th time step), the reconstruction becomes slightly worse than the ground truth but still effectively captures fine-level textures and patterns and achieves reasonable performance, with only a slight blur in the dynamically changing sub-region of the fracture. This again demonstrates the effectiveness of the proposed HOSSnet method in long-term reconstruction.
#### 5.2.2 Extrapolation Over Time
**Quantitative Results.** In this experiment, we compare the same set of methods as in the previous extrapolation over sample experiment. The middle section of Table 2 shows the quantitative results over the first 50 time steps of the testing set. We observe similar results: our proposed HOSSnet outperforms the other methods by a considerable margin. Each added component (RTL, physics-guided regularization, and perceptual loss) still yields a clear improvement, though the gains are smaller than in the over-sample experiment. This is because, in addition to the 5 complete crack sequences, we also train on the first 150 time steps of the testing crack, which helps the ML model capture the initial complex pattern and dynamic transformation of the testing data. In such a case, even a simple HRU model obtains a relatively reasonable result, so the improvements from the additional components are less pronounced than in the previous over-sample experiment.
Details of the temporal analysis and visual results for this extrapolation over time experiment are given in the supplementary file. The conclusions largely mirror those of the extrapolation over sample experiment: our proposed method achieves promising reconstruction performance in long-term propagation.
#### 5.2.3 Interpolation
**Quantitative Results.** In this single-crack experiment, we compare the same set of methods as in the two extrapolation experiments. The lower section of Table 2 shows the quantitative results, including the average RMSE, SSIM, and WFE over the first 50 time steps of the testing set. We reach the same conclusion: our proposed HOSSnet model outperforms the other baselines, and each new component brings significant improvement under all three evaluation metrics. In addition, the overall performance in the interpolation experiment is slightly worse than in the extrapolation over time experiment. This is because the interpolation test uses a single crack and trains only on the beginning and end of its sequence; without additional complete crack sequences (as in the extrapolation settings), the ML models have difficulty capturing the complete dynamics of the crack.
Details of the temporal analysis and visual results for this interpolation experiment are given in the supplementary file. The conclusions mirror the quantitative analysis: our proposed method achieves
Figure 3: Change in RMSE and WFE values produced by different models from the 1st to the 60th time step (at intervals of 2 time steps) in the extrapolation over sample experiment.
promising reconstruction performance in this interpolation condition.
### Fracture \(\rightarrow\) Fracture
This section presents the experimental results for Fracture \(\rightarrow\) Fracture, covering extrapolation over sample and over time. The settings are the same as in the Cauchy features \(\rightarrow\) Fracture experiments, except that the inputs are fracture damage fields.
#### 5.3.1 Extrapolation Over Sample
Table 3 gives the RMSE, SSIM, and WFE of all the networks. The results show the improvement from each regularization term. With physical regularization alone, the HOSSnet-F model does not exhibit a notable improvement over the CNN-LSTM model. With all regularization terms, our proposed HOSSnet method still yields the lowest RMSE and WFE and the highest SSIM. Examples of the fracture predicted by HOSSnet at different time steps are shown in Figure 5. Compared to the scenario that uses Cauchy stress as the input, the network performs better when the input is the fracture itself, because input and output then share the same physical quantity. However, fracture damage is harder to obtain in a survey, which makes this scenario harder to realize in the real world.
#### 5.3.2 Extrapolation Over Time
According to the results presented in Table 3, HOSSnet again demonstrates the best performance. This is because both the training and testing phases are carried out at distinct temporal intervals within the same crack; consequently, the HOSSnet prediction reaches the highest SSIM of 0.999 among all scenarios. Nonetheless, a distinct network must be trained for each crack before it can be applied to diverse fractures, which increases the computational expense of this scenario.
## 6 Conclusion
In this paper, we develop a new data-driven method, HOSSnet, that integrates a series of physics-based regularizations (optical flow and positive direction) to achieve spatio-temporal fracture reconstruction. We also introduce additional machine learning optimization approaches to further improve the reconstruction. Moreover, we design two experimental settings, Cauchy \(\rightarrow\) Fracture and Fracture \(\rightarrow\) Fracture, to show the robustness and applicability of our proposed HOSSnet model. Our experiments demonstrate the effectiveness of the proposed methods in improving fracture reconstruction in both extrapolation and interpolation tests. In future work, we will propose extensions that incorporate the underlying physical relationships (e.g., the governing partial differential equation (PDE)) into the proposed model to further improve reconstruction performance, and evaluate our proposed HOSSnet method on larger micro-crack regions.
## Acknowledgments
This work was funded by the Los Alamos National Laboratory (LANL) - Laboratory Directed Research and Development program under project number 20210542MFR.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Experiment & Method & RMSE\(\downarrow\) & SSIM\(\uparrow\) & WFE\(\downarrow\) \\ \hline \multirow{4}{*}{Over Sample} & HRU & 0.029 & 0.976 & 0.124 \\ & CNN-LSTM & 0.026 & 0.985 & 0.058 \\ & HOSSnet-F & 0.023 & 0.986 & 0.074 \\ & HOSSnet & **0.013** & **0.993** & **0.045** \\ \hline \multirow{4}{*}{Over Time} & HRU & 0.010 & 0.992 & 0.051 \\ & CNN-LSTM & 0.008 & 0.996 & 0.051 \\ \cline{1-1} & HOSSnet-F & 0.007 & 0.998 & 0.031 \\ \cline{1-1} & HOSSnet & **0.006** & **0.999** & **0.030** \\ \hline \end{tabular}
\end{table}
Table 3: Reconstruction performance (measured by RMSE, SSIM, and WFE) in the Fracture \(\rightarrow\) Fracture experiment, for extrapolation over sample (upper) and extrapolation over time (lower), respectively. |
2302.02759 | Detecting Reddit Users with Depression Using a Hybrid Neural Network
SBERT-CNN | Depression is a widespread mental health issue, affecting an estimated 3.8%
of the global population. It is also one of the main contributors to disability
worldwide. Recently it is becoming popular for individuals to use social media
platforms (e.g., Reddit) to express their difficulties and health issues (e.g.,
depression) and seek support from other users in online communities. It opens
great opportunities to automatically identify social media users with
depression by parsing millions of posts for potential interventions. Deep
learning methods have begun to dominate in the field of machine learning and
natural language processing (NLP) because of their ease of use, efficient
processing, and state-of-the-art results on many NLP tasks. In this work, we
propose a hybrid deep learning model which combines a pretrained sentence BERT
(SBERT) and convolutional neural network (CNN) to detect individuals with
depression with their Reddit posts. The sentence BERT is used to learn the
meaningful representation of semantic information in each post. CNN enables the
further transformation of those embeddings and the temporal identification of
behavioral patterns of users. We trained and evaluated the model performance to
identify Reddit users with depression by utilizing the Self-reported Mental
Health Diagnoses (SMHD) data. The hybrid deep learning model achieved an
accuracy of 0.86 and an F1 score of 0.86 and outperformed the state-of-the-art
documented result (F1 score of 0.79) by other machine learning models in the
literature. The results show the feasibility of the hybrid model to identify
individuals with depression. Although the hybrid model is validated to detect
depression with Reddit posts, it can be easily tuned and applied to other text
classification tasks and different clinical applications. | Ziyi Chen, Ren Yang, Sunyang Fu, Nansu Zong, Hongfang Liu, Ming Huang | 2023-02-03T06:22:18Z | http://arxiv.org/abs/2302.02759v2 | # Detecting Reddit Users with Depression Using a Hybrid Neural Network
###### Abstract
Depression is a widespread mental health issue, affecting an estimated 3.8% of the global population. It is also one of the main contributors to disability worldwide. Recently it is becoming popular for individuals to use social media platforms (e.g., Reddit) to express their difficulties and health issues (e.g., depression) and seek support from other users in online communities. It opens great opportunities to automatically identify social media users with depression by parsing millions of posts for potential interventions. Deep learning methods have begun to dominate in the field of machine learning and natural language processing (NLP) because of their ease of use, efficient processing, and state-of-the-art results on many NLP tasks. In this work, we propose a hybrid deep learning model which combines a pretrained sentence BERT (SBERT) and convolutional neural network (CNN) to detect individuals with depression with their Reddit posts. The sentence BERT is used to learn the meaningful representation of semantic information in each post. CNN enables the further transformation of those embeddings and the temporal identification of behavioral patterns of users. We trained and evaluated the model performance to identify Reddit users with depression by utilizing the Self-reported Mental Health Diagnoses (SMHD) data. The hybrid deep learning model achieved an accuracy of 0.86 and an F1 score of 0.86 and outperformed the state-of-the-art documented result (F1 score of 0.79) by other machine learning models in the literature. The results show the feasibility of the hybrid model to identify individuals with depression. Although the hybrid model is validated to detect depression with Reddit posts, it can be easily tuned and applied to other text classification tasks and different clinical applications.
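As a concrete illustration of the hybrid architecture described above, the sketch below encodes each of a user's posts with a pretrained SBERT model and runs a 1-D CNN over the sequence of post embeddings. The checkpoint name (`all-MiniLM-L6-v2`), the post cap of 64, and the layer sizes are illustrative assumptions, not the configuration used in this paper.

```python
import numpy as np
import tensorflow as tf
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim sentence embeddings

def user_matrix(posts, max_posts=64):
    """Stack SBERT embeddings of a user's posts into a fixed-size matrix."""
    emb = sbert.encode(posts[:max_posts])                  # (n_posts, 384)
    return np.pad(emb, ((0, max_posts - emb.shape[0]), (0, 0)))

# 1-D convolution over the post axis picks up temporal patterns across posts.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu",
                           input_shape=(64, 384)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # depression vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```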
## I Introduction
Depression is one of the most common mental illnesses, and its influence on society is a serious concern for public health communities. The American Psychiatric Association (APA) estimates that 6.9 percent of the population in the United States is currently affected [1], costing the United States \$233 billion in 2016 [2]. The World Health Organization (WHO) estimates that depression affects around 322 million people, at a cost of approximately 2.5 trillion dollars worldwide in 2017 [3]. WHO reported that the incidence of depression increased by a further 25% in the first year of the COVID-19 pandemic, mostly due to social lockdown, isolation, and reduced work [4]. Depression, characterized by a sad, empty, or irritable mood with somatic and cognitive changes, can worsen individuals' capacity to function, limiting performance in everyday duties and social life [5]. Depression is a leading cause of disability around the world [2] and can lead to suicide, one of the leading causes of death [6]. Moreover, it is estimated that around two-thirds of depression cases are undiagnosed. Undiagnosed and thus untreated depression has a negative impact on quality of life and workplace productivity [7]. Prevention and intervention strategies such as screening and early detection are required to reduce the impact of depression.
Recently it has become popular for individuals to use social media platforms (e.g., Reddit) to express their life difficulties and health issues (e.g., depression) and seek support from other users in online communities [8, 9]. Social media platforms allow the public to anonymously disclose personal information and discuss life experiences and problems (e.g., depression) that may be stigmatizing, with less fear of offline harm or consequences [10, 11]. Recent research studies have also confirmed that social media platforms are effective venues for self-disclosure and for seeking social support for mental health difficulties (e.g., depression) [10, 12]. This makes social media platforms, including Reddit, important resources for studying user narratives related to depression [13, 14].
Among these social media platforms, Reddit ranks as the 5th most visited website in the United States, with 52 million daily users and more than 430 million monthly active users [15, 16]. Reddit consists of a network of communities (subreddits) focused on specific topics such as depression, which
2310.02548 | Exact and soft boundary conditions in Physics-Informed Neural Networks
for the Variable Coefficient Poisson equation | Boundary conditions (BCs) are a key component in every Physics-Informed
Neural Network (PINN). By defining the solution to partial differential
equations (PDEs) along domain boundaries, BCs constrain the underlying boundary
value problem (BVP) that a PINN tries to approximate. Without them, unique PDE
solutions may not exist and finding approximations with PINNs would be a
challenging, if not impossible task. This study examines how soft loss-based
and exact distance function-based BC imposition approaches differ when applied
in PINNs. The well known variable coefficient Poisson equation serves as the
target PDE for all PINN models trained in this work. Besides comparing BC
imposition approaches, the goal of this work is to also provide resources on
how to implement these PINNs in practice. To this end, Keras models with
Tensorflow backend as well as a Python notebook with code examples and
step-by-step explanations on how to build soft/exact BC PINNs are published
alongside this review. | Sebastian Barschkis | 2023-10-04T03:16:03Z | http://arxiv.org/abs/2310.02548v1 | # Exact and soft boundary conditions in
###### Abstract
Boundary conditions (BCs) are a key component in every Physics-Informed Neural Network (PINN) [8]. By defining the solution to partial differential equations (PDEs) along domain boundaries, BCs constrain the underlying boundary value problem (BVP) that a PINN tries to approximate. Without them, unique PDE solutions may not exist and finding approximations with PINNs would be a challenging, if not impossible task. This study examines how soft loss-based [8] and exact distance function-based [10] BC imposition approaches differ when applied in PINNs. The well known variable coefficient Poisson equation serves as the target PDE for all PINN models trained in this work. Besides comparing BC imposition approaches, the goal of this work is to also provide resources on how to implement these PINNs in practice. To this end, Keras [5] models with Tensorflow [1] backend as well as a Python notebook with code examples and step-by-step explanations on how to build soft/exact BC PINNs are published alongside this review [2].
## 1 Introduction
Solving the Poisson equation is the most computationally expensive step in many incompressible flow simulations. MFIX's [11] fluid solver, for example, spends more than 80% of its time on solving the Poisson equation. This finding alone motivates the development of and search for new solvers that can accelerate this step. The advantages of faster Poisson solvers would be manifold: For one, PDE solutions could be found using fewer computational resources. For another, more time could be spent testing and evaluating algorithms that are currently constrained by time spent on solving the Poisson equation.
To this end, we examine PINNs, a promising class of neural PDE approximators. Their performance, training time, and accuracy vary depending on the availability of data, but also hinge on how this data is processed. The way a PINN learns labelled data points directly affects training and inference times. To better understand which PINN implementations perform best, this work compares two approaches that can be used to impose boundary conditions: the soft and the exact BC imposition approach.
These BC imposition methods have been discussed in detail by Raissi et al. [8] and Sukumar and Srivastava [10]. The primary goal of this work is to show how to embed them in PINNs and provide a flexible, re-usable workflow that can approximate the solution to a PDE. The target PDE in this work is the variable coefficient Poisson equation with Dirichlet boundary conditions.
## 2 Related works
A common approach to learning BCs in PINNs is the soft loss-based approach, where a BC data loss is part of the total loss. The deep learning framework that introduced PINNs [8] followed this approach by incorporating the mean-squared-error (MSE) of points found along domain boundaries into the overall loss. Using this approach, Raissi et al. [8] show that PINNs can successfully learn periodic and Dirichlet boundary conditions while approximating solutions to instances of the Schrödinger, Burgers, and Navier-Stokes equations. Many other PDEs have been solved with PINNs and similar MSE losses for boundary conditions: Bischof and Kraus [3], for instance, showed solutions to Kirchhoff's plate bending problem and the Helmholtz equation.
While having exact values at domain boundaries constrains the problem space, BCs are a sufficient and not a necessary condition for PINNs to converge. If knowledge on domain boundaries is given in another form, PINNs can still find PDE solutions. Raissi et al. [9] trained PINNs that successfully approximate fluid velocities and pressure in laminar flows even in the absence of specific boundary conditions. They show that a concentration field for a passive scalar with sufficient gradients near domain boundaries can compensate the lack of explicit BCs.
Recent works have looked into hard constraints for PDEs in PINNs. Negiar et al. [7] developed a differentiable PDE-constrained layer (PDE-CL) through which hard constraints can be placed on the interior of a domain. Their approach stands in contrast to previous work with hard constraints, which finds PDE solutions by enforcing hard constraints at domain boundaries (Lu et al. [6], Sukumar and Srivastava [10]).
The comparisons shown in this work will make use of the commonly used soft BC imposition approach introduced by Raissi et al. [8]. For exact BCs, Sukumar and Srivastava [10]'s distance-based method will be used.
## 3 Background
All PINNs from this study were trained to approximate Poisson's equation with Dirichlet boundary conditions. Special cases of Poisson's equation, such as Laplace's equation and Poisson's equation with zero-BC, were examined as well.
### Poisson's equation
We consider Poisson's equation in a 2-dimensional (2D) setting. More precisely, the PINNs from this work use the variation of Poisson's equation with variable coefficient and Dirichlet boundary conditions which can be expressed as follows
\[\nabla\cdot(a\nabla p(x,y)) =f(x,y),\qquad(x,y)\in\Omega \tag{1}\] \[p(x,y) =g(x,y),\qquad(x,y)\in\partial\Omega \tag{2}\]
where \(a\) is the variable coefficient, \(f\) the right-hand side (RHS), \(g\) the boundary condition (BC), and \(p\) the solution.
### Variations of Poisson's equation
By forcing certain variables in Poisson's equation (1) to zero we obtain PDE variations that can easily be approximated with the same PINNs. This work considers the cases where either \(f=0\) or \(g=0\). That is, Laplace's equation in the former and the zero BC Poisson equation in the latter case.
### Data generation
Training and validation data for coefficient \(a\), RHS \(f\), and BC \(g\) were generated artificially (i.e. not derived from experiments), uniformly and randomly. The generation process was as follows:
1. Distribute \(n\times n\) knots uniformly and in a grid pattern on the discretized target domain (Figure 1a).
2. Generate a Sobol sequence with given minimum and maximum values.
3. For each knot, select a value from the Sobol sequence.
4. Use a Gaussian process to interpolate from the values found at knots and sample interpolated values at every discrete x-y domain position.
5. Repeat this process for variables \(a\), \(f\), and \(g\).
Note that for \(g\), the above process was executed in a 1-dimensional (1D) setting, since for BCs only a vector that wraps around the 2D domain is needed. This data generation approach ensures that the resulting value distributions in \(a\), \(f\), and \(g\) are smooth and not completely random, i.e. the resulting datasets are free of discontinuities. Since PINN models compute physics losses and expect training data to be differentiable, smooth value distributions are highly desirable.
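A minimal sketch of this generation strategy is given below, using SciPy's Sobol sampler and a scikit-learn Gaussian process for the interpolation step; the knot count, resolution, and RBF length scale are illustrative assumptions, not the notebook's exact settings.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def generate_field(n_knots=4, res=32, vmin=-1.0, vmax=1.0, seed=0):
    """Sample knot values from a Sobol sequence and interpolate with a GP."""
    # Steps 1-3: uniform grid of knots; Sobol values scaled to [vmin, vmax].
    g = np.linspace(0.0, 1.0, n_knots)
    knots = np.array([(x, y) for x in g for y in g])
    sob = qmc.Sobol(d=1, scramble=True, seed=seed).random(len(knots))
    vals = qmc.scale(sob, vmin, vmax).ravel()
    # Step 4: a GP with RBF kernel interpolates smoothly between knot values.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(knots, vals)
    xs = np.linspace(0.0, 1.0, res)
    grid = np.array([(x, y) for x in xs for y in xs])
    return gp.predict(grid).reshape(res, res)
```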
## 4 Soft and exact BCs in PINNs
### Loss function
The total loss consists of a supervised data loss and a PDE loss:
\[\mathcal{L}_{Total}=\mathcal{L}_{Data}+\alpha\cdot\mathcal{L}_{PDE}\]
Both \(\mathcal{L}_{Data}\) and \(\mathcal{L}_{PDE}\) use the Mean-squared-error (MSE) metric.
\(\mathcal{L}_{Data}\) is used to compute the error at points near the domain boundary. While soft BC models rely on minimizing the error at these points, exact BC models are less affected by them, since they enforce BCs at domain boundaries and errors in these areas are therefore very small anyway. \(\mathcal{L}_{PDE}\), on the other hand, plays a key role in both soft and exact BC models: it measures the error between \(\nabla\cdot(a\nabla p)\) and RHS \(f\), and the solution in the domain interior depends on this error.
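A minimal TensorFlow sketch of this loss is shown below, assuming `model` maps \((x,y)\) pairs to \(p\), `xy_col`/`a_col`/`f_col` hold collocation coordinates, coefficient values, and RHS values as `(N, 1)`-shaped tensors, and `xy_bnd`/`g_bnd` hold the boundary points and their BC values; all names are illustrative, not the published notebook's API.

```python
import tensorflow as tf

def total_loss(model, xy_col, a_col, f_col, xy_bnd, g_bnd, alpha=0.1):
    """L_total = L_data + alpha * L_pde, both computed as MSE terms."""
    x, y = xy_col[:, :1], xy_col[:, 1:]
    with tf.GradientTape(persistent=True) as t2:
        t2.watch([x, y])
        with tf.GradientTape() as t1:
            t1.watch([x, y])
            p = model(tf.concat([x, y], axis=1))
        p_x, p_y = t1.gradient(p, [x, y])
        flux_x, flux_y = a_col * p_x, a_col * p_y   # a * grad(p)
    # Divergence of the flux: d(flux_x)/dx + d(flux_y)/dy
    div = t2.gradient(flux_x, x) + t2.gradient(flux_y, y)
    del t2
    loss_pde = tf.reduce_mean(tf.square(div - f_col))
    loss_data = tf.reduce_mean(tf.square(model(xy_bnd) - g_bnd))
    return loss_data + alpha * loss_pde
```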
### Soft BC models
When sampling points uniformly and randomly in domain space, the number of points placed at domain boundaries is usually insufficient for PINNs to learn BCs efficiently. As a result, training becomes inefficient, since the PDE loss computed at collocation points in the domain interior cannot converge sufficiently due to the lack of learned BC knowledge. A common strategy that compensates for this behavior is to selectively sample more points near domain boundaries (Figure 2b). These additional domain boundary points make it possible to minimize the data loss \(\mathcal{L}_{Data}\) more quickly and consequently also help to find the PDE solution values that minimize \(\mathcal{L}_{PDE}\) in the domain interior.
Figure 1: Knots (4x4) in domain and interpolated instances for \(a\) and \(f\) (32x32). (a): Knots (big dots) and x-y coordinates (small dots) in domain space. (b): \(a\) after interpolating from knots. (c): \(f\) after interpolating from knots.
### Exact BC models
The exact BC imposition approach with distance functions follows work from Sukumar and Srivastava [10]. Their approach is based on the idea that instead of learning BCs through a loss term, BCs can be hard-coded into PINNs too. By expressing the predicted solution as the sum of an interpolation function and a filtered raw network output, the exact BC becomes part of the predicted solution. I.e. the approximation \(\hat{p}\) from a neural network with exact BCs can be expressed as

\[\hat{p}(x,y)=G(x,y)+\phi(x,y)\cdot p_{\text{NN}}(x,y)\]

where \(G(x,y)\) is the interpolating function, \(\phi(x,y)\) the filter function, and \(p_{\text{NN}}(x,y)\) the raw network output.
#### 4.3.1 Interpolation function
\(G(x,y)\) is the function that interpolates BCs. That is, at any position \((x,y)\) in the domain, \(G\) returns an approximated solution value based only on the values found at domain boundaries. By
Figure 3: Exact BC enforcement in a fully-connected architecture: The BC is “injected” into the result tensor.
Figure 2: Points sampled in domain space: (a) 200 collocation points, sampled randomly and uniformly. (b) Collocation points with 20 additional boundary points per domain side. (Note: the number of points is shown for illustration purposes only and is not the number used in the actual models).
using an interpolation method such as Inverse-Distance-Weighting (IDW), it is possible to broadcast the BC to the domain interior with varying weight. IDW achieves this by measuring the distance from collocation points to each boundary point and then computing the weighted solution value based on the distance to the BC points.
\[G(\mathbf{x})=\frac{\sum_{i=1}^{N_{bc}}z_{i}/w_{i}}{\sum_{i=1}^{N_{bc}}1/w_{i}},\qquad w_{i}=|\mathbf{x}-\mathbf{x}_{i}|\]
Note that special treatment is required in situations where a collocation point coincides with a BC point, i.e. the case where \(w_{i}=0\). The resulting zero-division can be avoided in practice by adding \(\epsilon>0\) to \(w_{i}\).
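A minimal NumPy sketch of this IDW interpolation, including the \(\epsilon\) safeguard, is given below (array shapes and names are our own choices):

```python
import numpy as np

def idw_interpolate(x, x_bc, z_bc, eps=1e-8):
    """Inverse-distance-weighted BC interpolation G(x).

    x:    (N, 2) query/collocation points
    x_bc: (M, 2) boundary points; z_bc: (M,) BC values at those points
    """
    # Pairwise distances w_i = |x - x_i|; eps avoids division by zero
    # when a query point coincides with a boundary point.
    w = np.linalg.norm(x[:, None, :] - x_bc[None, :, :], axis=-1) + eps
    inv = 1.0 / w                       # (N, M) inverse-distance weights
    return (inv @ z_bc) / inv.sum(axis=1)
```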
#### 4.3.2 Filter function
Filter function \(\phi(x,y)\) forms the second part in exact BC models. It scales the network and reduces output contributions in areas close to the BC. \(\phi(x,y)\) should be smooth and chosen with domain knowledge in mind. Depending on the shape of the domain, the domain sides where exact BCs should be enforced, and the overall filtering strength, \(\phi(x,y)\) can take different shapes. For square domains and exact BCs along all domain sides, a good choice is
\[\phi(x,y)=x\cdot(1-x)\cdot y\cdot(1-y)\]
where \(x\) and \(y\) are locations in a domain with width and height in range \([0,1]\).
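Putting both parts together, a minimal sketch of the exact-BC forward pass is shown below (`net` is the raw network and `g_interp` the precomputed IDW interpolation \(G\) at the collocation points; the names are ours). Since \(G\) depends only on the fixed BC points, it can be precomputed once, e.g. with the IDW sketch above, rather than re-evaluated every training step.

```python
import tensorflow as tf

def phi(x, y):
    """Filter that vanishes on all four sides of the unit square."""
    return x * (1.0 - x) * y * (1.0 - y)

def exact_bc_output(net, xy, g_interp):
    """p_hat = G + phi * net(x, y).

    xy:       (N, 2) collocation coordinates
    g_interp: (N, 1) precomputed IDW interpolation G at those coordinates
    On the boundary phi = 0, so p_hat equals G there exactly.
    """
    x, y = xy[:, :1], xy[:, 1:]
    return g_interp + phi(x, y) * net(xy)
```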
## 5 Experiments
The PINN models with soft and exact BCs were trained on instances of the Poisson equation with non-zero BC, the Laplace equation, and the Poisson equation with zero BC. To ensure comparability, all datasets had the same dimension (128x128) and were used to train one soft BC and one exact BC model. I.e. every side-by-side comparison of soft and exact BC models results from training with the same instances of \(a\), \(f\), and \(g\).
All models were trained using fully-connected architectures with 4 layers and 128 neurons per layer. \(GELU\) activations were used in all layers except for the last layer, which used a linear activation. The optimizer was an Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and an initial learning rate of \(0.0005\). The learning rate was dropped on plateau with factor \(0.1\) after a patience of \(10\) epochs and \(\delta_{min}=0.01\).
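A Keras sketch of this architecture and optimizer setup is given below; we read the stated "4 layers" as four hidden layers, which is an assumption, and all variable names are ours.

```python
import tensorflow as tf

def build_pinn():
    """Fully-connected body: 4 hidden layers of 128 GELU units, linear head."""
    layers = [tf.keras.layers.InputLayer(input_shape=(2,))]
    layers += [tf.keras.layers.Dense(128, activation="gelu") for _ in range(4)]
    layers += [tf.keras.layers.Dense(1, activation="linear")]
    return tf.keras.Sequential(layers)

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4,
                                     beta_1=0.9, beta_2=0.999)
# Drop-on-plateau schedule matching the stated settings.
lr_schedule = tf.keras.callbacks.ReduceLROnPlateau(factor=0.1, patience=10,
                                                   min_delta=0.01)
```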
In both soft and exact BC models, the loss was computed using 10,000 (collocation) points, each of which contributed to the PDE or data loss; the latter was only computed for points in the vicinity of domain boundaries. In soft BC models, an additional 1,000 boundary points were sampled near domain boundaries. Exact BC models did not make use of additional boundary points; instead, to compute \(G\), they used 20 boundary condition points per domain side.
### Parameter selection
\(GELU\) was chosen over \(tanh\) as the activation function since models tended to yield slightly lower errors in the former case. The choice, however, is highly dependent on the training data. Poisson datasets where the random generator produced narrow value distributions (i.e. datasets where the values at knots (see Section 3.3) were close to each other) benefited more from \(tanh\) activations.
The choice of \(\alpha=0.1\) was based on the assumption that the number of collocation and boundary points has a ratio of 10:1. Exact BC models don't have to learn the BC through a loss term and could use a larger \(\alpha\). However, to ensure fairness during training, \(\alpha=0.1\) was used in both soft and exact BC models.
### Poisson's equation with non-zero BC
The plots in Figure 4 show the results for the solution \(p\) of an instance of Poisson's equation with non-zero BC. Grids for \(a\) and \(g\) resulted from knots with random values in the range \([-1,1]\); \(f\) was generated with knots in the range \([-10,10]\).
### Laplace's equation
The models approximating Laplace's equation, shown in Figure 5, are based on the same training data used in the models for Poisson's equation with non-zero BC (Section 5.2). Only RHS \(f=0\) differs.
### Poisson's equation with zero BC
As in the experiments with non-zero BCs, the zero-BC model for Poisson was trained with the same \(a\) and \(f\). Note that since \(g=0\), the solutions \(p\) are on a different scale.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset \# & BC Type & MAE & RMSE & MAPE & Avg. time / epoch (sec) \\ \hline \multirow{2}{*}{0} & Soft & 4.97e-2 & 7.01e-2 & 114.55\% & 1.78 \\ & Exact & 4.29e-2 & 6.46e-2 & 116.48\% & 1.86 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Error metrics for models solving Poisson’s equation with non-zero BC (plots in Figure 4)
Figure 4: Results from models solving Poisson’s equation with non-zero BC.
Figure 5: Results from models solving Laplace’s equation with non-zero BC.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset \# & BC Type & MAE & RMSE & MAPE & Avg. time / epoch (sec) \\ \hline \multirow{2}{*}{0} & Soft & 2.53e-2 & 3.19e-2 & 150.71\% & 1.79 \\ & Exact & 2.19e-2 & 2.99e-2 & 140.02\% & 1.84 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Error metrics for models solving Laplace’s equation with non-zero BC (plots in Figure 5)
## 6 Conclusions
The experiments with both Poisson's and Laplace's equation have shown that exact BC models generally yield smaller errors than soft BC models. Since exact BC models don't have to learn boundary points, training can focus on collocation points and minimizing the PDE loss. I.e. all effort goes into learning areas where no labelled data is present and where hidden physics should be recovered.
However, exact BC models are not always the better choice. The following points should be considered when using exact BCs:
* When increasing the number of BC points in exact BC models, both training and inference times will be prolonged due to the interpolation overhead (every collocation point has to evaluate every BC point).
* The number of BC points in exact BC models should be chosen carefully and with the domain size in mind. A small domain will require fewer BC points than a larger domain.
* Filter function \(\phi\) should be chosen carefully as well. Especially when the domain is non-square or when boundary conditions are not enforced everywhere. An inappropriate choice of \(\phi\) can significantly increase training times and lead to models that converge to sub-optimal solutions.
## 7 Additional resources
This work includes a notebook [2] with annotations and code examples that show how to implement soft and exact BCs in PINN models. All models in this notebook are based on Keras [5] with Tensorflow [1] as the backend. The code is open-source and readers are free to use it in their own PINN projects.
## 8 Future work
Multiple PINN instances from this study could be combined into a Mixture-of-Experts (MoE) PINN [4]. I.e. a network composed of multiple child PINNs and one gating network that combines their output (e.g. through the use of a softmax function). By using such an architecture, solutions would no longer be constrained to a single BC and the Poisson equation could be solved for arbitrary BCs. Future studies should evaluate the accuracy and performance of such an MoE Poisson PINN. Additionally, a comparison with existing Poisson neural operators that can solve for arbitrary BCs should be carried out.
Figure 6: Results from models solving Poisson’s equation with zero BC.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset \# & BC Type & MAE & RMSE & MAPE & Time / epoch (sec) \\ \hline \multirow{2}{*}{0} & Soft & 6.08e-2 & 8.39e-2 & 152.46\% & 1.77 \\ & Exact & 5.79e-2 & 8.27e-2 & 119.99\% & 1.86 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Error metrics for models solving Poisson’s equation with zero BC (plots in Figure 6)
While the above PINN ensembles would consist of PINNs trained on different instances of the same equation (e.g. multiple instances of Poisson non-zero BC), PINN ensembles could also consist of PINNs trained on different equations. The Laplace and Poisson zero-BC PINNs from this work, for example, could be combined into an MoE PINN that solves the Poisson non-zero BC equation. The gating function could leverage the superposition principle
\[p_{\text{Laplace}}+p_{\text{Poisson zero BC}}=p_{\text{Poisson non-zero BC}}\]
In general, more research should evaluate the benefits of splitting up PDEs into simpler ones, and approximating solutions with PINN ensembles.
|
2310.07430 | Non-backtracking Graph Neural Networks | The celebrated message-passing updates for graph neural networks allow
representing large-scale graphs with local and computationally tractable
updates. However, the updates suffer from backtracking, i.e., a message flowing
through the same edge twice and revisiting the previously visited node. Since
the number of message flows increases exponentially with the number of updates,
the redundancy in local updates prevents the graph neural network from
accurately recognizing a particular message flow relevant for downstream tasks.
In this work, we propose to resolve such a redundancy issue via the
non-backtracking graph neural network (NBA-GNN) that updates a message without
incorporating the message from the previously visited node. We theoretically
investigate how NBA-GNN alleviates the over-squashing of GNNs, and establish a
connection between NBA-GNN and the impressive performance of non-backtracking
updates for stochastic block model recovery. Furthermore, we empirically verify
the effectiveness of our NBA-GNN on the long-range graph benchmark and
transductive node classification problems. | Seonghyun Park, Narae Ryu, Gahee Kim, Dongyeop Woo, Se-Young Yun, Sungsoo Ahn | 2023-10-11T12:32:13Z | http://arxiv.org/abs/2310.07430v2 | # Non-Backtracking Graph Neural Network
###### Abstract
The celebrated message-passing updates for graph neural networks allow the representation of large-scale graphs with local and computationally tractable updates. However, the local updates suffer from backtracking, i.e., a message flows through the same edge twice and revisits the previously visited node. Since the number of message flows increases exponentially with the number of updates, the redundancy in local updates prevents the graph neural network from accurately recognizing a particular message flow for downstream tasks. In this work, we propose to resolve such a redundancy via the non-backtracking graph neural network (NBA-GNN) that updates a message without incorporating the message from the previously visited node. We further investigate how NBA-GNN alleviates the over-squashing of GNNs, and establish a connection between NBA-GNN and the impressive performance of non-backtracking updates for stochastic block model recovery. We empirically verify the effectiveness of our NBA-GNN on long-range graph benchmark and transductive node classification problems.
## 1 Introduction
Recently, graph neural networks (GNNs) (Kipf and Welling, 2016; Hamilton et al., 2017; Xu et al., 2019) have shown great success in various applications, including but not limited to, molecular property prediction (Gilmer et al., 2017) and community detection (Bruna and Li, 2017). Such success can be largely attributed to the message-passing structure of GNNs, which provides a computationally tractable way of incorporating the overall graph through iterative updates based on local neighborhoods. However, the message-passing structure also brings challenges due to the parallel updates and memory-less behavior of messages passed along the graph.
In particular, the message flow in a GNN is prone to backtracking, where the message from vertex \(i\) to vertex \(j\) is incorporated in the subsequent message from \(j\) to \(i\), e.g., Figure 1. Since the message-passing iteratively aggregates the information, the GNN inevitably encounters an exponential surge in the number of message flows, proportionate to the vertex degrees. This issue is compounded by backtracking, which accelerates the growth of message flows with redundant information.
Interestingly, despite these challenges, non-backtracking updates--a potential solution--have been largely overlooked in the existing GNN research, while they have been thoroughly investigated for non-GNN message-passing algorithms or random walks (Fitzner and van der Hofstad, 2013; Rappaport et al., 2017) (Figure 1). For example, given a pair of vertices \(i,j\), the belief propagation algorithm (Pearl, 1982) forbids an \(i\to j\) message from incorporating the \(j\to i\) message. Another example is the non-backtracking random walks (Alon et al., 2007) which are non-Markovian walks
Figure 1: Message flows of simple (above) and non-backtracking (below) updates.
that do not traverse the same edge twice and revisit the previous node. Such classic algorithms have demonstrated great success in applications like probabilistic graphical model inference and stochastic block models (Massoulie, 2014; Bordenave et al., 2015; Abbe & Sandon, 2015). In particular, the spectrum of the non-backtracking operator contains more useful information than that of the adjacency matrix in revealing the hidden structure of a graph model (Bordenave et al., 2015).
**Contribution.** In this work, we propose the non-backtracking graph neural network (NBA-GNN) which employs non-backtracking updates on the messages, i.e., forbids the message from vertex \(i\) to vertex \(j\) from being incorporated in the message from vertex \(j\) to \(i\). To this end, we associate the hidden features with transitions between a pair of vertices, e.g., \(h_{j\to i}\), and update them from features associated with non-backtracking transitions, e.g., \(h_{k\to j}\) for \(k\neq i\).
To motivate our work, we formulate "message flows" as the sensitivity of a GNN with respect to walks in the graph. Then we explain how the message flows are redundant: the GNN's sensitivity to a walk with backtracking transitions can be covered by that of other, non-backtracking walks. This redundancy is harmful to the GNN since the number of walks increases exponentially with the number of layers, and the GNN becomes insensitive to the information of any particular walk. Hence, reducing the redundancy by considering only non-backtracking walks helps the message-passing updates better recognize each walk's information. We further make a connection from our sensitivity analysis to the over-squashing phenomenon for GNNs (Topping et al., 2022; Black et al., 2023; Di Giovanni et al., 2023) in terms of access time.
Furthermore, we analyze our NBA-GNNs from the perspective of over-squashing and their expressive capability to recover sparse stochastic block models (SBMs). To this end, we prove that NBA-GNN improves the Jacobian-based measure of over-squashing (Topping et al., 2022) compared to its original GNN counterpart.
Next, we investigate NBA-GNN's proficiency in node classification within SBMs and its ability to distinguish between graphs originating from the Erdos-Renyi model or the SBM, from the results of (Stephan & Massoulie, 2022; Bordenave et al., 2015). Unlike traditional GNNs that operate on adjacency matrices and necessitate an average degree of at least \(\Omega(\log n)\), NBA-GNN demonstrates the ability to perform node classification with a substantially lower average degree bound of \(\omega(1)\) and \(n^{o(1)}\). Furthermore, the algorithm can accurately classify graphs even when the average degree remains constant.
Finally, we empirically evaluate our NBA-GNN on the long-range graph benchmark (Dwivedi et al., 2022) and transductive node classification problems (Sen et al., 2008; Pei et al., 2019). We observe that our NBA-GNN demonstrates competitive performance and even achieves state-of-the-art performance on the long-range graph benchmark. For the node classification tasks, we demonstrate that NBA-GNN consistently improves over its conventional GNN counterpart.
To summarize, our contributions are as follows:
* We propose NBA-GNN as a solution for the message flow redundancy problem in GNNs.
* We analyze how the NBA-GNN alleviates over-squashing and is expressive enough to recover sparse stochastic block models with an average degree of \(o(\log n)\).
* We empirically verify our NBA-GNNs to show state-of-the-art performance on the long-range graph benchmark and consistently improve over the conventional GNNs across various tasks.
## 2 Related works
**Non-backtracking algorithms.** Non-backtracking updates have been considered by many classical algorithms (Newman, 2013; Kempton, 2016). For example, belief propagation (Pearl, 1982) infers the marginal distribution on probabilistic graphical models and has demonstrated success for tree graphs (Kim & Pearl, 1983) and graphs with large girth (Murphy et al., 2013). Non-backtracking random walks have shown superior performance over simple random walks in terms of mixing time for regular expander graphs (Alon et al., 2007). They have been shown to yield better recovery for stochastic block models. Furthermore, the non-backtracking matrix has been shown to yield better spectral separation properties, and its eigenspace contains information about the hidden structure of
a graph model (Bordenave et al., 2015; Stephan and Massoulie, 2022). These properties have also been proven to enhance the performance of non-backtracking PageRank (Arrigo et al., 2019).
**Analyzing over-squashing of GNNs.** When a node receives information from a \(k\)-hop neighbor node, an exponential number of messages passes through node representations with fixed-sized vectors. This leads to the loss of information, denoted as _over-squashing_(Alon and Yahav, 2020), and has been formalized in terms of sensitivity (Topping et al., 2022; Di Giovanni et al., 2023). In particular, the sensitivity term is defined as the Jacobian of a node feature at a layer of the GNN with respect to the input node. Researchers have shown how the over-squashing phenomenon can be analyzed by bounding the sensitivity term via curvature (Topping et al., 2022), effective resistance (Black et al., 2023), or access time of random walks (Di Giovanni et al., 2023).
To resolve this issue, prior works have considered graph rewiring methods that add or remove edges to mitigate over-squashing by creating an optimal computation graph. One approach is using topological metrics, e.g., curvature (Topping et al., 2022) or effective resistance (Black et al., 2023). Another line of work uses global aspects, e.g., connecting a virtual global node (Cai et al., 2023) or using graph Transformers (Ying et al., 2021; Kreuzer et al., 2021; Rampasek et al., 2022; Shirzad et al., 2023; He et al., 2023).
**Expressivity of GNNs with the Weisfeiler-Lehman (WL) test.** In the past few years, researchers have conducted significant studies on various aspects of the expressive power of GNNs. One line of research involves comparing GNNs with the WL test (Leman and Weisfeiler, 1968) to assess their expressiveness. For instance, Xu et al. (2019) demonstrated that MPNNs are at best as powerful as the WL test and introduced the graph isomorphism network (GIN), which matches the representational power of the WL test.
**Expressivity of GNNs for the stochastic block model (SBM).** Furthermore, certain studies have analyzed the expressive power of GNNs using variations of the SBM (Holland et al., 1983). Fountoulakis et al. (2022) established conditions for the existence of graph attention networks (GATs) that can precisely classify nodes in the contextual stochastic block model (CSBM) with high probability. Similarly, Baranwal et al. (2022) investigated the effects of graph convolutions within a network on the XOR-CSBM. These works focused primarily on the probability distribution of node features, such as the distance between the means of feature vectors.
## 3 Non-backtracking graph neural network
### Motivation from sensitivity analysis
We first explain how the conventional message-passing updates are prone to backtracking. To this end, consider a simple, undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) where \(\mathcal{N}(i)\) denotes the neighbor of the node \(i\). Each node \(i\in\mathcal{V}\) is associated with a feature \(x_{i}\). Then the conventional graph neural networks (GNNs), i.e., message-passing neural networks (MPNNs) (Gilmer et al., 2017), iteratively updates the \(t\)-th layer node-wise hidden feature \(h_{i}^{(t)}\) as follows:
\[h_{i}^{(t+1)}=\phi^{(t)}\bigg{(}h_{i}^{(t)},\Big{\{}\psi^{(t)}\left(h_{i}^{(t )},h_{j}^{(t)}\right):j\in\mathcal{N}(i)\Big{\}}\,\bigg{)}, \tag{1}\]
where \(\phi^{(t)}\) and \(\psi^{(t)}\) are architecture-specific non-linear update and permutation invariant aggregation functions, respectively. Our key observation is that the message from the node feature \(h_{i}^{(t)}\) to the node feature \(h_{j}^{(t+1)}\) is reincorporated in the node feature \(h_{i}^{(t+2)}\); e.g., Figure 3(a) shows the computation graph of conventional GNNs where redundant messages are incorporated.
**Sensitivity analysis.** To concretely describe the backtracking nature of message-passing updates, we formulate the sensitivity of the final node feature \(h_{i}^{(T)}\) with respect to the input as follows:
\[\sum_{j\in\mathcal{V}}\frac{\partial h_{i}^{(T)}}{\partial h_{j}^{(0)}}=\sum _{s\in\mathcal{W}(i)}\prod_{t=1}^{T}\frac{\partial h_{s(t)}^{(t)}}{\partial h _{s(t-1)}^{(t-1)}}\,, \tag{2}\]
where \(h_{i}^{(0)}=x_{i}\), \(\mathcal{W}(i)\) denotes the set of \(T\)-step walks ending at node \(i\), and \(s(t)\) denotes the \(t\)-th node in the walk \(s\in\mathcal{W}(i)\). Intuitively, this equation shows that a GNN with \(T\) layer recognizes
the graph via an aggregation of random walks of length \(T\). Our key observation from Equation (2) is that the feature \(h_{i}^{(T)}\) is insensitive to the information in an initial node feature \(h_{j}^{(0)}\), due to the information being "squashed" by the aggregation over the exponential number of walks in \(\mathcal{W}(i)\). A similar analysis has been conducted on how a node feature \(h_{i}^{(T)}\) is insensitive to a far-away initial node feature \(h_{j}^{(0)}=x_{j}\), i.e., the over-squashing phenomenon of GNNs (Topping et al., 2022).
**Redundancy of walks with backtracking.** In particular, a walk \(s\) randomly sampled from \(\mathcal{W}(i)\) is likely to contain a transition that backtracks, i.e., \(s(t)=s(t+2)\) for some \(t<T\). Then the walk \(s\) would be redundant since the information is contained in two other walks in \(\mathcal{W}(i)\): \(s(0),\ldots,s(t+1)\) and \(s(0),\ldots,s(t+1),s(t)=s(t+2),s(t+3),...,s(T)\). See Figure 2 for an illustration. This idea leads to the conclusion that non-backtracking walks, i.e., walks that do not contain backtracking transitions, are sufficient to express the information in the walks \(\mathcal{W}(i)\). Since the exponential number of walks in \(\mathcal{W}(i)\) causes the GNN to be insensitive to a particular walk information, it makes sense to design a non-backtracking GNN that is sensitive to the constrained set of non-backtracking walks. We note that similar concepts of redundancy for GNNs have been studied in prior works (Jia et al., 2020; Chen et al., 2022).
**Relation to over-squashing.** Finally, we point out an intriguing motivation for our work in terms of over-squashing. In particular, we note that Di Giovanni et al. (2023) analyzed the lower bound for the Jacobian obstruction that measures the degree of over-squashing in terms of access time with respect to a simple random walk. They conclude that the degree of over-squashing, i.e., the size of Jacobian obstruction, is higher for a pair of nodes with longer access time.
Hence, to design a GNN architecture robust to the over-squashing phenomenon, one could (i) propose a random walk that has shorter access time in general for a pair of nodes in the graph and (ii) design a GNN that aligns with the optimized random walk. Non-backtracking random walks have been empirically shown and believed to generally yield faster access time than simple random walks (Lin and Zhang, 2019; Fasino et al., 2023). Hence, one could aim to design a GNN architecture that aligns with the non-backtracking random walks.
However, to the best of our knowledge, there is no formal proof of scenarios where non-backtracking random walks yield smaller access time. As a motivating example, we provide a theoretical result comparing the access time of non-backtracking and simple random walks for tree graphs.
**Proposition 1**.: _Given a tree \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and a pair of nodes \(i,j\in\mathcal{V}\), the access time of begrudgingly backtracking random walk is equal or smaller than that of a simple random walk, where the equality holds if and only if the random walk length is 1._
The begrudgingly backtracking random walk (Rappaport et al., 2017) modifies non-backtracking random walks to remove "dead ends" for tree graphs. Please refer to Appendix A for the proof.
### Method description
In this section, we present our **Non-BA**cktracking GNNs (NBA-GNN) with the motivation described in Section 3.1. Given an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), our NBA-GNN associates a pair of hidden features \(h_{i\to j}^{(t)},h_{j\to i}^{(t)}\) for each edge \(\{i,j\}\). Then the non-backtracking message passing update for a hidden feature \(h_{j\to i}^{(t)}\) is defined as follows:
\[h_{j\to i}^{(t+1)}=\phi^{(t)}\bigg{(}h_{j\to i}^{(t)};\Big{\{}\psi^{(t)} \left(h_{k\to j}^{(t)},h_{j\to i}^{(t)}\right):k\in\mathcal{N}(j)\setminus\{ i\}\Big{\}}\,\bigg{)}, \tag{3}\]
where \(\phi^{(t)}\) and \(\psi^{(t)}\) are backbone-specific non-linear update and permutation-invariant aggregation functions at the \(t\)-th layer, respectively. For example, \(\psi^{(t)}\) and \(\phi^{(t)}\) are multi-layer perceptron and summation over a set for the graph isomorphism network (Xu et al., 2019), respectively. Given the
Figure 2: Two non-backtracking walks (right) are sufficient to express information in a walk with backtracking transition.
update in Equation (3), one can observe that the message \(h^{(t)}_{i\to j}\) is never incorporated in the message \(h^{(t+1)}_{j\to i}\), and hence the update is free from backtracking.
**Initialization and node-wise aggregation of messages.** The message at the \(0\)-th layer \(h^{(0)}_{i\to j}\) is initialized by encoding the node features \(x_{i},x_{j}\), and the edge feature \(e_{ij}\) using a non-linear function \(\phi\). After updating hidden features for each edge based on Equation (3), we apply permutation-invariant pooling over all the messages for graph-wise predictions. Since we use hidden features for each edge, we construct the node-wise predictions at the final layer as follows:
\[h_{i}=\sigma\left(\rho\left\{h^{(T)}_{j\to i}:j\in\mathcal{N}(i) \right\},\rho\left\{h^{(T)}_{i\to j}:j\in\mathcal{N}(i)\right\} \right), \tag{4}\]
where \(\sigma\) is a non-linear aggregation function with different weights for incoming edges \(j\to i\) and outgoing edges \(i\to j\), \(\rho\) is a non-linear aggregation function invariant to the permutation of nodes in \(\mathcal{N}(i)\). We provide a computation graph of NBA-GNNs in Figure 2(b) to summarize our algorithm.
**Begrudgingly backtracking update.** While the messages from our update are resistant to backtracking, a message \(h^{(t)}_{j\to i}\) may get trapped in node \(i\) for the special case when \(\mathcal{N}(i)=\{j\}\). To resolve this issue, we introduce a simple trick coined begrudgingly backtracking update (Rappaport et al., 2017) that updates \(h^{(t+1)}_{i\to j}\) using \(h^{(t)}_{j\to i}\) only when \(\mathcal{N}(i)=\{j\}\). We empirically verify the effectiveness of begrudgingly backtracking updates in Section 5.3.
**Implementation.** To better understand our NBA-GNN, we provide an example of non-backtracking message-passing updates with a GCN backbone (Kipf and Welling, 2016), coined NBA-GCN. The message update at the \(t\)-th layer of NBA-GCN can be written as follows:
\[h^{(t+1)}_{j\to i}=h^{(t)}_{j\to i}+\sigma_{\text{GCN}}\left( \frac{1}{|\mathcal{N}(j)|-1}\textbf{W}^{(t)}\sum_{k\in\mathcal{N}(j)\setminus \{i\}}h^{(t)}_{k\to j}\right), \tag{5}\]
where \(\sigma_{\text{GCN}}\) is an element-wise nonlinear function, e.g., rectified linear unit (Agarap, 2018), \(\textbf{W}^{(t)}\) is the weight matrix, and the messages are normalized by their population \(|\mathcal{N}(j)|-1\).
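To make the update concrete, the following self-contained NumPy sketch implements one NBA-GCN layer over directed-edge features; all names are ours, and the mean over non-backtracking incoming edges realizes the \(1/(|\mathcal{N}(j)|-1)\) normalization for simple graphs.

```python
import numpy as np

def nba_gcn_layer(h, edges, W, act=lambda z: np.maximum(z, 0.0)):
    """One NBA-GCN update (Eq. 5) on directed-edge features.

    h:     (E, d) features, one row per directed edge
    edges: list of (src, dst) pairs, aligned with the rows of h;
           both directions of every undirected edge are included
    W:     (d, d) weight matrix
    """
    # Index incoming directed edges by their destination node.
    incoming = {}
    for idx, (src, dst) in enumerate(edges):
        incoming.setdefault(dst, []).append(idx)
    h_new = h.copy()
    for idx, (j, i) in enumerate(edges):
        # Messages k -> j with k != i (non-backtracking constraint).
        nbr = [e for e in incoming.get(j, []) if edges[e][0] != i]
        if nbr:  # dead ends are handled by begrudging updates in the paper
            agg = h[nbr].mean(axis=0) @ W
            h_new[idx] = h[idx] + act(agg)
    return h_new
```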
## 4 Theoretical analysis
In this section, we analyze the proposed NBA-GNN framework. To be specific, we show that (a) our NBA-GNN improves the upper bound for sensitivity-based measures of GNN over-squashing and (b) NBA-GNNs can detect the underlying structure of SBMs even for very sparse graphs.
### Sensitivity analysis on over-squashing
We first analyze how NBA-GNNs alleviate the over-squashing issue. To this end, following Topping et al. (2022), we use the Jacobian of a node-wise output with respect to another node's initial feature as a way to assess the over-squashing effect. We first derive the Jacobian quantification of over-squashing for NBA-GNNs, and then demonstrate that the upper bound for the Jacobian is larger than that of the conventional GNNs derived by Topping et al. (2022).

Figure 3: Computation graphs of a typical GNN (a) and NBA-GNN (b) predicting node “0”. (a) Redundant messages in typical GNNs increase the size of the computation graph. (b) NBA-GNN assigns a pair of features to each edge and updates them via non-backtracking updates. For node-wise prediction, e.g., for “0”, we aggregate the messages on the edges connected to the node at the last layer.

Figure 4: Begrudgingly backtracking update (solid).
To derive our analysis, we introduce the non-backtracking matrix \(B\in\{0,1\}^{2|\mathcal{E}|\times 2|\mathcal{E}|}\) and the incidence matrix \(C\in\mathbb{R}^{2|\mathcal{E}|\times|\mathcal{V}|}\), which describe the NBA-GNN message-passing and node-wise aggregation via linear operations, respectively. To be specific, the non-backtracking matrix \(B\) and the incidence matrix \(C\) are defined as follows:
\[B_{(\ell\to k),(j\to i)}=\begin{cases}1&\text{if }k=j,\ell\neq i\\ 0&\text{otherwise}\end{cases},\qquad C_{(k\to j),i}=\begin{cases}1&\text{if }j=i \text{ or }k=i\\ 0&\text{otherwise}\end{cases}.\]
We also let \(D\) denote the degree matrix of NBA-GNN counting the number of outgoing edges for each edge, i.e., it is a diagonal matrix with \(D_{(j\to i),(j\to i)}=\sum_{\ell\to k}B_{(j\to i),(\ell\to k)}\) for each index \(j\to i\). Next, we define \(\widehat{B}\) as the normalized non-backtracking matrix augmented with self-loops, i.e., \(\widehat{B}=(D+I)^{-\frac{1}{2}}(B+I)(D+I)^{-\frac{1}{2}}\). Finally, we let \(\tilde{C}\) denote a matrix where \(\tilde{C}_{(k\to j),i}=C_{(k\to j),i}+C_{(j\to k),i}\). Then, one obtains the following sensitivity bound of NBA-GNN.
**Lemma 1**.: _Consider two nodes \(i,j\in\mathcal{V}\) with distance \(T\) given a \((T-1)\)-layer NBA-GNN as described in Equation (3) and Equation (4). Suppose \(|\nabla\phi^{(t)}|,|\nabla\sigma|\leq\alpha\), \(|\nabla\psi^{(t)}|,|\nabla\rho|\leq\beta\), and \(|\nabla\phi|\leq\gamma\) for \(0\leq t<T\). Then the following holds:_
\[\left\|\frac{\partial h_{j}}{\partial x_{i}}\right\|\leq(\alpha\beta)^{T} \gamma(\tilde{C}^{\top}\widehat{B}^{(T-1)}\tilde{C})_{j,i}.\]
We provide the proof in Appendix B. Lemma 1 states how _the over-squashing effect is controlled by the power of \(\widehat{B}\)_. Consequently, one can infer that increasing this upper bound likely mitigates the GNN over-squashing effect (Topping et al., 2022). From this motivation, we provide an analysis to support our claim that NBA-GNN suffers less from the over-squashing effect due to its larger sensitivity bound.
**Theorem 1**.: _Let \(\widehat{A}\) denote the degree-normalized adjacency matrix. Then, for any pair of nodes \(i,j\in\mathcal{V}\) with distance \(T\), the sensitivity bound of NBA-GNN is larger than that of conventional GNNs (Topping et al., 2022), i.e., \((\tilde{C}^{\top}\widehat{B}^{T-1}\tilde{C})_{j,i}\geq(\widehat{A}^{T})_{j,i}\). For \(d\)-regular graphs, \((\tilde{C}^{\top}\widehat{B}^{T-1}\tilde{C})_{j,i}\) decays more slowly, at rate \(O(d^{-T})\), while \((\widehat{A}^{T})_{j,i}\) decays with \(O((d+1)^{-T})\)._
Note that \((\alpha\beta)^{T}(\widehat{A}^{T})_{j,i}\) provides an upper bound for the sensitivity term in conventional GNNs (Topping et al., 2022). We provide the proof in Appendix B. Our proof is based on comparing the degree-normalized number of non-backtracking and simple walks from node \(i\) to node \(j\).
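For small graphs, the matrices \(B\), \(C\), \(\widehat{B}\), and \(\tilde{C}\) can be built explicitly and the two sides of Theorem 1 compared numerically. The dense sketch below is illustrative only; the \(O(|\mathcal{E}|^{2})\) construction, function name, and edge-ordering convention are our own assumptions.

```python
import numpy as np

def nba_matrices(edges, num_nodes):
    """Dense B, C, B_hat, C_tilde from an undirected edge list (small graphs only)."""
    directed = list(edges) + [(v, u) for u, v in edges]  # edge a and a + |E| are reverses
    m, E = len(directed), len(edges)
    B = np.zeros((m, m))
    C = np.zeros((m, num_nodes))
    for a, (l, k) in enumerate(directed):        # row index: edge l -> k
        for b, (j, i) in enumerate(directed):    # column index: edge j -> i
            if k == j and l != i:                # B_{(l->k),(j->i)} = 1
                B[a, b] = 1.0
    for a, (k, j) in enumerate(directed):        # C_{(k->j),i} = 1 if i = j or i = k
        C[a, k] = C[a, j] = 1.0
    rev = np.concatenate([np.arange(E) + E, np.arange(E)])
    C_tilde = C + C[rev]                         # add the reverse edge's incidence row
    scale = 1.0 / np.sqrt(B.sum(axis=1) + 1.0)   # diagonal of (D + I)^{-1/2}
    B_hat = scale[:, None] * (B + np.eye(m)) * scale[None, :]
    return B, C, B_hat, C_tilde
```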
### Expressive power of NBA-GNN in the lens of SBMs
In the literature on the expressive capabilities of GNNs, comparisons with the well-known \(k\)-WL test are common. However, even when certain models surpass the \(k\)-WL test in performance, evaluating their relative merits remains a nontrivial task. Furthermore, due to the substantial performance gap between the 1-WL and 3-WL tests, many algorithms fall into a range between these two tests, making it more difficult to compare them with each other. It is also worth noting that comparing GNNs with the WL test does not always accurately reflect their performance on real-world datasets.
To address these issues, several studies have turned to spectral analysis of GNNs. From a spectral viewpoint, GNNs can be seen as functions of the eigenvectors and eigenvalues of the given graph. NT & Maehara (2019) showed that GNNs operate as low-pass filters on the graph spectrum, and Balcilar et al. (2020) analyzed the use of various GNNs as filters to extract the relevant graph spectrum and measure their expressive power. Moreover, Oono & Suzuki (2020) argue that the expressive power of GNNs is influenced by the topological information contained in the graph spectrum.
The eigenvalues and the corresponding adjacency matrix eigenvectors play a pivotal role in establishing the fundamental limits of community detection in SBM, as evidenced by Abbe et al. (2015), Abbe & Sandon (2015), Hajek et al. (2016), Yun & Proutiere (2016), and Yun & Proutiere (2019).
The adjacency matrix exhibits a spectral separation property, and an eigenvector containing information about the assignments of the vertex community becomes apparent (Lei and Rinaldo, 2015). Furthermore, by analyzing the eigenvalues of the adjacency matrix, it is feasible to determine whether a graph originates from the Erdos-Renyi (ER) model or the SBM (Erdos et al., 2013; Avrachenkov et al., 2015). However, these spectral properties are particularly salient when the average degree of the graph satisfies \(\Omega(\log n)\). For graphs with average degrees \(o(\log n)\), vertices with higher degrees predominate, affecting eigenvalues and complicating the discovery of the underlying structure of the graph (Benaych-Georges et al., 2019).
In contrast, the non-backtracking matrix exhibits several advantageous properties even for constant-degree cases. In (Stephan and Massoulie, 2022), the non-backtracking matrix demonstrates a spectral separation property and establishes the presence of an eigenvector containing information about vertex community assignments, when the average degree only satisfies \(\omega(1)\) and \(n^{o(1)}\). Furthermore, Bordenave et al. (2015) have demonstrated that by inspecting the eigenvalues of the non-backtracking matrix, it is possible to discern whether a graph originates from the ER model or the SBM, even when the graph's average degree remains constant. This capability sets NBA-GNNs apart and enhances their performance in both node and graph classification tasks, especially in sparse settings. These lines of reasoning lead to the formulation of the following theorem.
**Theorem 2**.: _(**Informal**) Assume that the average degree in the stochastic block model satisfies the conditions of being at least \(\omega(1)\) and \(n^{o(1)}\). In such a scenario, the NBA-GNN can map from graph \(\mathcal{G}\) to node labels._
**Theorem 3**.: _(**Informal**) Suppose we have a pair of graphs with a constant average degree, one generated from the stochastic block model and the other from the Erdos-Renyi model. In this scenario, the NBA-GNN is capable of distinguishing between them._
For an in-depth exploration of this argument, please refer to Appendix C. The rationale behind these valuable properties of the non-backtracking matrix \(B\) in sparse scenarios lies in the fact that the matrix \(B^{k}\) exhibits similarity to the \(k\)-hop adjacency matrix, while \(A^{k}\) is mainly influenced by high-degree vertices. For these reasons, NBA-GNNs would outperform traditional GNNs in both node and graph classification tasks, particularly in sparse graph environments.
## 5 Experiment
In this section, we assess the effectiveness of NBA-GNNs across multiple benchmarks on graph classification, graph regression, and node classification tasks. Our method shows competitive performance compared to graph Transformers within long-range graph benchmarks, and robust improvements in handling transductive node classification tasks. We also conduct ablation studies to verify our method, and provide experimental details in Appendix D.
Table 1: Comparison of conventional MPNNs and NBA-GNNs in the long-range graph benchmark, with and without Laplacian positional encoding. We also denote the relative improvement by Imp.

| Model | Peptides-func AP ↑ | Imp. | Peptides-struct MAE ↓ | Imp. | PascalVOC-SP F1 ↑ | Imp. |
| --- | --- | --- | --- | --- | --- | --- |
| GCN | 0.5930 ± 0.0023 | | 0.3496 ± 0.0013 | | 0.1268 ± 0.0006 | |
| + NBA | 0.6951 ± 0.0024 | +17% | 0.2656 ± 0.0009 | +22% | 0.2537 ± 0.0054 | +100% |
| + NBA+LapPE | **0.7206 ± 0.0028** | +22% | **0.2472 ± 0.0008** | +29% | **0.3005 ± 0.0010** | +137% |
| GIN | 0.5498 ± 0.0079 | | 0.3547 ± 0.0045 | | 0.1265 ± 0.0076 | |
| + NBA | 0.6961 ± 0.0045 | +27% | 0.2534 ± 0.0025 | +29% | 0.3040 ± 0.0119 | +140% |
| + NBA+LapPE | **0.7071 ± 0.0067** | +29% | **0.2424 ± 0.0010** | +32% | **0.3223 ± 0.0010** | +155% |
| GatedGCN | 0.5864 ± 0.0077 | | 0.3420 ± 0.0013 | | 0.2873 ± 0.0219 | |
| + NBA | 0.6429 ± 0.0062 | +10% | 0.2539 ± 0.0011 | +26% | 0.3910 ± 0.0010 | +36% |
| + NBA+LapPE | **0.6982 ± 0.0014** | +19% | **0.2466 ± 0.0012** | +28% | **0.3969 ± 0.0027** | +38% |
### Long-range graph benchmark
The long-range graph benchmark (LRGB) (Dwivedi et al., 2022) is a set of tasks that require learning long-range interactions. We validate our method using three datasets from the LRGB benchmark: graph classification (Peptides-func), graph regression (Peptides-struct), and node classification (PascalVOC-SP). We use performance scores from (Dwivedi et al., 2022) and from each baseline paper: subgraph-based GNNs (Abu-El-Haija et al., 2019; Michel et al., 2023; Giusti et al., 2023), graph Transformers (Kreuzer et al., 2021; Rampasek et al., 2022; Shirzad et al., 2023; He et al., 2023), and graph rewiring methods (Gasteiger et al., 2019; Gutteridge et al., 2023). For NBA-GNNs with and without begrudgingly backtracking, we report the one with better performance. Furthermore, LapPE, i.e., Laplacian positional encoding (Dwivedi et al., 2020), is applied, as it enhances the performance of NBA-GNNs in most cases.
As one can see in Table 1, NBA-GNNs show improvements regardless of the backbone GNN they are combined with, i.e., GCN (Kipf and Welling, 2016), GIN (Xu et al., 2019), and GatedGCN (Bresson and Laurent, 2017). Furthermore, even against the variety of baselines in Table 2, at least one NBA-GNN variant is competitive with the best baseline in LRGB. It is also noteworthy that the improvement of NBA-GNNs is larger on dense graphs: PascalVOC-SP has an average degree of 8, while Peptides-func and Peptides-struct have an average degree of 2.
### Transductive node classification tasks
We conduct experiments on three citation networks (Cora, CiteSeer, and Pubmed) (Sen et al., 2008) and three heterophilic datasets (Texas, Wisconsin, and Cornell) (Pei et al., 2019) for transductive node classification. In our experiments, we employ various GNN architectures (Kipf and Welling, 2016; Hamilton et al., 2017; Velickovic et al., 2018; Li et al., 2016), along with their corresponding NBA versions, as illustrated in Table 3. The results indicate that the NBA operation improves the performance of all GNN variants. Specifically, it demonstrates significant enhancements, particularly on heterophilic datasets. Given that related nodes in heterophilic graphs are often widely separated (Zheng et al., 2022), the ability of NBA-GNN to alleviate over-squashing plays a vital role in classifying nodes in such scenarios.

Table 2: Evaluation of NBA-GNN on the LRGB benchmark. The best result in each column is shown in bold; performance within a standard deviation of one another is considered equal. Non-reported values are denoted by -.

| Method | Model | Peptides-func AP ↑ | Peptides-struct MAE ↓ | PascalVOC-SP F1 ↑ |
| --- | --- | --- | --- | --- |
| GNNs | GCN | 0.5930 ± 0.0023 | 0.3496 ± 0.0013 | 0.1268 ± 0.0060 |
| | GIN | 0.5498 ± 0.0079 | 0.3547 ± 0.0045 | 0.1265 ± 0.0076 |
| | GatedGCN | 0.5869 ± 0.0077 | 0.3420 ± 0.0013 | 0.2873 ± 0.0219 |
| | GatedGCN+PE | 0.6069 ± 0.0045 | 0.3357 ± 0.0066 | 0.2860 ± 0.0085 |
| Subgraph GNNs | MixHop-GCN | 0.6592 ± 0.0066 | 0.2921 ± 0.0023 | 0.2506 ± 0.0133 |
| | MixHop-GCN+LapPE | 0.6843 ± 0.0049 | 0.2614 ± 0.0023 | 0.2218 ± 0.0174 |
| | PathNN | 0.6816 ± 0.0046 | 0.2545 ± 0.0032 | - |
| | CIN++ | 0.6569 ± 0.0117 | 0.2523 ± 0.0013 | - |
| Transformers | Transformer+LapPE | 0.6326 ± 0.0126 | 0.2529 ± 0.0016 | 0.2694 ± 0.0098 |
| | GraphGPS+LapPE | 0.6535 ± 0.0041 | 0.2500 ± 0.0005 | 0.3748 ± 0.0119 |
| | SAN+LapPE | 0.6384 ± 0.0012 | 0.2683 ± 0.0043 | 0.3230 ± 0.0039 |
| | Exphormer | 0.6527 ± 0.0043 | 0.2481 ± 0.0007 | 0.3966 ± 0.0207 |
| | Graph MLP-Mixer/ViT | 0.6970 ± 0.0080 | 0.2449 ± 0.0016 | - |
| Rewiring methods | DIGL+MPNN | 0.6469 ± 0.0019 | 0.3173 ± 0.0067 | 0.2824 ± 0.0039 |
| | DIGL+MPNN+LapPE | 0.6530 ± 0.0056 | 0.2616 ± 0.0018 | 0.2921 ± 0.0038 |
| | DRew-GCN+LapPE | 0.7150 ± 0.0044 | 0.2536 ± 0.0015 | 0.1851 ± 0.0022 |
| | DRew-GIN+LapPE | 0.7126 ± 0.0045 | 0.2606 ± 0.0014 | 0.2692 ± 0.0059 |
| | DRew-GatedGCN+LapPE | 0.6977 ± 0.0026 | 0.2539 ± 0.0067 | 0.3314 ± 0.0024 |
| **NBA-GNNs (Ours)** | NBA-GCN | 0.6951 ± 0.0024 | 0.2656 ± 0.0009 | 0.2537 ± 0.0054 |
| | NBA-GCN+LapPE | **0.7270 ± 0.0028** | 0.2472 ± 0.0088 | 0.3005 ± 0.0010 |
| | NBA-GIN | 0.6961 ± 0.0045 | 0.2775 ± 0.0057 | 0.3040 ± 0.0119 |
| | NBA-GIN+LapPE | 0.7071 ± 0.0077 | **0.2424 ± 0.0010** | 0.3223 ± 0.0063 |
| | NBA-GatedGCN | 0.6429 ± 0.0062 | 0.2539 ± 0.0011 | 0.3910 ± 0.0010 |
| | NBA-GatedGCN+LapPE | 0.6982 ± 0.0014 | 0.2466 ± 0.0012 | **0.3969 ± 0.0277** |
### Ablation studies
Here, we conduct ablation studies to empirically verify our framework. For simplicity, we use BA for backtracking GNNs and BG for begrudgingly backtracking GNNs. All experiments are averaged over 3 seeds. We use hyper-parameters for GCN from Tonshoff et al. (2023).
Non-backtracking vs. backtracking GNNs. We first verify whether the performance improvements indeed stem from the non-backtracking updates. To this end, we compare our NBA-GNN with the backtracking variant, termed BA-GNN, which similarly assigns a pair of hidden features to each edge but allows the backtracking update, i.e., uses \(h_{i\to j}^{(t)}\) for updating \(h_{j\to i}^{(t+1)}\). We report the corresponding results in Figures 5(a) and 5(b). Here, one can observe that NBA-GNN outperforms BA-GNN consistently across varying numbers of layers. Intriguingly, one can also observe that BA-GNN consistently outperforms the naive backbone, i.e., GCN.
graph classification tasks on SBMs. Additionally, we demonstrate that NBA-GNNs achieve competitive performance on the LRGB benchmark and outperform conventional GNNs across various tasks.
**Limitations.** The space complexity of our framework is larger than that of conventional GNNs. It creates \(2|\mathcal{E}|\) messages considering directions, and \(\sum_{i\in\mathcal{V}}\binom{d_{i}}{2}\) connections between them, where \(d_{i}\) is the degree of node \(i\). Therefore, investigating ways to reduce the space complexity of our framework would be an interesting direction for future work.
**Ethics Statement.** The authors are committed to upholding the ICLR Code of Ethics and maintaining the highest ethical standards in research and scholarly activities. Additionally, it is crucial to acknowledge the potential ethical considerations associated with the utilization of our model. Although the model can offer valuable insights and advantages, there exists a possibility of misuse or unintended consequences. For instance, it could be exploited to extract sensitive information, such as affiliations or political inclinations, from social networks. Therefore, it is important to apply such models carefully and actively work towards preventing any adverse side effects. Responsible implementation is essential to ensure that the potential benefits of our work are maximized while minimizing any potential risks or adverse implications.
**Reproducibility.** The code for all experiments conducted in this paper is included in the accompanying zip file. We also provide comprehensive proofs for the theoretical analysis in Appendices A, B, and C. Further information about the experiments, including details about the datasets and models, can be found in Section 5. For a deeper dive into experiment specifics such as hyperparameter settings, please refer to Appendix D.
|
2303.07520 | Multi-class Skin Cancer Classification Architecture Based on Deep
Convolutional Neural Network | Skin cancer detection is challenging since different types of skin lesions
share high similarities. This paper proposes a computer-based deep learning
approach that will accurately identify different kinds of skin lesions. Deep
learning approaches can detect skin cancer very accurately since the models
learn each pixel of an image. Sometimes humans can get confused by the
similarities of the skin lesions, which we can minimize by involving the
machine. However, not all deep learning approaches can give better predictions.
Some deep learning models have limitations, leading the model to a
false-positive result. We have introduced several deep learning models to
classify skin lesions to distinguish skin cancer from different types of skin
lesions. Before classifying the skin lesions, data preprocessing and data
augmentation methods are used. Finally, a Convolutional Neural Network (CNN)
model and six transfer learning models such as Resnet-50, VGG-16, Densenet,
Mobilenet, Inceptionv3, and Xception are applied to the publicly available
benchmark HAM10000 dataset to classify seven classes of skin lesions and to
conduct a comparative analysis. The models will detect skin cancer by
differentiating the cancerous cells from the non-cancerous ones. The models'
performance is measured using performance metrics such as precision, recall, f1
score, and accuracy. We receive accuracy of 90, 88, 88, 87, 82, 77, and 73 percent
for inceptionv3, Xception, Densenet, Mobilenet, Resnet, CNN, and VGG16,
respectively. Furthermore, we develop five different stacking models such as
inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception,
Resnet50-Vgg16, and stack-six for classifying the skin lesions and found that
the stacking models perform poorly. We achieve the highest accuracy of 78
percent among all the stacking models. | Mst Shapna Akter, Hossain Shahriar, Sweta Sneha, Alfredo Cuzzocrea | 2023-03-13T23:16:18Z | http://arxiv.org/abs/2303.07520v1 | # Multi-class Skin Cancer Classification Architecture Based on Deep Convolutional Neural Network
###### Abstract
Skin cancer is a deadly disease. Melanoma is a type of skin cancer responsible for the high mortality rate. Early detection of skin cancer can enable patients to treat the disease and minimize the death rate. Skin cancer detection is challenging since different types of skin lesions share high similarities. This paper proposes a computer-based deep learning approach that will accurately identify different kinds of skin lesions. Deep learning approaches can detect skin cancer very accurately since the models learn each pixel of an image. Sometimes humans can get confused by the similarities of the skin lesions, which we can minimize by involving the machine. However, not all deep learning approaches can give better predictions. Some deep learning models have limitations, leading the model to a false-positive result. We have introduced several deep learning models to classify skin lesions to distinguish skin cancer from different types of skin lesions. Before classifying the skin lesions, data preprocessing and data augmentation methods are used. Finally, a Convolutional Neural Network (CNN) model and six transfer learning models such as Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, and Xception are applied to the publicly available benchmark HAM10000 dataset to classify seven classes of skin lesions and to conduct a comparative analysis. The models will detect skin cancer by differentiating the cancerous cells from the non-cancerous ones. The models' performance is measured using performance metrics such as precision, recall, f1 score, and accuracy. We receive accuracy of 90, 88, 88, 87, 82, 77, and 73 percent for inceptionv3, Xception, Densenet, Mobilenet, Resnet, CNN, and VGG16, respectively. Furthermore, we develop five different stacking models such as inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six for classifying the skin lesions and find that the stacking models perform poorly. We achieve the highest accuracy of 78 percent among all the stacking models.
Index Terms--Skin cancer; Transfer learning; CNN; Densenet; VGG-16; Resnet-50; Inceptionv3; Xception; Mobilenet
## I Introduction
The superficial layer of skin, called the epidermis, consists of Squamous, Basal, and Melanocyte cells. Squamous cells form the outermost layer. Basal cells are the epidermis' lowermost cells. Melanocyte cells protect deeper layers of skin from sun exposure by producing melanin, a brown pigment substance. Due to ultraviolet light exposure, DNA mutations induce the growth of skin cells, leading to skin cancer [1]. Melanoma, Squamous Cell Carcinoma, and Basal Cell Carcinoma are the substantial groups of skin cancer associated with Squamous, Basal, and Melanocyte cells. Worldwide, almost 10 million skin cancer deaths took place in 2020. According to the World Health Organization, it is estimated that, globally, one-third of all diagnosed cancer cases are skin cancer. Nowadays, skin cancer is a global public health issue that causes approximately 5.4 million newly identified skin cancer incidences each year in the United States [2, 3]. However, melanoma alone causes three-fourths of all skin cancer-related deaths, about 10,000 deaths each year in the United States. In Europe, over 100,000 cases, and in Australia, nearly 15,229 new cases of melanoma are accounted for annually [4]. Moreover, an increasing trend of skin cancer has been recorded in the past decades. In the United Kingdom, the percentage of melanoma has increased by 119 percent since the 1990s; in the same duration, it has increased by 250 percent in the United States [5]. Skin cancer is an alarming issue, and it should be detected as early as possible. The traditional diagnostic method for detecting skin cancer is usually the biopsy method, which requires removing a portion of tissue from the patient's body so that it can be analyzed in the laboratory [6]. The whole procedure is painful, time-consuming, and costly. Sometimes, patients may get into trouble due to the hassle of visiting the hospital.
Recently, the most popular non-surgical tools used for diagnosis are macroscopic and dermoscopic images [7]. Macroscopic images suffer from low resolution since they are usually captured with a camera or a mobile phone [8]. Dermoscopy images are high-resolution skin images derived from visualizing the deeper skin structures [9]. Since multiple skin cancer types share similar symptoms, it becomes challenging for dermatologists to diagnose even with dermoscopy images. Expert dermatologists are limited by their studies and experiences, and it is not possible for a human to recognize all possible appearances of skin cancer. It is found that dermatologists diagnose skin cancer with an average accuracy of 62 to 80 percent [10, 11, 12]. The accuracy varies with the experience of the dermatologist, and worse, the performance can drop further for less experienced dermatologists [11]. Such conditions of skin cancer diagnosis can be very dangerous for cancer patients who receive false-negative results, and they do nothing to improve the deadly situation worldwide.
Nowadays, technology has become so advanced that it plays a significant role in the medical sector. The research community has made significant progress in developing computer-aided diagnosis tools to overcome this life-threatening issue [8, 13, 14]. Modern technology has brought us the concept of a neural network that can perform magnificently for classifying images in the medical domain. However, previous investigations fail to extend their study to multiple classes in skin cancer classification [15, 16, 17, 18, 12]. Additionally, those studies are limited to exploring a few deep learning models [19, 20]. A model must learn each pixel of an image properly to detect the cancerous cell accurately. There are limitations in each model, which prevent some models from giving accurate results. Therefore, we cannot be sure which model will work best. Previously, some models have performed very well in the medical sector. Since the underlying concept of a deep learning model is like a black box, we cannot even say which model will work best on which kind of dataset. Therefore, we have made a comparative analysis by training several neural network models such as Mobilenet, Inceptionv3, Resnet-50, Vgg-16, Xception, Densenet, and a Convolutional Neural Network (CNN).
The deep learning models are capable of learning seven types of skin lesions, namely melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, and vascular lesion, and of distinguishing the cancerous cells from non-cancerous cells. At first, we preprocess the data by resizing it to 120X120 resolution. Then we augment the dataset using augmentation methods such as rotation, horizontal and vertical flipping, std normalization, random zooming, width and height shifting, and ZCA whitening. Finally, we feed the images to the neural network models. Furthermore, we develop five stacking ensemble models, namely inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six, and feed the images to these models to check how the stacking ensemble models perform on the skin cancer dataset. We achieve 90 percent accuracy using inceptionv3, which is the highest accuracy among all the models, including the stacking ensemble models. Our proposed method outperforms expert dermatologists, which will contribute to the medical sector by saving many lives. Our comparative analysis has been done using pre-trained deep learning models, which are more efficient than simple models. We have proposed the stacking ensemble models using the weights of existing deep learning models. Our experiment has given a new observation that will help future researchers working on ensemble learning models. The overall process will help to identify which models are appropriate for making the best decision for detecting skin cancer disease.
The rest of this paper is arranged as follows. Section 2 provides the background needed for the study. The data sources, preprocessing methods, and models used in this work for skin cancer classification are discussed in Section 3. The simulation results based on the classification algorithms and the comparison using the derived results are analyzed in Section 4. Finally, this paper is summarized in Section 5.
## II Background and Literature Review
Many investigations have been done on the topic of image classification. We have gone through some of the related papers, which helped us significantly improve our analysis.
Previously, M. Vidya and M. V. Karki [21] showed a machine-learning approach for detecting melanoma skin cancer. Their approach includes five phases: data acquisition, preprocessing of data, segmentation, feature extraction, and classification. They preprocessed the dataset to remove unwanted noises such as artifacts, low contrasts, hairs, veins, skin colors, moles, etc. After that, they used a segmentation process called Geodesic Active Contours (GAC). The features of the shape and edge information are extracted using HOG. Finally, they applied SVM, KNN, Naive Bayes, and Neural Networks to the extracted features. They obtained 97.8 percent accuracy using the SVM classifier and 85 percent specificity using the KNN classifier.
K. Manasa and D. G. V. Murthy [22] used VGG16 and Resnet-50 models for classifying skin cancer disease using malignant and benign images. They used 3297 images to train their models, of which 1497 images belong to the malignant class, and 1800 images belong to the benign class. They got 80 percent accuracy for the VGG16 model and 87 percent accuracy for the Resnet50 model.
M. Hasan et al. [23] used a convolutional neural network to classify cancer images. Their dataset contained benign and malignant classes. Those images are converted into greyscale images for better CPU usage. The preprocessed data are fed into the convolutional neural network. Finally, they evaluated their model using precision, recall, specificity, f1 score, and accuracy. They got 89.5 percent accuracy on the test dataset. U. B. Ansari and T. Sarode [24] showed an image preprocessing method for detecting skin cancer. Their system is implemented by using the Gray Level Co-occurrence Matrix (GLCM) and a Support Vector Machine (SVM). GLCM is used to extract features from the image. The features are then fed to the SVM classifier for making the classification result. Before extracting the features, they preprocessed the data by using three methods, Grayscale Conversion, Noise removal, and Image enhancement, to reduce unwanted distortions. They preprocessed the images to get the important features from the image. Using their approach, they achieved 95 percent accuracy on the test dataset.
P. G. Brindha et al. [25] showed a comparative study of SVM and CNN for detecting the types of skin cancer. Their image preprocessing method includes reducing the channel of the image by converting the original image into a greyscale image. They used SVM and CNN models to classify skin
cancer, where they observed that SVM produced a better result.
T. Saba [26] showed a review on skin cancer analysis. They reviewed the investigations that have been done on the classification of skin cancer. The investigation found that most of the previous research works have been done using the SVM, CNN, and ANN models. Some of the research has been done using the segmentation process, which we have already discussed earlier.
M. A. Kassem et al. [27] proposed a modified GoogleNet model for classifying eight classes of skin lesions. They added more filters to each layer to enhance and reduce noise. They replaced the last three layers in two different ways. The last three layers have been dropped out and replaced with a fully connected layer, a softmax layer, and a classification output layer. That change has been made to increase the probability of the target class. Secondly, the last two layers have been dropped. The original fully connected layer has been kept as same to detect the outliers. They achieved 63 percent accuracy using the original GoogleNet model and 81 percent accuracy using their proposed model, which indicates that their proposed model works better for classification purposes.
D. N. Le et al. [28] used Transfer learning techniques such as pre-trained ResNet50, VGG16, and MobileNet models in combination with focal loss and class weights for classifying the skin cancer. To balance the classes, they used weights in each of the classes. Higher weights were given to the classes with fewer samples, whereas lower weights were assigned to the classes with more samples. Using their approach, they achieved 93 percent average accuracy on the test data.
I. A. Ozkan and M. Koklu [29] used four different machine learning algorithms, Artificial Neural Network (ANN), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Decision Tree (DT), for classifying melanoma skin cancer. They achieved 92.50 percent accuracy for ANN, 89.50 percent for SVM, 82.00 percent for KNN, and 90.00 percent for DT, which indicates that ANN has the best classification performance.
M. Elgamal [30] used two hybrid approaches to identify skin cancer. At first, they extracted the features using discrete wavelet transformation. After that, the features from the images were reduced using Principal Component Analysis. Finally, the features were fed to the artificial neural network and the k-nearest neighbor classifier to perform the classification. Their approach gave 95 percent accuracy and 97.5 percent accuracy in the two classifiers, respectively.
J. Daghrir et al. [31] classified melanoma skin cancer using a Convolutional Neural Network (CNN), a Support Vector Machine (SVM), and a K-Nearest Neighbor (KNN) model. Before classifying the images, they segmented them using Otsu's method. They extracted features from the segmented images and fed those features to the classifiers. Finally, they combined all three models and made predictions based on majority votes. Their proposed combined model gave the best result, which is 88.4 percent accuracy; individually, the KNN model gave 57.3 percent accuracy, the SVM model gave 71.8 percent accuracy, and the CNN gave 88.4 percent accuracy.
M. Q. Khan et al. [32] proposed an image processing technique to detect and distinguish melanoma from nevus. At first, they used the Gaussian filter to remove the noise from the images. After that, they used SVM for classifying melanoma and nevus skin cancer. Their proposed methodology achieved almost 96 percent accuracy.
## III Methodology
Since the architectures are not developed for multi-classs classification, we propose the generalized architecture for the multi-class classification of skin cancer, shown in Figure 1. At first, all dermoscopic skin cancer images are preprocessed to meet the requirement of models before feeding. The processed images are then fed to the architecture for feature extraction and finetuning. Finally, the input images are divided into seven classes of skin cancer, i.e. Melanocytic Nevi, Melanoma, Benign Keratosis, Actinic Keratosis, Vascular Lesions, Der-matofibroma, and Basal Cell Carcinoma. The classifiers such as InceptionV3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG16 are designed for classifying these seven skin lesion types. Using the weights of the aforementioned models, five different stacking models have been developed. The models are named inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six.
Figure 1 illustrates a high-level schematic representation of the classification with existing deep learning models.
### Dataset
The HAM10000 dataset, a large collection of multi-source dermatoscopic images, has been used in this work [33]. It is downloaded from the Kaggle website [https://www.kaggle.com/kmader/skin-cancer-mnist-ham10000](https://www.kaggle.com/kmader/skin-cancer-mnist-ham10000).
The dataset consists of 10,015 skin lesion images of 7 classes. The classes are Melanocytic nevi (6705 images), Melanoma (1113 images), Benign keratosis (1099 images), Basal cell carcinoma (514 images), Actinic keratosis (327 images), Vascular Lesions (142 images), and Dermatofibroma (115 images). All dermoscopic images have a resolution of 600 x 450 pixels, and the channel is three. The images are taken with a dermatoscopy instrument, a type of magnifier used to take pictures of skin lesions.

Fig. 1: Process for Multi-class skin cancer classification.
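As a quick check of these class counts, the Kaggle release ships a metadata file that can be tallied directly; a small sketch, assuming the standard file name and `dx` diagnosis column of the public dataset:

```python
import pandas as pd

meta = pd.read_csv('HAM10000_metadata.csv')  # included in the Kaggle download
# 'dx' holds the diagnosis code: nv, mel, bkl, bcc, akiec, vasc, df
print(meta['dx'].value_counts())
```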
Figure 2 shows a sample of skin lesion types.
### Preprocessing
Raw data is not considered a well-prepared input for deep learning models since samples can have different sizes and contain noise. The transfer learning models used here accept input images of 224X224 pixels or smaller. Therefore, we have preprocessed the dataset by resizing the images from 600 x 450 pixels to 120X120 pixels, keeping the channel count the same as before. We did not remove noise such as hair, discoloration, and other artifacts, as we wanted our models to learn the noise and thus perform well on data where such noise is present.
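A minimal sketch of this resizing step together with the augmentation pipeline listed in the abstract (rotation, horizontal and vertical flips, std normalization, random zoom, width and height shifts, and ZCA whitening), using Keras; the exact parameter values are not stated in the paper, so the ranges below are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# resize from 600x450 to 120x120 while keeping all three channels
images = tf.image.resize(raw_images, (120, 120)).numpy()  # raw_images: [N, 450, 600, 3]

datagen = ImageDataGenerator(
    rotation_range=15,                  # random rotation (illustrative range)
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1,                     # random zooming
    width_shift_range=0.1,
    height_shift_range=0.1,
    samplewise_std_normalization=True,  # std normalization
    zca_whitening=True,                 # ZCA whitening
)
datagen.fit(images)                     # required to estimate the ZCA statistics
train_flow = datagen.flow(images, labels, batch_size=32)
```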
### Classification Models and Fine Tuning
We perform modifications on the architectures such as Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, and Xception for performing multi-class classification. Deep learning architecture customizations include
1) dense layers with 'relu' activation.
2) dropout layers and softmax layers at the bottom of the architecture.
3) improvement in the parameters' values.
Then, we fine-tune the models using the HAM10000 dataset for classifying skin cancer disease.
1) CNN: A Convolutional Neural Network is a class of neural networks. It is used for image processing and classification. A CNN contains a convolutional layer, a pooling layer, and a fully connected layer. The convolutional layer is the core building block of the CNN model. The convolutional layer performs a dot product between two matrices; one matrix is the kernel, and another is a portion of the input image. The kernel size is spatially smaller than the input image, but the kernel dimension is the same as the input image. A pooling layer is another building block of a CNN. It reduces the spatial size of CNN, which helps reduce the network's parameters and computation. Finally, the fully connected layer is used to provide the final output. The final convolutional neural network output is converted to a flattened one-dimensional input fed to the fully connected layer. The final fully connected layers use the softmax activation function, which gives the probability of input being in a particular class. Finally, we trained the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and Adam optimizer [34, 35].
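A minimal Keras sketch of such a CNN; the paper does not list exact layer sizes, so the filter and unit counts here are illustrative.

```python
from tensorflow.keras import layers, models, optimizers

cnn = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(120, 120, 3)),
    layers.MaxPooling2D((2, 2)),            # pooling reduces the spatial size
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                       # flatten to a one-dimensional input
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(7, activation='softmax'),  # probability over the seven classes
])
cnn.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
            loss='categorical_crossentropy', metrics=['accuracy'])
cnn.fit(train_flow, epochs=30)
```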
2) Inceptionv3: Inceptionv3 was first introduced by Szegedy et al. [36] in 2015. It is a convolutional neural network for image analysis tasks such as image classification, object localization, and segmentation. The network is an extended version of the GoogleNet model. The Inceptionv3 model avoids computational complexity since it concatenates multiple convolutional filters of different sizes into a new filter, which allows the model to reduce the number of parameters to be trained. Therefore, the classification performance remains good while keeping a smaller number of parameters. Inceptionv3 has become popular among researchers since it can be efficiently trained over a huge dataset. In addition, we have included a dense layer with 'relu' activation, Dropout, and softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and the Adam optimizer.
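The fine-tuning recipe described here (a pretrained base followed by the added dense 'relu', Dropout, and seven-way softmax layers) can be sketched as below; the hidden width and dropout rate are illustrative assumptions, and the same pattern applies to the Xception, Densenet, Mobilenet, Resnet-50, and VGG-16 variants by swapping the base class.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models, optimizers

base = InceptionV3(weights='imagenet', include_top=False,
                   input_shape=(120, 120, 3), pooling='avg')
x = layers.Dense(256, activation='relu')(base.output)  # added dense layer
x = layers.Dropout(0.5)(x)                             # added dropout layer
out = layers.Dense(7, activation='softmax')(x)         # seven-output softmax layer
model = models.Model(base.input, out)
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_flow, epochs=30)
```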
3) Xception: The Xception model was first developed by a Google researcher, Chollet [37]. Xception is a deep convolutional neural network built with a linear combination of depth-wise separable convolutions with residual connections. This novel deep neural network architecture was inspired by the Inception model, where the Inception layers are replaced with depth-wise separable convolutions. The concept of building the architecture with fully depth-wise separable convolutions makes it less complex and more efficient than other deep convolutional neural networks. Moreover, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to fine-tune the model on the dataset. Finally, we have fine-tuned the model on 9617 images (for 30 epochs) with a learning rate of 0.001 and the Adam optimizer with a momentum of 0.9.
4) Densenet: A Densely Connected Convolutional Network (Densenet) was first proposed by Gao Huang, Zhuang Liu, and their team in 2017 [38]. It is a deep convolutional neural network that uses dense connections between the layers, where each layer connects to every other layer in a feed-forward manner. It diminishes the vanishing gradient issue and requires fewer parameters to train the model, so it is advantageous to use in the computer vision field. Moreover, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to fine-tune the model on the dataset. Finally, we have fine-tuned the model on 9617 images (for 30 epochs) with a learning rate of 0.001 and the Adam optimizer with a momentum of 0.9.

Fig. 2: Sample skin cancer images from the HAM10000 dataset: (a) Actinic keratosis (b) Basal cell carcinoma (c) Benign keratosis-like lesions (d) Dermatofibroma (e) Melanocytic nevi (f) Melanoma (g) Vascular lesions.
5) Mobilenet: Andrew G. Howard et al. [39] first introduced the MobileNet architecture. Among deep neural networks, Mobilenet is a lightweight model that is appropriate for reducing computational cost and time. To reduce the model size and computation, MobileNet uses depthwise separable convolutions instead of standard convolutions. A depthwise separable convolution is a factorized convolution that splits a standard convolution into two convolutions: a depthwise convolution and a pointwise convolution. A pointwise convolution is a 1x1 convolution. The depthwise convolution filters, whereas the pointwise convolution combines; their composition is the depthwise separable convolution operation. The architecture of MobileNet is built with separable convolutions except for the first layer, which is built with a full convolutional layer. In addition, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and the Adam optimizer.
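The factorization is easy to see in a short Keras sketch; this is an illustrative block, not MobileNet's exact configuration.

```python
from tensorflow.keras import layers

def depthwise_separable_block(x, filters, stride=1):
    # depthwise step: one 3x3 filter per input channel (the filtering stage)
    x = layers.DepthwiseConv2D((3, 3), strides=stride, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # pointwise step: a 1x1 convolution that combines the filtered channels
    x = layers.Conv2D(filters, (1, 1), padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)
```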
6) Resnet-50: A Residual Neural Network (ResNet) is a kind of Artificial Neural Network (ANN) that forms a network by stacking residual blocks on top of each other. Resnet has many variants, and the most popular networks are ResNet-34, ResNet-50, and ResNet-101. Each variant follows the same concept of Resnet; the only difference is in the number of layers. Resnet-50 works on 50 neural network layers. Many layers are useful for solving complex problems, as each layer deals with a unique task, but deeper networks show a degradation issue. Usually, the degradation problem is caused either by the initialization of the network, by the optimization function, or by vanishing or exploding gradients. The Resnet model aims to avoid such issues. The strength of the Resnet model is the skip connection, which lies at the core of the residual blocks and is responsible for avoiding the degradation problem. Skip connections work in two ways. Firstly, they alleviate the vanishing gradient issue by creating an alternate shortcut for passing the gradient. Secondly, they allow the model to learn an identity function that ensures that the higher layers work almost as well as the lower ones. Since the model is trained on more than a million images, it can classify a small dataset accurately. In addition, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and the Adam optimizer [40].
7) VGG-16: VGG-16 is a state-of-the-art deep neural network for analyzing image input. The model was used to win the ILSVRC (ImageNet) competition in 2014 and is considered one of the excellent vision model architectures to date. It is a large network and contains 138 million parameters. The 16 in VGG16 refers to its 16 layers with weights. The unique thing about VGG16 is that it maintains convolution layers with 3x3 filters, stride 1, and same padding, and max pool layers with 2x2 filters and stride 2, throughout the whole architecture. Finally, the model ends with 2 fully connected layers followed by a softmax for output. In addition, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and the Adam optimizer [41].
### Proposed stacking models
1. Inceptionv3-Inceptionv3: The Inceptionv3-Inceptionv3 stacking model is developed using the weights derived from two Inceptionv3 models. The models are trained with the same input but individually. The weights are saved in a folder for further processing and are then fed to a decision tree model that works as a meta-model. The meta-model is a fast learner since its inputs are already the predictions from the classifiers.

2. Densenet-Mobilenet: The Densenet-Mobilenet stacking model is developed using the weights derived from one Densenet and one Mobilenet model. The models are trained with the same input but individually. The weights are saved in a folder for further processing and are then fed to a decision tree model, which works as a meta-model, for the final prediction.

3. Inceptionv3-Xception: The Inceptionv3-Xception stacking model is developed using the weights derived from one Inceptionv3 and one Xception model. The models are trained with the same input but individually. The weights are saved in a folder for further processing and are then fed to a decision tree model, which works as a meta-model, for the final prediction.

4. Resnet50-Vgg16: The Resnet50-Vgg16 stacking model is developed using the weights derived from one Resnet50 and one Vgg16 model. The models are trained with the same input but individually. The weights are saved in a folder for further processing and are then fed to a decision tree model, which works as a meta-model, for the final prediction.

5. Stack-Six: The stack-six stacking model is developed using the weights derived from six models: Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, and Xception. The models are trained with the same input but individually. The weights are saved in a folder for further processing and are then fed to a decision tree model, which works as a meta-model, for the final prediction. A sketch of this stacking procedure is given below.
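A minimal sketch of this stacking procedure, reading the saved base-model outputs as seven-way class probabilities as the text describes; the helper names and the use of scikit-learn's decision tree are our own illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def stacked_features(base_models, images):
    # concatenate each trained base model's 7-way probability vector per image
    return np.hstack([m.predict(images) for m in base_models])

# base_models: already fine-tuned Keras classifiers, e.g., the two Inceptionv3 models
meta_model = DecisionTreeClassifier()
meta_model.fit(stacked_features(base_models, train_images), train_labels)
test_pred = meta_model.predict(stacked_features(base_models, test_images))
```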
### Evaluation Metrics
Evaluating a model's performance is necessary since it gives an idea of how close the model's predicted outputs are to the corresponding expected outputs. Evaluation metrics are used for this purpose, but they differ with the type of model: regression or classification. Regression refers to problems that involve predicting a numeric value, and such models are evaluated with error metrics. Classification refers to problems that involve predicting a discrete value, and such models are evaluated with accuracy-style metrics. Since our motive is to classify the cancerous cells, we use accuracy, f1 score, precision, and recall as our evaluation metrics [42, 43, 44, 45].
Precision: When the model predicts positive, precision specifies how many of those positive predictions are correct. Precision is used when the False Positives are costly. For skin cancer classification, if the model gives low precision, then many non-cancerous images will be detected as cancerous; a high-precision model raises far fewer such false alarms. The precision can be calculated as follows:
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{1}\]
Here TP refers to True Positive values and FP refers to False Positive values.
Recall: The metric recall complements precision. Recall is used when the false negatives (FN) are costly. In the skin cancer classification problem, if the model gives low recall, then many cancerous cells will be reported as non-cancerous; a high-recall model misses far fewer cancerous cases. The recall can be calculated as follows:
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{2}\]
F1 score: The F1 score combines precision and recall and provides an overall accuracy measurement of the model. The value of the F1 score lies between 0 and 1. If the predicted values match the expected values perfectly, the F1 score is 1, and if none of the values match, it is 0. The F1 score can be calculated as follows:
\[\text{F1 score}=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}} \tag{3}\]
Accuracy : Accuracy determines how close the predicted output is to the actual value.
\[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}} \tag{4}\]
here, TN refers to True Negative and FN refers to False Negative.
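All four metrics can be computed in a couple of calls with scikit-learn; a small sketch, assuming `y_true` holds the true class indices and `probs` the softmax outputs of a model.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_pred = probs.argmax(axis=1)            # predicted class per image
accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='weighted')  # weighted average over the seven classes
```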
## IV Results and Discussions
### Results of existing deep learning models
The models' accuracy is derived from the validation data containing 1103 images of seven classes. We have used the TensorFlow and Keras libraries for implementing the neural network models. Model training is done on Google Colab using the GPU runtime. We have evaluated the performance of seven different models, Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16, for the classification of skin cancer among seven classes: Melanocytic nevi, Melanoma, Benign keratosis, Basal cell carcinoma, Actinic keratosis, Vascular Lesions, and Dermatofibroma, using performance metrics such as precision, recall, f1-score, and accuracy. The categorical accuracies for Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16 are 90, 88, 88, 87, 82, 77, and 73 percent, respectively. The Inceptionv3 model provides the best result among all the models. The weighted averages of precision, recall, and F1-score for InceptionV3 are 91 percent, 90 percent, and 90 percent, respectively. Similarly, the weighted averages of precision, recall, and F1-score for Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16 are also evaluated, as shown in Table 1.
TABLE-1 : Results of the Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16 models
For each of the seven models, the training-validation accuracy and training-validation loss curves are represented in Figure 3. For all models, the training accuracy is higher than the validation accuracy, or the training loss is lower than the validation loss, from the first epochs. One possible explanation is the Dropout layer added to the architecture during fine-tuning, since it makes a pretrained model less prone to over-fitting. These Dropout layers disable neurons during training to reduce the complexity of the model.
In Figure 3, the confusion matrices show the number of True Positive and False Negative results predicted by each of the models.
### Results of Developed Stacking Models
Using the weights of the seven aforementioned classifiers, we develop five stacking ensemble models: Inceptionv3-Inceptionv3, Densenet-Mobilenet, Inceptionv3-Xception, Resnet50-Vgg16, and stack-six. We evaluate the models' performance with the same test dataset that is used for the seven aforementioned models. The weighted averages of precision, recall, and F1-score for Inceptionv3-Inceptionv3, Densenet-Mobilenet, Inceptionv3-Xception, Resnet50-Vgg16, and stack-six are also evaluated, as shown in Table 2.
The results show that the stacking ensemble models provide the highest accuracy of 78 percent, which is lower than the performance of the existing deep learning models. Since the stacking ensemble models are less prone to showing biased results, this can be the reason why they show poorer results and why the results vary little within the stacking models. The lowest accuracy is 70 percent and the highest is 78 percent; the difference is very small. In contrast, the single models' accuracy varies widely from one model to another: the lowest accuracy is 73 percent and the highest is 90 percent. So, the single models could have a tendency to show biased results for a particular dataset.

Table 2: Results of the inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six models.

| Models | Accuracy | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| Inceptionv3-Inceptionv3 | 0.78 | 0.79 | 0.78 | 0.78 |
| Inceptionv3-Xception | 0.78 | 0.77 | 0.78 | 0.77 |
| Densenet-Mobilenet | 0.75 | 0.74 | 0.75 | 0.74 |
| Resnet50-Vgg16 | 0.70 | 0.72 | 0.70 | 0.71 |
| stack-six | 0.78 | 0.80 | 0.78 | 0.77 |
We also stack six models together and name the result the stack-six model. The stack-six model gives 78 percent accuracy and does not improve much even though we increased the number of stacked weights. Therefore, it is clear that the stacking models' performance saturates within a particular accuracy range.
## V Conclusion
Since the death rate due to skin cancer increases day by day, it is necessary to address this global public health issue. The outstanding performance of deep convolutional models on image datasets can be utilized for skin cancer detection. However, with deep learning models, a different problem requires a different process to solve it. Previously, several investigations have been done on classifying skin cancer. To the best of our knowledge, none of that work has shown a comparative analysis of multiple skin cancer classes using deep convolutional neural networks. Our work will help the medical sector distinguish skin cancer from different skin lesions accurately. Seven classes of skin lesions have been classified using Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, Xception, and CNN. Finally, the performance of the models is evaluated using evaluation metrics such as precision, recall, f1-score, and accuracy. Among all the models, Inceptionv3 provides the best result, which is 90 percent accuracy. Furthermore, we have developed five stacking ensemble models, inceptionv3-inceptionv3, Densenet-Mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six, using the weights of the aforementioned models to observe how the stacked models perform on the same dataset. We have found that the stacking models give a highest accuracy of 78 percent, which is lower than the performance of the existing models. Since single models have a tendency to show biased results, that can be one reason why their accuracy varies widely from one model to another. Therefore, our experiment shows that a stacking model may give a lower accuracy than the existing models precisely because it does not show biased results. Our work will give a clear observation of stacking ensemble models to future researchers for further investigations on the skin cancer dataset as well as on ensemble learning models.
|
2306.06792 | A Neural Network Implementation for Free Energy Principle | The free energy principle (FEP), as an encompassing framework and a unified
brain theory, has been widely applied to account for various problems in fields
such as cognitive science, neuroscience, social interaction, and hermeneutics.
As a computational model deeply rooted in math and statistics, FEP posits an
optimization problem based on variational Bayes, which is solved either by
dynamic programming or expectation maximization in practice. However, there
seems to be a bottleneck in extending the FEP to machine learning and
implementing such models with neural networks. This paper gives a preliminary
attempt at bridging FEP and machine learning, via a classical neural network
model, the Helmholtz machine. As a variational machine learning model, the
Helmholtz machine is optimized by minimizing its free energy, the same
objective as FEP. Although the Helmholtz machine is not temporal, it gives an
ideal parallel to the vanilla FEP and the hierarchical model of the brain,
under which the active inference and predictive coding could be formulated
coherently. Besides a detailed theoretical discussion, the paper also presents
a preliminary experiment to validate the hypothesis. By fine-tuning the trained
neural network through active inference, the model performance is promoted to
accuracy above 99\%. In the meantime, the data distribution is continuously
deformed to a salience that conforms to the model representation, as a result
of active sampling. | Jingwei Liu | 2023-06-11T22:14:21Z | http://arxiv.org/abs/2306.06792v1 | # A Neural Network Implementation for Free Energy Principle+
###### Abstract
The free energy principle (FEP), as an encompassing framework and a unified brain theory, has been widely applied to account for various problems in fields such as cognitive science, neuroscience, social interaction, and hermeneutics. As a computational model deeply rooted in math and statistics, FEP posits an optimization problem based on variational Bayes, which is solved either by dynamic programming or expectation maximization in practice. However, there seems to be a bottleneck in extending the FEP to machine learning and implementing such models with neural networks. This paper gives a preliminary attempt at bridging FEP and machine learning, via a classical neural network model, the Helmholtz machine. As a variational machine learning model, the Helmholtz machine is optimized by minimizing its free energy, the same objective as FEP. Although the Helmholtz machine is not temporal, it gives an ideal parallel to the vanilla FEP and the hierarchical model of the brain, under which the active inference and predictive coding could be formulated coherently. Besides a detailed theoretical discussion, the paper also presents a preliminary experiment to validate the hypothesis. By fine-tuning the trained neural network through active inference, the model performance is promoted to accuracy above 99%. In the meantime, the data distribution is continuously deformed to a salience that conforms to the model representation, as a result of active sampling.
Keywords:Helmholtz machine Free energy Principle Active inference Hierarchical model
## 1 Introduction
The Free Energy Principle (FEP), as an encompassing framework and a unified brain theory [10], has been widely applied to account for various phenomena in many cognition- and humanities-related fields such as psychology[3], music[21], linguistic communication[15], cultural niche construction[5], embodiment[1], autopoiesis[20], and emotion recognition[7]. Meanwhile, as a computational model deeply rooted in math and statistics, FEP posits an optimization problem based on variational Bayesian inference, which is solved either by dynamic programming[16] or expectation maximization[9]. However, there seems to be a bottleneck in extending FEP to the fields of machine learning, which have been the hotspots
for solving statistical and engineering problems in recent years. A few works bridge active inference and reinforcement learning[26], and a survey exists on seeking common ground between active inference and deep learning[23]. However, as a variational method, FEP cannot be equated with, or trivially generalized to, reinforcement learning, a reward-based training method. This work gives a preliminary attempt at bridging FEP and machine learning via a classical neural network model, the Helmholtz machine. There are three features of using the Helmholtz machine to study FEP under neural network settings:
1. The Helmholtz machine uses variational free energy as its objective, which is in accord with the FEP. In other words, we maintain the variational essence of FEP by using a corresponding variational machine-learning method, which minimizes the free energy.
2. If reinforcement learning is considered as capturing the qualities of expected free energy, which involves planning and future outcomes of sequential events, then the Helmholtz machine is a perfect parallel for the free energy. Although the Helmholtz machine is not temporal, in many aspects, it's an ideal prototype for implementing FEP and active inference in a neural network fashion, which will be argued extensively in this paper.
3. The Helmholtz machine also presents a satisfactory simulation for the hierarchical model of the brain. The forward and backward connections and hierarchical message passing are inherent in the implementation of the Helmholtz machine.
This paper includes two main sections. Section 2 gives a theoretical account of the interrelationship between the free energy principle and the Helmholtz machine, covering the mathematical formulation, model training and parameter updating, and biological interpretability and plausibility. It provides a theoretical basis for generalizing FEP via the Helmholtz machine to broader model-fitting schemes in neural networks for machine learning. Section 3 presents a preliminary experiment we designed to test the model. The model performs well, as the theoretical analysis indicates. In training stage I, the Helmholtz machine achieves an accuracy of 0.94 under traditional data fitting; in training stage II, we apply the active inference of FEP to actively sample the input sensations as salience. After a few rounds of fine-tuning, the model accuracy is boosted above 0.99, giving high generation accuracy while keeping generation diversity at a satisfactory level.
## 2 Free Energy Principle and Helmholtz Machine
### Variational Inference for Statistics
Here we give a brief overview of variational inference (VI)[2] in Bayesian statistics and show how this concept is linked to the free energy principle and Helmholtz
machine. To maintain the notational consistency, we use the standard notations in [6].
The goal is to determine the posterior \(P(\alpha|d)\), where \(d\) denotes the observable data, and \(\alpha\) is variously referred to as hidden causes, latent variables, or hidden states. According to Bayes' rule,
\[P(\alpha|d)=\frac{P(\alpha,d)}{P(d)}=\frac{P(\alpha,d)}{\int_{\alpha}P(\alpha,d )d\alpha} \tag{1}\]
The integral over underlying causes is usually intractable (either unavailable in closed form or requires exponential time to compute), so the true posterior \(P(\alpha|d)\) cannot be computed directly.
Remark 1: We give a separate account of the conditioning data \(d\) in \(P(\alpha|d)\). In classical variational inference[2], \(d\) represents the entire dataset, thus the latent distribution \(P(\alpha|d)\) is independent of any single data point, and all model properties resort to latent local and global variables. Therefore, an approximate posterior \(Q(\alpha)\) is used to approximate the true posterior \(P(\alpha|d)\), where \(Q\) is not conditioned on the observations. This formulation is widely used in statistical variational inference and in all resources I have read about the free energy principle, such as [11][13][25].
However, in many other settings, we frequently see that another form of approximate posterior \(Q(\alpha|d)\) is used, the most prominent case being the VAE [19]. In the Helmholtz machine, this conditional \(Q(\alpha|d)\) is also used as the approximate posterior (it is not explicitly given in [6], but it is clearly stated in [18]). The main reason is that the data \(d\) is treated point-wise in these models, thus the latent cause distributions conditioned on individual data points vary from each other. However, as the distribution \(Q(\alpha|d)\) is parameterized by \(\phi\), which is amortized across all data points, the approximate posterior is still tractable and works in a similar way as in the unconditioned case.

As the Helmholtz machine is the model we adopt, we will use the conditioned approximate posterior in the following discussions. In spirit, it differs little from the vanilla version as the derivation unfolds. To recap, we use an approximate posterior \(Q_{\phi}(\alpha|d)\) to approximate the true posterior \(P(\alpha|d)\), where \(Q_{\phi}(\alpha|d)\) belongs to a parameterized family \(\mathscr{Q}_{\phi}\) of probability densities. Our goal is to find the member of this family that minimizes the Kullback-Leibler (KL) divergence to the exact posterior,
\[Q_{\phi}^{*}(\alpha|d)=\operatorname*{arg\,min}_{Q_{\phi}(\alpha|d)\in \mathscr{Q}_{\phi}}D_{KL}[Q_{\phi}(\alpha|d)||P(\alpha|d)] \tag{2}\]
The variational method kicks in when we decompose the true posterior in the KL-divergence term,
\[D_{KL}[Q_{\phi}(\alpha|d)||P(\alpha|d)] =\mathbb{E}_{Q}[\log Q_{\phi}(\alpha|d)]-\mathbb{E}_{Q}[\log P( \alpha|d)] \tag{3}\] \[=\mathbb{E}_{Q}[\log Q_{\phi}(\alpha|d)]-\mathbb{E}_{Q}[\log P( \alpha,d)]+\log P(d) \tag{4}\]
The distribution \(P(\alpha,d)\) is called the generative model, as it denotes the joint distribution of the latent and observable variables. The generative model is usually assumed to be known in VI and FEP, as the way the environment generates observations from causes is taken as innate.
Now the problem falls on the third term in Equation (4), what we call the log-evidence, or negative surprisal, \(\log P(d)\). This term is again intractable, so to circumvent it, we use the nonnegativity of KL-divergence, rewrite (4) as
\[\log P(d)\geq\mathbb{E}_{Q}[\log P(\alpha,d)]-\mathbb{E}_{Q}[\log Q_{\phi}( \alpha|d)] \tag{5}\]
where the right-hand side term \(\mathbb{E}_{Q}[\log P(\alpha,d)]-\mathbb{E}_{Q}[\log Q_{\phi}(\alpha|d)]\) is called the evidence lower bound (ELBO). By maximizing ELBO we implicitly maximize the log-evidence \(\log P(d)\). The free energy is given by the negative ELBO,
\[F=\mathbb{E}_{Q}[\log Q_{\phi}(\alpha|d)]-\mathbb{E}_{Q}[\log P(\alpha,d)]=D_ {KL}[Q_{\phi}(\alpha|d)||P(\alpha,d)] \tag{6}\]
which is the ultimate minimization goal in VI, FEP, and the Helmholtz machine.
Remark 2: The minimization term \(F\) is seen as a compromise in VI. Since we cannot minimize \(D_{KL}[Q_{\phi}(\alpha|d)||P(\alpha|d)]\) directly, we find some cheap approximation that we can compute. However, I claim this is not the case for generative models. In generative models, which use generation as an organic component of model construction, the generative density \(P(\alpha,d)\) is a necessity rather than the posterior \(P(\alpha|d)\), since we are not only finding the best set of parameters in a density family that approximates a given distribution, but also using generated samples to regulate the recognition process (Helmholtz machine, VAE) or actively sampling the generations to improve accuracy (FEP).
In Helmholtz machine, instead of pre-defining a generative model \(P(\alpha,d)\), in a more realistic way (since the generative density is usually unknown in real-life problems), we parameterize this distribution by \(\theta\), and construct the free energy minimization goal
\[F=D_{KL}[Q_{\phi}(\alpha|d)||P_{\theta}(\alpha,d)] \tag{7}\]
By jointly optimizing the two sets of parameters \(\phi\) and \(\theta\) in an EM (expectation-maximization) manner, we minimize the free energy of the system.
In FEP, the free energy is reformulated as the two equations,
\[F =D_{KL}[Q_{\phi}(\alpha|d)||P(\alpha|d)]-\log P(d) \tag{8}\] \[=D_{KL}[Q_{\phi}(\alpha|d)||P(\alpha)]-\mathbb{E}_{Q}[\log P(d| \alpha)] \tag{9}\]
Equation (8) is interpreted from its first term as optimizing the recognition density of the brain to approximate the true distribution of the world, which shares the same goal as VI, minimizing \(D_{KL}[Q_{\phi}(\alpha|d)||P(\alpha|d)]\); Equation (9) is interpreted more inclined to its second term, \(\mathbb{E}_{Q}[\log P(d|\alpha)]\), as a way to actively sample the sensory inputs that conform to the current representations, thus improving accuracy (please refer to [10] for more details). In this work, we will integrate the classical Helmholtz machine which is trained under minimization of
variational free energy with the active inference in FEP. Besides the parameter optimization, the model also performs active inference by a selective sampling of the environment, which entails a modulation of attention reflected in the distribution of evidence.
### Neural Network for Machine learning
This work explores a way of implementing FEP using neural networks in machine learning. Traditionally, problems formulated under FEP are solved either by DEM (dynamic expectation-maximization)[12] or by MDP (Markov decision process)[25]. Although the Helmholtz machine is not a temporal model, it uses the kind of neurons specified by modern neural networks and updates its parameters via gradient descent. Given current trends, we believe it is imperative to extend FEP to the machine learning field and to solve problems using neural network architectures.
The structure of the Helmholtz machine is shown in Fig. 1. It is a layered hierarchical model composed of stochastic binary neurons, connected by bottom-up recognition weights \(\phi\) and top-down generative weights \(\theta\). In this work, we made two major modifications to the original model in [6]:
1. The activity of the stochastic binary neuron is changed from \(\{0,1\}\) to \(\{-1,1\}\).
2. The bias is added when computing the linear activation of each neuron.

Figure 1: The Helmholtz Machine. The Helmholtz Machine is a fully connected feedback neural network with hierarchical architecture. The solid lines show the bottom-up recognition process parameterized by \(\phi\), and the dashed lines show the top-down generation process parameterized by \(\theta\). The activity of each neuron is computed from the activities of all neurons in its previous layer. The activation functions are given in the text.
The first modification is done due to the derivative form of \(F\) with respect to its parameters, for example, \(\frac{\partial F}{\partial\theta_{k,n}^{m+1,m}}=-s_{k}^{m+1}(s_{n}^{m}-p_{n}^{m})\). The neuron activity from the previous layer is used as a multiplier on the derivatives, which means if \(s_{k}^{m+1}=0\), the gradient equals zero, thus no updating will be performed for this parameter \(\theta_{k,n}^{m+1,m}\) by gradient descent. Therefore, the parameter updating will be half paralyzed when neuron activities alternate between \(0\) and \(1\). To make the learning more efficient, we replace the activity value \(0\) with \(-1\), thus zero gradients won't occur unless \(p_{n}^{m}\) approaches \(s_{n}^{m}\).
The Helmholtz machine is fully connected. The activation of each neuron is a linear combination of all the neurons from its previous layer plus bias,
\[a_{n}^{m}(\theta,\mathbf{s}^{m+1})=\sum_{k}\theta_{k,n}^{m+1,m}s_{k}^{m+1}+b_{ n}^{m+1,m} \tag{10}\]
where the activation of the \(n\)-th neuron in layer \(m\) is computed by a weighted sum of all activities \(s_{k}^{m+1}\) in layer \(m+1\) weighted by its corresponding parameter \(\theta_{k,n}^{m+1,m}\), plus the bias \(b_{n}^{m+1,m}\) for this neuron. Here the previous layer is the upper layer \(m+1\), which corresponds to the top-down generative process indicated by dashed lines in Fig. 1. We added the bias term to the original formulation to expand the parameter set, thus endowing more freedom for the neural network to fit the data.
The probability is calculated by the sigmoid function \(\sigma(x)=1/(1+e^{-x})\) of activation, which nonlinearly compresses the range into \((0,1)\).
\[p_{n}^{m}(\theta,\mathbf{s}^{m+1})=\sigma(a_{n}^{m})=\sigma(\sum_{k}\theta_{k, n}^{m+1,m}s_{k}^{m+1}+b_{n}^{m+1,m}) \tag{11}\]
Similarly, the activation probability of a neuron in the bottom-up recognition process is computed as
\[q_{n}^{m}(\phi,\mathbf{s}^{m-1})=\sigma(\sum_{k}\phi_{k,n}^{m-1,m}s_{k}^{m-1}+ b_{n}^{m-1,m}) \tag{12}\]
where the previous layer is the lower layer \(m-1\), and the probability is denoted with \(q\). The notations are consistent with our notations in Equation (7). The recognition density \(Q_{\phi}(\alpha|d)\) and the generative density \(P_{\theta}(\alpha,d)\) are computed by the product of the probabilities of all neurons, namely
\[Q_{\phi}(\alpha|d) = \prod_{m>1}\prod_{n}[q_{n}^{m}(\phi,\mathbf{s}^{m-1})]^{\frac{1+ s_{n}^{m}}{2}}[1-q_{n}^{m}(\phi,\mathbf{s}^{m-1})]^{\frac{1-s_{n}^{m}}{2}} \tag{13}\] \[P_{\theta}(\alpha,d) = \prod_{m\geq 1}\prod_{n}[p_{n}^{m}(\theta,\mathbf{s}^{m+1})]^{ \frac{1+s_{n}^{m}}{2}}[1-p_{n}^{m}(\theta,\mathbf{s}^{m+1})]^{\frac{1-s_{n}^{m }}{2}} \tag{14}\]
Each neuron gives a Bernoulli distribution as the activity of \(s_{n}^{m}\) takes value \(-1\) or \(1\).
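As a concrete illustration, here is a minimal NumPy sketch of one top-down layer following Eqs. (10)-(11), with stochastic binary neurons firing in \(\{-1,1\}\); the layer sizes are illustrative assumptions, not the experiment's exact configuration.

```
# One generative layer of the Helmholtz machine: a weighted sum plus
# bias gives the activation, a sigmoid gives the firing probability,
# and each neuron samples an activity in {-1, 1}.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sample_layer(s_prev, theta, b):
    """s_prev: previous-layer activities in {-1, 1};
    theta: weights (prev_size x cur_size); b: bias (cur_size,)."""
    p = sigmoid(s_prev @ theta + b)                    # Eq. (11)
    s = np.where(rng.random(p.shape) < p, 1.0, -1.0)   # Bernoulli in {-1, 1}
    return s, p

s3 = np.where(rng.random(5) < 0.5, 1.0, -1.0)    # e.g. a 5-neuron upper layer
theta_32 = rng.normal(scale=0.1, size=(5, 8))    # generative weights to 8 neurons
s2, p2 = sample_layer(s3, theta_32, np.zeros(8))
```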
As the explicit form of the free energy \(F=D_{KL}[Q_{\phi}(\alpha|d)||P_{\theta}(\alpha,d)]\) is given by equations (13) (14), now we should consider how to compute the derivatives of \(F_{\phi,\theta}\) thus minimizing this term by gradient descent. If we refer back to the structure of the Helmholtz machine in Fig. 1, we can see the neurons are connected recurrently. Besides that, the change of activities in one layer will affect the behavior of all neurons in higher layers, which makes backpropagation extremely difficult. To tackle this problem, a customized algorithm is designed for training this system, which is the wake-sleep algorithm [18].
The wake-sleep algorithm disentangles the recognition parameters \(\phi\) and the generative parameters \(\theta\) by separating the training into two phases. In the wake phase, a bottom-up recognition pass is performed with the current weights \(\phi\) to get an instance of complete neuron assignments \(\alpha\), by sampling activities from \(\{-1,1\}\) based on the probability \(q_{n}^{m}\) of each neuron. We then update the generative parameters \(\theta\) based on the complete neuron activities encoded in \(\alpha\), which makes the target values available for training the hidden units locally, thus disentangling them from the multi-layer coupling. In the sleep phase, the recognition weights \(\phi\) are turned off and a random instance is generated based on the current top-down weights \(\theta\). Conversely, we update \(\phi\) according to the neuron assignments of this generated instance while keeping the generative weights fixed. By iterating the wake and sleep phases, updating \(\phi\) and \(\theta\) alternately, the objective function \(F_{\phi,\theta}\) is minimized in an EM manner.
Now let's compute the derivatives of \(F_{\phi,\theta}\) with respect to the generative weights. We can write out Equation (7) as
\[F=\mathbb{E}_{Q_{\phi}}[\log Q_{\phi}(\alpha|d)]-\mathbb{E}_{Q_{\phi}}[\log P_ {\theta}(\alpha,d)] \tag{15}\]
Since \(\phi\) and \(\theta\) are decoupled, the first term in Equation (15) is constant when computing \(\frac{\partial F}{\partial\theta}\), thus we write out the second term as
\[\mathbb{E}_{Q_{\phi}}[\log P_{\theta}(\alpha,d)]=\sum_{\alpha}Q_{\phi}(\alpha |d)\log P_{\theta}(\alpha,d) \tag{16}\]
As the latent cause \(\alpha\) is generated by random sampling as a single instance, the summation over all possible hidden causes in Equation (16) is dropped along with the weighting term \(Q_{\phi}(\alpha|d)\), and the objective is further simplified as
\[\log P_{\theta}(\alpha,d)=\sum_{m\geq 1}\sum_{n}(\frac{1+s_{n}^{m}}{2}\log[p_{ n}^{m}(\theta,\mathbf{s}^{m+1})]+\frac{1-s_{n}^{m}}{2}\log[1-p_{n}^{m}(\theta, \mathbf{s}^{m+1})]) \tag{17}\]
If we plug in Equation (11) and calculate its derivatives with respect to \(\theta\) and \(b\), we easily derive the _local delta rule_,
\[\frac{\partial F}{\partial\theta_{k,n}^{m+1,m}} = -s_{k}^{m+1}(s_{n}^{m}-p_{n}^{m}) \tag{18}\] \[\frac{\partial F}{\partial b_{n}^{m+1,m}} = -(s_{n}^{m}-p_{n}^{m}) \tag{19}\]
To update the recognition weights \(\phi\) in the sleep phase, we exchange the relative positions of \(P\) and \(Q\), using \(\widetilde{F}=\mathbb{E}_{P_{\theta}}[\log P_{\theta}(\alpha,d)]-\mathbb{E}_{P_{\theta}}[\log Q_{\phi}(\alpha|d)]\) as a modified objective function; then similar derivations follow, giving the same local delta rules for the recognition weights. Finally, all the parameters are updated by gradient descent \(x=x-\gamma\frac{\partial F}{\partial x}\), where \(\gamma\) is the learning step size.
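To make the procedure concrete, below is a minimal NumPy sketch of one wake-sleep iteration using the local delta rules of Eqs. (18)-(19). It reuses `sigmoid`, `sample_layer`, and `rng` from the sketch above; the bookkeeping names `rec`, `gen`, and `top_bias` (per-layer weight/bias pairs and the top-layer generative bias) are illustrative assumptions, not the authors' exact code.

```
def wake_sleep_step(d, rec, gen, top_bias, lr=0.01):
    # Wake phase: a bottom-up recognition pass fixes all neuron
    # activities; states[k] holds the sampled activities of layer k+1.
    states = [d]
    for phi, b in rec:                              # rec[k]: layer k+1 -> k+2
        s, _ = sample_layer(states[-1], phi, b)
        states.append(s)
    for k, (theta, b) in enumerate(gen):            # gen[k]: layer k+2 -> k+1
        s_up, s_dn = states[k + 1], states[k]
        p = sigmoid(s_up @ theta + b)
        theta += lr * np.outer(s_up, s_dn - p)      # Eq. (18) as a descent step
        b += lr * (s_dn - p)                        # Eq. (19)

    # Sleep phase: dream a random instance top-down, then update the
    # recognition weights by the mirrored local delta rule.
    top = np.where(rng.random(top_bias.shape) < sigmoid(top_bias), 1.0, -1.0)
    dream = [top]
    for theta, b in reversed(gen):
        s, _ = sample_layer(dream[0], theta, b)
        dream.insert(0, s)                          # dream[0] is the data layer
    for k, (phi, b) in enumerate(rec):
        q = sigmoid(dream[k] @ phi + b)
        phi += lr * np.outer(dream[k], dream[k + 1] - q)
        b += lr * (dream[k + 1] - q)
```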
Remark 3: From the derivation of the local delta rule we can tell that, by decoupling the forward and backward passes and updating based on a single sampled instance, the objective function is simplified so heavily that it can hardly still be considered variational free energy. However, this working algorithm is computationally cheap and efficient, and it approximates the true objective in an ensemble sense: the system minimizes the variational free energy with sufficient accuracy after enough rounds of iterations (probably converging to a local minimum instead of the global minimum).
Remark 4: The local delta rule is the simplest updating rule we could derive from Equation (15). If we keep the weighting term \(Q_{\phi}(\alpha|d)\) in Equation (16), the local delta rule will also be weighted by this term, which gives what we call the _weighted local delta rule_. In [6], another rule is given by replacing the stochastic neuron activities with their holistic mean, which is the calculated probabilities of neuron activations. In our preliminary experiments, we started with the classical local delta rules. The comparison and evaluation of all updating rules will be reserved for future work.
Here we present a brief account of the two-phase training mechanism of the Helmholtz machine in comparison to the VAE. The Helmholtz machine is widely acknowledged as the predecessor of VAE and both neural networks resort to variational machine learning. Instead of alternative two-phase training, VAE uses a single objective function that jointly optimizes the recognition and generative processes. The advantage of the wake-sleep algorithm, for our application purposes, mainly lies in three areas:
1. The decoupling of recognition and generation leaves ample room for creative manipulation and subtle mediation between these two processes, thus opening the ground for studying computational creativity and exploring artistic usage of the model (future directions).
2. The idea of analysis-by-synthesis and inverting the hierarchical model in real-time make active inference and real-time modifications possible. One of the biggest differences between the Helmholtz machine and the VAE is that the generation in the Helmholtz machine is unconstrained. It's not subject to the maximum likelihood and any generated sample could be used to update the model parameters. In other words, the Helmholtz machine presents more flexibility in the training process, a desired feature to serve our purposes.
3. The hierarchical structure and forward-backward connections in the Helmholtz machine are a good parallel for the cortical hierarchies in our brain. This point will be illustrated in more detail in the following arguments.
### Hierarchical Model for the Brain
It is contended extensively in FEP-related works that the cortical responses in our brain follow a hierarchical model with forward and backward connections, where the forward driving connections convey prediction errors from a lower area to a higher area, and nonlinear backward connections construct predictions [8][9][12]. The brain inferring the causes of its sensory inputs and generating corresponding sensations is also a prevailing idea in FEP. As the author does not have a real background in neurobiology, the statements in this subsection cannot be presented with great precision or detail. But in general, the idea of analysis-by-synthesis is well demonstrated in the Helmholtz machine, and Hinton pointed out that one of the motivations for developing the Helmholtz machine was inspired by Friston's cortical hierarchy theory [17]. In [12], Friston also cited the paper on the Helmholtz machine [6]. Given this cross-referencing, we believe there is enough evidence to link the brain architecture to the Helmholtz machine, and to study brain functions such as predictive coding, message passing, and predictive processing via the Helmholtz machine, at least at a reasonable metaphorical level. Two preliminary comments can be made at this stage from the working mechanism of the Helmholtz machine, while validation and more systematic discussion can only be realized after the numerical experiments are carried out and a detailed examination is applied in future work.
1. In predictive processing [4], the processing of sensory inputs goes both ways within the hierarchical model. Besides the bottom-up passive pattern recognition of the stimuli, the brain also actively constructs the predictions via top-down connections. The backward connections regulate the precision of prediction errors, allowing the system to cope with noisy and ambiguous sensory inputs. In other words, the separate treatment of forward and backward connections, which correspond to the wake and sleep phases in the Helmholtz machine, is imperative for studying PP (predictive processing) and PC (predictive coding) related brain functions. Besides, due to the functional asymmetry of forward and backward connections in the brain[12], where backward connections are more modulatory or nonlinear in their effects on neuronal responses, it is also advantageous to decouple the two processes and treat them separately, which allows more flexibility.
2. In hierarchical message passing, the synapses are characterized by their local efficacy. It means that the prediction errors are resolved by each neuron locally, which only receives responses from neurons at its current and preceding levels. This local efficacy well corresponds to the local delta updating rules derived in the Helmholtz machine. This condition is important because it permits a biologically plausible implementation, where the connections driving inference run only between neighboring levels [8].
## 3 Experiment
The preliminary experiment uses a 4-layer Helmholtz machine, with \(10,8,5,3\) neurons in the respective layers (see Fig. 2). This section presents
a detailed experimental setup with the data design and two-stage training, as well as the experimental results, which preserve adequate generative diversity while boosting the generation accuracy above \(0.99\) (please see the code in my GitHub).
### Training Stage I: Experimental Setup
Figure 2: Helmholtz Machine with Active Inference. The experiment is implemented with a 4-layer Helmholtz machine, with \(10,8,5,3\) neurons in each layer ascendingly. In the sleep phase, activities in layer 4 are generated by generative bias from unity. The training is implemented in two stages. In stage I, as the single lines connecting the training set and the model on the left side indicate, the inputs are from the well-formed region and the generations are unconstrained, which falls anywhere in the entire space; in stage II, the parameters are fine-tuned by restricting the generations within the well-formed region while actively deforming the input distribution based on the model generations (double lines on the right side), resulting in the selected region that conforms to the latent model representations.
The data design is inspired by the phenotypic boundary discussed in [14]. For a phenotype to exist it must possess defining characteristics or traits. These traits essentially limit the agent to a bounded region in the space of all states it could be in. Once outside these bounds, it ceases to possess that trait (cf, a fish out of water). This bounded region corresponds to the well-formed region in Fig. 2 and the entire space represents all possible states of the world.
To construct a valid subset of all 1024 (\(2^{10}\)) possible combinations of the binary-valued first-layer data neurons, we devise three well-formedness rules inspired by musical gestalt. Metaphorically, we treat the 10 neurons as a 10-note sequence representing a rhythmic pattern: 0 or \(-1\) denotes a rest (we will use 0 for discussion convenience, but in the numerical implementation it is always replaced by \(-1\)) and 1 denotes a percussive attack. The rules then determine what counts as a valid rhythmic pattern:
**Rule 1**: The sequence always starts with 1.
**Rule 2**: Forbid single event that's strongly isolated from other groups (00100), and also avoid isolated event at the beginning (100) and the end (001) of the sequence.
**Rule 3**: Forbid the extended break (0000).
We will not explain the design logic of these rules further, but refer interested readers to [22] for a better understanding of musical grouping structures.
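Since the three rules are fully specified, they can be expressed directly as a small filter; the following runnable sketch enumerates the well-formed set (the function name is ours, for illustration only).

```
# Enumerate all 2**10 binary patterns and keep only those satisfying
# Rules 1-3 (1 = attack, 0 = rest; 0 becomes -1 in the numerical model).
from itertools import product

def well_formed(bits):
    s = ''.join(map(str, bits))
    if not s.startswith('1'):                          # Rule 1
        return False
    if '00100' in s or s.startswith('100') or s.endswith('001'):
        return False                                   # Rule 2: no isolated events
    if '0000' in s:                                    # Rule 3: no extended break
        return False
    return True

valid = [p for p in product((0, 1), repeat=10) if well_formed(p)]
print(len(valid), "well-formed patterns out of 1024")
```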
The advantage of rule-based generation is that we have a metric to assess the goodness of the generated samples by simply checking them against the rules, so model performance can be measured with certainty. In training stage I, we use this generated well-formed set as inputs and update the recognition weights with arbitrarily generated instances (see the single lines connecting the data and machine on the left side in Fig. 2). As explained in [18], the model aims to find the economical latent representations that prescribe the minimum description length. After sufficient iterations, the generation accuracy reached \(0.94\pm 0.01\) and could not be improved further by repeating the current training.
### Training Stage II: Active Inference
In training stage II, we fine-tune the model trained in stage I by active inference, which boosts the generation accuracy to above 0.99 with only 200 rounds of iterations. In this stage, the generated instances in the sleep phase are filtered by the well-formedness rules, so only valid generations within the phenotypic bounds are accepted to train the recognition weights. In the meantime, the valid generations are retained to actively modify the distribution of the input set: the more often an instance is generated, the more salient it becomes in the evidence distribution.
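Below is a minimal sketch of this stage-II loop. It reuses `sample_layer`, `sigmoid`, `rng`, and `well_formed` from the earlier sketches, along with the assumed `rec`/`gen`/`top_bias` bookkeeping; the acceptance test and salience counter mirror the description above, while the exact details are illustrative.

```
from collections import Counter

# inputs: the well-formed set in {-1, 1} form; salience starts uniform.
inputs = [np.where(np.array(p) > 0, 1.0, -1.0) for p in valid]
salience = Counter(tuple(x) for x in inputs)

for step in range(200):
    # Sleep phase: dream a pattern top-down from the generative model.
    top = np.where(rng.random(top_bias.shape) < sigmoid(top_bias), 1.0, -1.0)
    layers = [top]
    for theta, b in reversed(gen):
        s, _ = sample_layer(layers[0], theta, b)
        layers.insert(0, s)
    bits = [1 if x > 0 else 0 for x in layers[0]]
    if well_formed(bits):                          # phenotypic filter
        for k, (phi, b) in enumerate(rec):         # accept: update phi locally
            q = sigmoid(layers[k] @ phi + b)
            phi += 0.01 * np.outer(layers[k], layers[k + 1] - q)
            b += 0.01 * (layers[k + 1] - q)
        salience[tuple(layers[0])] += 1            # deform the evidence distribution
```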
This distribution modification can be viewed either as salience[24], in a way similar to executing eye movements to sample the sensations that conform to the agent's expectations, or as niche construction[27], which renders real modifications to the environment, such as a "desire path". Either way, the data
distribution changes due to active inference, thus the given input data (data in the well-formed region) becomes actively sampled data (data in the selected region) that reflects the current representations in the "brain" (see the double lines on the right side connecting the data and machine in Fig. 2).
The input data distribution is described in Fig. 3. The FEP-based active training continuously deforms the uniform, equal-probability distribution of the initial dataset (in blue) into the actively sampled distribution (in orange) that fits the internal representation and capacity of the machine. After this second-stage fine-tuning, the Helmholtz machine is able to generate almost 100% accurate samples, focused on a narrower range within all possibilities of the well-formed set, while keeping generative diversity at a satisfactory level.
|
2306.16581 | Does Saliency-Based Training bring Robustness for Deep Neural Networks
in Image Classification? | Deep Neural Networks are powerful tools to understand complex patterns and
making decisions. However, their black-box nature impedes a complete
understanding of their inner workings. While online saliency-guided training
methods try to highlight the prominent features in the model's output to
alleviate this problem, it is still ambiguous if the visually explainable
features align with robustness of the model against adversarial examples. In
this paper, we investigate the saliency trained model's vulnerability to
adversarial examples methods. Models are trained using an online
saliency-guided training method and evaluated against popular algorithms of
adversarial examples. We quantify the robustness and conclude that despite the
well-explained visualizations in the model's output, the salient models suffer
from the lower performance against adversarial examples attacks. | Ali Karkehabadi | 2023-06-28T22:20:19Z | http://arxiv.org/abs/2306.16581v1 | # Does Saliency-Based Training bring Robustness for Deep Neural Networks in Image Classification?
###### Abstract
Deep Neural Networks are powerful tools for understanding complex patterns and making decisions. However, their black-box nature impedes a complete understanding of their inner workings. While online saliency-guided training methods try to highlight the prominent features in the model's output to alleviate this problem, it remains unclear whether the visually explainable features align with the model's robustness against adversarial examples. In this paper, we investigate saliency-trained models' vulnerability to adversarial-example methods. Models are trained using an online saliency-guided training method and evaluated against popular adversarial-example algorithms. We quantify the robustness and conclude that, despite the well-explained visualizations in the model's output, the salient models suffer from lower performance against adversarial-example attacks.
## 1 Introduction
### Saliency Guided Training
Deep Neural Networks have gained immense attention from researchers in various complicated applications LeCun et al. (2015); Farabet et al. (2012); Mikolov et al. (2010); Mittal and Hasija (2020); Brants et al. (2007). However, their black-box nature hampers their application in critical domains. To this end, providing explanations of the models' inner working process has been investigated by researchers Khakzar et al. (2022). Some works investigate post-hoc explanations of the model, either by evaluating backpropagated gradients Selvaraju et al. (2017); Omeiza et al. (2019) or by evaluating the model's performance in the presence of a perturbation occurring in an area of the input Fong et al. (2019); Fong and Vedaldi (2017); Samek et al. (2016). Some works try to shallow down the Deep Neural Network architecture to reduce the ambiguity of the model Frosst and Hinton (2017); Wu et al. (2018). Others try to include saliency in the training process. Ross et al. (2017) use annotations on the irrelevant parts of the input and penalize the model for negative gradients, teaching the model to learn salient features. The authors add a regularization term that jointly optimizes accuracy and the penalization of negative gradients. Ghaeini et al. (2019) extend the work by involving intermediate layers, too. They teach the model to learn human-understandable features by providing ground-truth masks for the positive gradients as explanations. The method trains the model with training data, labels, and ground-truth explanation masks. Ismail et al. (2021) improve the model's salient features by removing input pixels related to low gradient values during training. Input areas with lower gradient impact are masked with random values. The authors add Kullback-Leibler (KL) divergence as a similarity metric in the loss function. The model is forced to output a similar distribution for both the original image and the masked one. The strategy helps the model focus on the salient features, which are considered more interpretable, and ignore low-gradient regions.
### Adversarial Examples
Although Deep Neural Networks have achieved state-of-the-art performance in different domains, they can be vulnerable to small, hardly perceptible manipulations of the input. Szegedy et al. (2013) revealed that nuisance perturbations in the input images cause a misclassification by the model. Since this early study, others have developed different approaches that add a small perturbation to the input to fool the model. Goodfellow et al. (2014) introduced the Fast Gradient Sign Method (FGSM). The method adds a small perturbation to all the input image pixels in the direction of the gradient. This simple but effective method became the basic building block for more sophisticated attacks. Kurakin et al. (2018) and Madry et al. (2017) developed the algorithm into an iterative approach and introduced the Basic Iterative Method (BIM) and Projected Gradient Descent (PGD), respectively. These approaches put a constraint on the perturbation to ensure that it is small enough to be imperceptible. Dong et al. (2018) considered generating strong adversarial examples by iteratively using the accumulated momentum of previous gradients.
Adversarial examples were initially intended to attack models and cause malfunction. Meanwhile, they also serve as an evaluation metric for a model's performance against manipulated inputs.
Inspired by saliency-guided training procedures, which make the model explicitly highlight important features, and by progress on adversarial examples, this paper investigates the effect of saliency-based training on the adversarial robustness of Deep Neural Networks. Although related works such as Etmann et al. (2019) show that adversarial training for robustness leads to improved saliency maps, we are unaware of research investigating the effect of saliency training on adversarial robustness. This paper presents several experiments and gives insights into the connection between saliency training and robustness.
## 2 Methodology
Our objective is to establish a fair comparison between regular and saliency-guided models against different adversarial attacks. Models are trained from scratch for the classification task. Saliency-guided models are trained following Ismail et al. (2021). To quantify robustness, we consider white-box attacks, in which adversarial examples are generated with full awareness of the model's weights. We evaluate various perturbation-based algorithms following Rauber et al. (2017). We briefly review the adversarial attack algorithms in this section and discuss the results in the next sections.
```
1: Initialize \(f_{\theta_{0}}\)
2: for \(i=1\) to \(N_{epochs}\) do
3:   Indexes = Sort(\(\nabla_{x}f_{\theta_{i}}\))
4:   Mask \(X(\mathrm{Indexes})\)
5:   Calculate the loss: \(L_{i}=L(f_{\theta}(X_{i}),y_{i})+\lambda D_{KL}(f_{\theta}(X_{i})|f_{\theta}(\widetilde{X}_{i}))\)
6:   Update the model's weights
7: end for
8: end
```
**Algorithm 1** Saliency Training
### Saliency Guided Training
Following Ismail et al. (2021), the saliency-guided model is trained by adding a regularization term to the cost function. In each iteration, input images are fed to the model, and the gradients are calculated and sorted by their magnitude. \(K\) is a parameter that determines the number of low-gradient values to be replaced. By attributing the gradient values to the input, the related pixels are masked. Algorithm 1 shows the training procedure. Masking values are selected randomly from a uniform distribution over the original image's pixel-value range. To make the model learn salient features, the model's weights are updated to output the same distribution for the original and the masked input; the Kullback-Leibler (KL) divergence regularization term in Eq. 1 enforces this.
\[\sum_{i=1}^{n}\left[L(f_{\theta}(X_{i}),y_{i})+\lambda D_{KL}(f_{\theta}(X_{i })|f_{\theta}(\widetilde{X}_{i}))\right] \tag{1}\]
where \(X_{i}\) is the input data, \(\widetilde{X}_{i}\) is the masked input, and \(\theta\) denotes the model's parameters. The hyperparameter \(\lambda\) acts as a trade-off between the loss terms.
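For concreteness, here is a minimal PyTorch sketch of one saliency-guided training step in the spirit of Algorithm 1 and Eq. 1; the function name, the 50% masking ratio, and the per-image masking range are assumptions mirroring the setup described later, not the authors' exact code.

```
import torch
import torch.nn.functional as F

def saliency_guided_step(model, x, y, optimizer, lam=1.0, mask_ratio=0.5):
    # Rank input pixels by |gradient of the loss| and mask the lowest
    # ones with random values drawn from each image's own pixel range.
    x = x.clone().requires_grad_(True)
    grads = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    flat = grads.abs().flatten(1)
    k = int(mask_ratio * flat.shape[1])
    idx = flat.argsort(dim=1)[:, :k]                 # low-gradient pixel indices

    x_masked = x.detach().clone().flatten(1)
    lo = x_masked.min(dim=1, keepdim=True).values
    hi = x_masked.max(dim=1, keepdim=True).values
    rand = lo + (hi - lo) * torch.rand_like(x_masked)
    x_masked.scatter_(1, idx, rand.gather(1, idx))
    x_masked = x_masked.view_as(x)

    # Joint objective: classification loss plus the KL divergence between
    # output distributions for the original and masked inputs (Eq. 1).
    logits = model(x.detach())
    log_p_masked = F.log_softmax(model(x_masked), dim=1)
    loss = F.cross_entropy(logits, y) + lam * F.kl_div(
        log_p_masked, F.softmax(logits, dim=1), reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```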
### Fast Gradient Sign Method
FGSM Goodfellow et al. (2014) is a white-box method that manipulates the whole input based on the model's weights in order to lead the model to an incorrect prediction. The perturbation occurs in the direction of the sign of the gradient. It is a fast, one-step gradient method with a fixed step size. Eq. 2 shows the update rule, where \(\epsilon\) is the amount of perturbation and \(y_{true}\) is the correct label for the input \(x\).
\[x^{\rm adv}=x+\epsilon\cdot\mathrm{sign}(\nabla_{x}J(x,y_{true})) \tag{2}\]
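A minimal PyTorch sketch of this one-step update, assuming inputs scaled to \([0,1]\) and reusing the imports from the sketch above:

```
def fgsm(model, x, y, eps):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)                    # J(x, y_true)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()    # Eq. (2), kept in range
```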
### Basic Iterative Method
BIM Kurakin et al. (2018) extends FGSM by applying a small amount of perturbation in each iteration. After each step, the generated input is clipped so that the final result stays in the \(\epsilon\)-neighbourhood of the original input:
\[x_{0}^{adv}=x,\qquad x_{N+1}^{adv}=\mathrm{Clip}_{x,\epsilon}\left(x_{N}^{adv}+\alpha\,\mathrm{sign}(\nabla_{x}J(x_{N}^{adv},y_{true}))\right) \tag{3}\]
### Projected Gradient Descent
Similar to BIM, PGD Madry et al. (2017) generates adversarial examples iteratively. In each step, the input is manipulated by solving a constrained optimization problem. The predefined constraint keeps the content close to the original input so that the generated example looks imperceptibly different to human eyes. PGD differs from BIM in that the initialization is set randomly.
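A minimal sketch of an \(L_\infty\) PGD attack with random start, in the same style as the FGSM snippet (the step size and iteration count are illustrative):

```
def pgd(model, x, y, eps, alpha, steps=10):
    # Random start inside the eps-ball, then iterate signed gradient
    # steps, projecting back into the ball and the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```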
### Momentum Iterative Method
MIM Dong et al. (2018) iterates over the cumulative direction of previous gradients to perturb the input.
\[g_{(N+1)}=\mu g_{N}+\frac{\nabla_{x}L(f(x_{N}^{adv},\theta^{*}),y)}{\|\nabla_{ x}L(f(x_{N}^{adv},\theta^{*}),y)\|_{2}},x_{(N+1)}^{adv}=x_{N}^{adv}+\alpha \cdot sign(g_{(N+1)}) \tag{4}\]
## 3 Implementation Details
### Datasets
#### Mnist
The dataset LeCun et al. (2010) includes 10 classes of handwritten digits, containing 70000 grayscale images of size \(28\times 28\) pixels. We split the dataset into 60000 images for training and 10000 for the test set.
#### Cifar-10
The dataset includes 60000 low-resolution RGB images in 10 classes: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The training and test sets include 50000 and 10000 images, respectively.
#### Model Architecture
We focused on convolution-based architectures for the implementation. Our architecture follows Ismail et al. (2021) for the saliency-guided training model. For the MNIST dataset, we employed a two-layer CNN with kernel size 3 and stride 1, followed by two fully-connected layers. Two dropout layers were also used, with \(p=0.25\) and \(p=0.5\).
For CIFAR-10, following Ismail et al. (2021), we used ResNet18 (pretrained on ImageNet) He et al. (2016) with a 10-neuron classifier as the output. We trained our models on a single NVIDIA A100 GPU for 100 epochs with a batch size of 256, using Adadelta Zeiler (2012) as the optimizer with learning rate \(0.1\).
During training, within each epoch, pixels with low gradient values are replaced with random values drawn from the range of the remaining pixels of the image. Gradient values are calculated using the Captum library Kokhlikyan et al. (2020). A fixed fraction of \(50\%\) of the lower gradient values is removed.
For comparable accuracy and a fair comparison, we employed the same architecture for regular training.
#### Robustness
For robustness evaluation, we used the Foolbox library Rauber et al. (2017) for the adversarial attack algorithms. We evaluated each model's test accuracy for different degrees of perturbation.
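A minimal sketch of this evaluation loop, assuming the Foolbox 3.x API; the attack choice and epsilon values are illustrative, and `model`, `images`, and `labels` are assumed to come from the training setup above.

```
import foolbox as fb

fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
attack = fb.attacks.LinfPGD()                    # or e.g. fb.attacks.FGSM()
epsilons = [0.0, 0.01, 0.03, 0.1, 0.3]
# images, labels: a batch of test data as torch tensors on the same device
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=epsilons)
robust_acc = 1 - is_adv.float().mean(dim=-1)     # test accuracy per epsilon
```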
## 4 Results and Discussions
Our purpose is to quantify the connection between robustness and the saliency-guided training approach. To this end, we trained models following saliency-guided training Ismail et al. (2021). We also trained the same architecture without guided training as a baseline for comparison. Afterward, we evaluated the classification accuracy of the trained models against adversarial examples. Results for MNIST and CIFAR-10 are shown in Figures 1 and 2, respectively.
### Conclusion
The interpretability of Deep Learning models is of high importance, depending on the application. As the topic grows popular among researchers, we investigate a different aspect of saliency training. Despite the fact that high gradient values may help visual interpretability, our results show that they may compromise robustness against attacks. Through visual observations (Fig. 3) of the models trained with the saliency approach, it can be seen that by highlighting high-value gradients and their pertinent pixels, which are considered the main object, the main object becomes the center of attention for adversarial examples. Since adversarial-example methods perturb the input in the direction of the gradient, this result appears well grounded. This opens the way for adversarial methods to affect the
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & \# Test & \# Classes & Accuracy & SAL-Accuracy \\ \hline MNIST & 10000 & 10 & 99.3\% & 98.8\% \\ CIFAR-10 & 10000 & 10 & 82.8\% & 82.2\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Traditional and Saliency Model Training Accuracy
Figure 1: Saliency guided trained and regular model performance against attacks over MNIST dataset for different amount of perturbation. a. FGSM attack b. PGD attack c. BIM attack
Figure 2: Saliency guided trained and regular model performance against attacks over CIFAR10 dataset for different amount of perturbation. a. FGSM attack b. PGD attack c. BIM attack
model's performance. Our work encourages researchers to consider both aspects, saliency and robustness, together.
|
2308.09858 | Tensor-Compressed Back-Propagation-Free Training for (Physics-Informed)
Neural Networks | Backward propagation (BP) is widely used to compute the gradients in neural
network training. However, it is hard to implement BP on edge devices due to
the lack of hardware and software resources to support automatic
differentiation. This has tremendously increased the design complexity and
time-to-market of on-device training accelerators. This paper presents a
completely BP-free framework that only requires forward propagation to train
realistic neural networks. Our technical contributions are three-fold. Firstly,
we present a tensor-compressed variance reduction approach to greatly improve
the scalability of zeroth-order (ZO) optimization, making it feasible to handle
a network size that is beyond the capability of previous ZO approaches.
Secondly, we present a hybrid gradient evaluation approach to improve the
efficiency of ZO training. Finally, we extend our BP-free training framework to
physics-informed neural networks (PINNs) by proposing a sparse-grid approach to
estimate the derivatives in the loss function without using BP. Our BP-free
training only loses little accuracy on the MNIST dataset compared with standard
first-order training. We also demonstrate successful results in training a PINN
for solving a 20-dim Hamiltonian-Jacobi-Bellman PDE. This memory-efficient and
BP-free approach may serve as a foundation for the near-future on-device
training on many resource-constraint platforms (e.g., FPGA, ASIC,
micro-controllers, and photonic chips). | Yequan Zhao, Xinling Yu, Zhixiong Chen, Ziyue Liu, Sijia Liu, Zheng Zhang | 2023-08-18T23:56:50Z | http://arxiv.org/abs/2308.09858v2 | # Tensor-Compressed Back-Propagation-Free Training for
###### Abstract
Backward propagation (BP) is widely used to compute the gradients in neural network training. However, it is hard to implement BP on edge devices due to the lack of hardware and software resources to support automatic differentiation. This has tremendously increased the design complexity and time-to-market of on-device training accelerators. This paper presents a completely BP-free framework that only requires forward propagation to train realistic neural networks. Our technical contributions are three-fold. Firstly, we present a tensor-compressed variance reduction approach to greatly improve the scalability of zeroth-order (ZO) optimization, making it feasible to handle a network size that is beyond the capability of previous ZO approaches. Secondly, we present a hybrid gradient evaluation approach to improve the efficiency of ZO training. Finally, we extend our BP-free training framework to physics-informed neural networks (PINNs) by proposing a sparse-grid approach to estimate the derivatives in the loss function without using BP. Our BP-free training only loses little accuracy on the MNIST dataset compared with standard first-order training. We also demonstrate successful results in training a PINN for solving a 20-dim Hamiltonian-Jacobi-Bellman PDE. This memory-efficient and BP-free approach may serve as a foundation for the near-future on-device training on many resource-constraint platforms (e.g., FPGA, ASIC, micro-controllers, and photonic chips).
## 1 Introduction
In neural network training, it is widely assumed that the gradient information can be computed via backward propagation (BP) [12] to perform SGD-type optimization. This is true on HPC and desktop computers, because they support PyTorch/TensorFlow backends and automatic differentiation (AD) packages [1] to compute exact gradients. However, performing BP is infeasible on many edge devices (e.g., FPGA, ASIC, photonic chips, embedded micro-processors) due to the lack of the computing and memory resources necessary to support AD libraries. Manually calculating the gradient of a modern neural network on edge hardware is time-consuming due to the high complexity and many debugging iterations. This can significantly delay the development and deployment of AI training accelerators. For instance, designing an FPGA inference accelerator may be done within one week via high-level synthesis by an experienced AI hardware expert, but designing an FPGA training accelerator can take one or two years due to the high complexity of implementing BP. Some analog edge computing platforms (e.g., photonic chips) do not even have scalable memory arrays to store the final or intermediate results of a chain-rule-based gradient computation.
This paper investigates the end-to-end training of neural networks and physics-informed neural networks (PINNs) without using BP, which can potentially make on-device neural network training accelerator design as easy as designing an inference accelerator. The demand for edge-device training has been growing rapidly in recent years. One motivation is to ensure AI model performance under varying data streams and environments, as well as device parameter shift. Another motivation is the increasing concern about data privacy, which requires end-to-end or incremental learning on edge devices based on local data [16]. In science and engineering, PINNs [14, 15] have been increasingly used to solve the forward and inverse problems of high-dimensional partial differential equations (PDEs). A PINN needs to be retrained once the PDE initial conditions, boundary conditions, or measurement data change. Applications of such on-device PINN training include (but are not limited to) safety-aware verification and control of autonomous systems [2, 23, 24] and MRI-based electrical property tomography [25].
We intend to perform BP-free training via stochastic zeroth-order (ZO) optimization [26, 27, 28, 29, 30], which uses a few forward evaluations with perturbed model parameters to approximate the true gradient. In the realm of deep learning, ZO optimization was primarily used for crafting black-box adversarial examples to assess neural network robustness [3, 16], and for parameter and memory-efficient model fine-tuning [26, 27]. Notably, ZO optimization has been rarely used in neural network training from scratch, because the variance of the ZO gradient estimation is large when the number of training variables increases.
Leveraging ZO optimization, we present a scalable BP
free approach for training both standard neural networks and PINNs from scratch. Our novel contributions include:
* We present a tensor-compressed variance reduction approach to greatly improve the scalability of ZO optimization, making it feasible to handle a network size that is beyond the capability of previous ZO approaches.
* We present a hybrid gradient evaluation approach to improve the efficiency of ZO training.
* We extend our BP-free training framework to physics-informed neural networks (PINNs) by proposing a sparse-grid approach to estimate the derivatives in the loss function without using BP.
### Tensor-Train (TT) Variance Reduction
Our goal is to develop a BP-free training framework based on ZO optimization. As explained before, the ZO gradient estimation error is dominated by \(d\), which is the number of total trainable variables. In realistic neural network training, \(d\) can easily exceed \(10^{5}\) even for very simple image classification problems. This prevents the direct application of ZO optimization in end-to-end neural network training.
To improve the scalability of ZO training, we propose to significantly reduce the dimensionality and gradient MSE error via a _low-rank_ tensor-compressed training. Let \(\mathbf{W}\in\mathbb{R}^{M\times N}\) be a generic weight matrix in a neural network. We factorize its dimension sizes as \(M=\prod_{i=1}^{L}m_{i}\) and \(N=\prod_{j=1}^{L}n_{j}\), fold \(\mathbf{W}\) into a \(2L\)-way tensor \(\mathbf{\mathcal{W}}\in\mathbb{R}^{m_{1}\times m_{2}\times\cdots\times m_{L} \times n_{1}\times n_{2}\times\cdots\times n_{L}}\), and parameterize \(\mathbf{\mathcal{W}}\) with the tensor-train (TT) decomposition [10]:
\[\mathbf{\mathcal{W}}(i_{1},i_{2},\dots,i_{L},j_{1},j_{2},\dots,j_{L})\approx\prod_ {k=1}^{L}\mathbf{G}_{k}(i_{k},j_{k}) \tag{7}\]
Here \(\mathbf{G}_{k}(i_{k},j_{k})\in\mathbb{R}^{r_{k-1}\times r_{k}}\) is the \((i_{k},j_{k})\)-th slice of the TT-core \(\mathbf{\mathcal{G}}_{k}\in\mathbb{R}^{r_{k-1}\times m_{k}\times n_{k}\times r_{k}}\), obtained by fixing its \(2\)nd index as \(i_{k}\) and \(3\)rd index as \(j_{k}\). The entries of the vector \((r_{0},r_{1},\dots,r_{L})\) are called the TT-ranks, with the constraint \(r_{0}=r_{L}=1\). This TT representation reduces the number of unknown variables from \(\prod\limits_{k=1}^{L}m_{k}n_{k}\) to \(\sum\limits_{k=1}^{L}r_{k-1}m_{k}n_{k}r_{k}\). The compression ratio can be controlled by the TT-ranks, which can be learnt automatically via Bayesian tensor rank determination [11, 12].
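As a concrete check of this parameter count, the snippet below compares a dense \(784\times 1024\) layer (as used in the MNIST experiment of Section 5) against its TT form; the helper name and the TT-ranks \([1,8,8,8,1]\) are our own illustrative choices, not values fixed by the paper.

```python
import numpy as np

def tt_param_count(ms, ns, ranks):
    """Trainable entries in TT-cores G_k of shape (r_{k-1}, m_k, n_k, r_k),
    with ranks = (r_0, ..., r_L) and r_0 = r_L = 1."""
    return sum(ranks[k] * ms[k] * ns[k] * ranks[k + 1] for k in range(len(ms)))

ms, ns = [7, 4, 4, 7], [8, 4, 4, 8]            # 7*4*4*7 = 784, 8*4*4*8 = 1024
ranks = [1, 8, 8, 8, 1]                        # illustrative TT-ranks
dense = int(np.prod(ms)) * int(np.prod(ns))    # 802,816 dense entries
tt = tt_param_count(ms, ns, ranks)             # 2,944 TT entries
print(dense, tt, dense / tt)                   # ~273x fewer trainable variables
```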
In the ZO training process, we change the training variables from \(\mathbf{W}\) to the TT factors \(\{\mathbf{\mathcal{G}}_{k}\}_{k=1}^{L}\). This reduces the problem dimensionality \(d\) by several orders of magnitude, leading to a dramatic reduction of the variance in the RGE gradient estimation and of the query complexity in the CGE gradient estimator. In the ZO training, we only need forward evaluations, where the original matrix-vector product is replaced with low-cost tensor-network contraction. This offers both memory and computing cost reduction in the ZO training process. While we assume \(\mathbf{W}\) is a weight matrix, other model parameters like embedding tables and convolution filters can be compressed simultaneously in the same way (with adjustable ranks) in the ZO training process.
### A Hybrid ZO Optimizer
With the above tensor-compressed variance reduction, we can employ either RGE or CGE for ZO gradient estimation and perform BP-free training. In practice, the RGE method converges very slowly in the late stage of training due to the large gradient error. CGE needs fewer training epochs due to more accurate gradient estimation, but it needs many more forward evaluations per gradient estimation since it only perturbs one model parameter in each forward evaluation.
To enhance both the accuracy and efficiency of the whole ZO training process, we employ a hybrid ZO training scheme that involves two stages:
* **ZO-signRGE coarse training:** The RGE method perturbs all model parameters simultaneously with a single random perturbation, requiring significantly fewer forward queries per epoch compared to the CGE method. However, the two-level stochasticity (in SGD and in the gradient estimation, respectively) of ZO via RGE (ZO-RGE) may cause a large gradient variance and lead to divergence in high-dimensional tasks. To address this issue, we adopt the concept from signSGD [13] and its ZO counterpart, ZO-signSGD [11], to de-noise the ZO gradient estimation by preserving only the sign for each update: \[\mathbf{\theta}_{t}\leftarrow\mathbf{\theta}_{t-1}-\alpha\mathrm{sign}\left[\sum_{i=1 }^{N}\frac{1}{N\mu}\left[\mathcal{L}\left(\mathbf{\theta}_{t-1}+\mu\mathbf{\xi}_{i} \right)-\mathcal{L}(\mathbf{\theta}_{t-1})\right]\mathbf{\xi}_{i}\right].\] We refer to this method as ZO-signRGE. ZO-signRGE retains only the sign of each gradient estimation, mitigating the adverse effects of the high variance of RGE. As a result, it exhibits better robustness to gradient noise and demonstrates faster empirical convergence.
* **ZO-CGE fine-tuning:** The CGE method requires \(d+1\) forward evaluations per gradient estimation. Therefore, the ZO training with CGE (ZO-CGE) requires many function queries in the whole training process. To accelerate training, we further adopt the idea of momentum [10]. The momentum method accumulates a velocity vector in directions of persistent reduction in the objective across iterations [21]. The descent direction \(\mathbf{b}^{t}\) is given by an exponential moving average of the past gradients, \(\mathbf{b}^{t}\gets m\mathbf{b}^{t-1}+\mathbf{g}^{t}\). Here \(m\) is the momentum term, \(\mathbf{g}^{t}\) is the estimated gradient vector at time \(t\), and \(\mathbf{b}^{0}=\mathbf{g}^{0}\). The ZO-signRGE coarse training rapidly explores a roughly converged solution with a small number of loss evaluations. When the coarse training fails to learn (e.g., the training loss exhibits trivial updates for several epochs), the optimizer switches to ZO-CGE to fine-tune the model. A minimal sketch of both update rules follows this list.
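The sketch below illustrates the two update rules on a flat parameter vector; the function names, the scalar `loss_fn` interface, and the default step sizes are our own illustrative assumptions (in the actual method the trainables are the TT factors).

```python
import numpy as np

def zo_sign_rge_step(theta, loss_fn, alpha=1e-3, mu=0.1, N=10):
    """One ZO-signRGE step: average N random-perturbation (RGE) gradient
    estimates, then keep only the sign of each coordinate."""
    base = loss_fn(theta)
    g = np.zeros_like(theta)
    for _ in range(N):
        xi = np.random.randn(*theta.shape)            # random direction
        g += (loss_fn(theta + mu * xi) - base) / (N * mu) * xi
    return theta - alpha * np.sign(g)

def zo_cge_step(theta, loss_fn, b, alpha=1e-3, mu=1e-3, m=0.9):
    """One ZO-CGE step with momentum: perturb each coordinate in turn,
    costing d + 1 forward evaluations for d parameters."""
    base = loss_fn(theta)
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e.flat[i] = mu
        g.flat[i] = (loss_fn(theta + e) - base) / mu
    b = g if b is None else m * b + g                 # b^t = m*b^(t-1) + g^t
    return theta - alpha * b, b
```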
## 4 BP-free Training for PINN
In this section, we extend the proposed TT-compressed hybrid ZO training to PINNs. Training a PINN is more challenging because the loss function in (5) involves first- or even higher-order derivatives. We intend to avoid BP computation in the loss evaluation as well.
### Stein Gradient Estimation
Without loss of generality, for an input \(\mathbf{x}\in\mathbb{R}^{D}\) and an approximated PDE solution \(\mathbf{u}_{\mathbf{\theta}}(\mathbf{x})\in\mathbb{R}^{n}\) parameterized by \(\mathbf{\theta}\), we consider the first-order derivative \(\nabla_{\mathbf{x}}\mathbf{u}_{\mathbf{\theta}}\) and Laplacian \(\Delta\mathbf{u}_{\mathbf{\theta}}\) involved in the loss function of a PINN training. Our implementation leverages the Stein estimator [10]. Specifically, we represent the PDE solution \(\mathbf{u}_{\mathbf{\theta}}(\mathbf{x})\) via a Gaussian smoothed model:
\[\mathbf{u}_{\mathbf{\theta}}(\mathbf{x})=\mathbb{E}_{\mathbf{\delta}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})}f_{\mathbf{\theta}}(\mathbf{x}+\mathbf{\delta}), \tag{8}\]
where \(f_{\mathbf{\theta}}\) is a neural network with parameters \(\mathbf{\theta}\); \(\mathbf{\delta}\in\mathbb{R}^{D}\) is the random noise sampled from a multivariate Gaussian distribution \(\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\). With this special formulation, the
first-order derivative and Laplacian of \(\mathbf{u}_{\mathbf{\theta}}(\mathbf{x})\) can be reformulated as the expectation terms:
\[\nabla_{\mathbf{x}}\mathbf{u}_{\mathbf{\theta}} =\mathbb{E}_{\mathbf{\delta}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})} \left[\frac{\mathbf{\delta}}{2\sigma^{2}}(f_{\mathbf{\theta}}(\mathbf{x}+\mathbf{\delta})-f_{ \mathbf{\theta}}(\mathbf{x}-\mathbf{\delta}))\right], \tag{9}\] \[\Delta\mathbf{u}_{\mathbf{\theta}} =\mathbb{E}_{\mathbf{\delta}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})} \left[f_{\mathbf{\theta}}(\mathbf{x}+\mathbf{\delta})+f_{\mathbf{\theta}}(\mathbf{x}-\mathbf{ \delta})-2f_{\mathbf{\theta}}(\mathbf{x})\right]\] \[\qquad\qquad\qquad\qquad\times\frac{\|\mathbf{\delta}\|^{2}-\sigma^{ 2}D}{2\sigma^{4}}.\]
In (He et al., 2023), the above expectation is computed by evaluating \(f_{\mathbf{\theta}}(\mathbf{x}+\mathbf{\delta})\) and \(f_{\mathbf{\theta}}(\mathbf{x}-\mathbf{\delta})\) at a set of i.i.d. Monte Carlo samples of \(\mathbf{\delta}\). Compared with finite difference (Chiu et al., 2022; Xiang et al., 2022) which requires repeated calculations of gradients for each dimension, the Stein estimator can directly provide the vectorized gradients for all dimensions, allowing efficient parallel implementations. However, the Monte-Carlo Stein gradient estimator (He et al., 2023) needs a huge number of (e.g., \(>10^{3}\)) function queries even if variance reduction is utilized. Therefore, it is highly desirable to develop a more efficient BP-free method for evaluating the derivative terms in the loss function.
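A minimal sketch of this Monte Carlo Stein estimator for a scalar-valued network `f` is given below; the function name and interface are our own, and the sample count mirrors the order of magnitude quoted above.

```python
import numpy as np

def stein_derivatives_mc(f, x, sigma=0.1, n_samples=1024):
    """Monte Carlo Stein estimates of grad_x u and Laplacian(u) for the
    Gaussian-smoothed model u(x) = E[f(x + delta)], following Eqs. (8)-(9)."""
    D = x.shape[0]
    delta = sigma * np.random.randn(n_samples, D)
    f_plus = np.array([f(x + d) for d in delta])
    f_minus = np.array([f(x - d) for d in delta])
    f0 = f(x)
    grad = np.mean(delta * ((f_plus - f_minus) / (2 * sigma**2))[:, None], axis=0)
    lap = np.mean((f_plus + f_minus - 2 * f0)
                  * (np.sum(delta**2, axis=1) - sigma**2 * D) / (2 * sigma**4))
    return grad, lap
```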
### Sparse-Grid Stein Gradient Estimator
Now we leverage the sparse grid techniques (Garcke et al., 2006; Gerstner and Griebel, 1998) to reduce the number of function queries in the Stein gradient estimator. Sparse grids have been extensively used in the uncertainty quantification (Nobile, Tempone, and Webster, 2008) of stochastic PDEs, but they have not been used in PINN training.
To begin, we define a sequence of univariate quadrature rules \(V=\{V_{l}:l\in\mathbb{N}\}\). Here \(l\) denotes an accuracy level so that any polynomial function of order \(\leq l\) can be exactly integrated with \(V_{l}\). Each rule \(V_{l}\) specifies \(n_{l}\) nodes \(N_{l}=\{\delta_{1},\ldots,\delta_{n_{l}}\}\) and the corresponding weight function \(w_{l}:N_{l}\rightarrow\mathbb{R}\). A univariate quadrature rule \(V_{k}\) for a function \(f\) of a random variable \(\delta\) can be written as:
\[\int_{\mathbb{R}}f(\delta)p(\delta)\,d\delta\approx V_{k}[f]=\sum_{\delta_{j} \in N_{k}}w_{k}(\delta_{j})f(\delta_{j}). \tag{10}\]
Here \(p(\delta)\) is the probability density function (PDF) of \(\delta\).
Next, we consider the multivariate integration of a function \(f\) over a random vector \(\mathbf{\delta}=(\delta^{1},\ldots,\delta^{D})\). We denote the joint PDF of \(\mathbf{\delta}\) as \(p(\mathbf{\delta})=\prod_{m=1}^{D}p(\delta^{m})\) and define the \(D\)-variate quadrature rule with potentially different accuracy levels in each dimension indicated by the multi-index \(\mathbf{l}=(l_{1},l_{2},...,l_{D})\in\mathbb{N}^{D}\). The Smolyak algorithm (Gerstner and Griebel, 1998) can be used to construct sparse grids by combining full tensor-product grids of different accuracy levels and removing redundant points. Specifically, for any non-negative integer \(q\), define \(\mathbb{N}_{q}^{D}=\left\{\mathbf{l}\in\mathbb{N}^{D}:\sum_{m=1}^{D}l_{m}=D+q\right\}\) and \(\mathbb{N}_{q}^{D}=\emptyset\) for \(q<0\). The level-\(k\) Smolyak rule \(A_{D,k}\) for \(D\)-dim integration can be written as (Wasilkowski and Wozniakowski, 1995):
\[A_{D,k}[f]=\sum_{q=k-D}^{k-1}(-1)^{k-1-q}\left(\begin{array}{c}D-1\\ k-1-q\end{array}\right)\times \tag{11}\] \[\sum_{\mathbf{l}\in\mathbb{N}_{q}^{D}}\left(V_{l_{1}}\otimes\cdots\otimes V_ {l_{D}}\right)[f].\]
It follows that:
\[A_{D,k}[f] =\sum_{q=k-D}^{k-1}\sum_{\mathbf{l}\in\mathbb{N}_{q}^{D}}\sum_{\mathbf{ \delta}^{1}\in N_{l_{1}}}\cdots\sum_{\mathbf{\delta}^{D}\in N_{l_{D}}}(-1)^{k-1-q}\times\] \[\left(\begin{array}{c}D-1\\ k-1-q\end{array}\right)\prod_{m=1}^{D}w_{l_{m}}(\delta^{m})f(\delta^{1}, \ldots,\delta^{D}),\]
which is a weighted sum of function evaluations \(f(\mathbf{\delta})\) for \(\mathbf{\delta}\in\bigcup_{q=k-D}^{k-1}\bigcup_{\mathbf{l}\in\mathbb{N}_{q}^{D}}\left(N_{l_{1}} \times\cdots\times N_{l_{D}}\right)\). The corresponding weight is \((-1)^{k-1-q}\left(\begin{array}{c}D-1\\ k-1-q\end{array}\right)\prod_{m=1}^{D}w_{l_{m}}(\delta^{m})\). For the same \(\mathbf{\delta}\) that appears multiple times for different combinations of values of \(\mathbf{l}\), we only need to evaluate \(f\) once and sum up the respective weights beforehand. The resulting level-\(k\) sparse quadrature rule defines a set of \(n_{L}\) nodes \(S_{L}=\{\mathbf{\delta}_{1},\ldots,\mathbf{\delta}_{n_{L}}\}\) and the corresponding weights \(\{w_{1},\ldots,w_{n_{L}}\}\). The \(D\)-dim integration can then be efficiently computed with the sparse grids as:
\[\int_{\mathbb{R}^{D}}f(\mathbf{\delta})p(\mathbf{\delta})d\mathbf{\delta}\approx A_{D,k}[f]= \sum_{j=1}^{n_{L}}w_{j}f(\mathbf{\delta}_{j}). \tag{12}\]
In practice, since the sparse grids and the weights do not depend on \(f\), they can be pre-computed for the specific quadrature rule, dimension \(D\), and accuracy level \(k\).
Finally, we implement the Stein gradient estimator in (9) via the sparse-grid integration. Noting that \(\mathbf{\delta}\sim\mathcal{N}\left(\mathbf{0},\sigma^{2}\mathbf{I}\right)\), we can use univariate Gaussian quadrature rules as a basis to construct a level-\(k\) sparse Gaussian quadrature rule \(A_{D,k}^{\star}\) for \(D\)-variate integration. Then the first-order derivative and Laplacian in (9) are approximated as:
\[\nabla_{\mathbf{x}}\mathbf{u}_{\mathbf{\theta}} \approx\sum_{j=1}^{n_{L}^{\star}}w_{j}^{\star}\left[\frac{\mathbf{ \delta}_{j}^{\star}}{2\sigma^{2}}(f_{\mathbf{\theta}}(\mathbf{x}+\mathbf{\delta}_{j}^{ \star})-f_{\mathbf{\theta}}(\mathbf{x}-\mathbf{\delta}_{j}^{\star}))\right], \tag{13}\] \[\Delta\mathbf{u}_{\mathbf{\theta}} \approx\sum_{j=1}^{n_{L}^{\star}}w_{j}^{\star}\left(\frac{\|\mathbf{ \delta}_{j}^{\star}\|^{2}-\sigma^{2}D}{2\sigma^{4}}\right)\times\] \[\left(f_{\mathbf{\theta}}(\mathbf{x}+\mathbf{\delta}_{j}^{\star})+f_{\mathbf{ \theta}}(\mathbf{x}-\mathbf{\delta}_{j}^{\star})-2f_{\mathbf{\theta}}(\mathbf{x})\right),\]
where the node \(\mathbf{\delta}_{j}^{\star}\) and weight \(w_{j}^{\star}\) are defined by the sparse grid \(A_{D,k}^{\star}\). We remark that \(n_{L}^{\star}\) is usually significantly smaller than the number of Monte Carlo samples required to evaluate (9) when \(D\) is small. For example, we only need \(n_{L}^{\star}=2D^{2}+2D+1\) nodes to approximate a \(D\)-dimensional integral (\(D>1\)) using a level-\(3\) sparse Gaussian quadrature rule \(A_{D,3}^{\star}\).
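Given precomputed nodes and weights for the sparse rule \(A_{D,k}^{\star}\) (they depend only on the rule, \(D\), and \(k\)), Eq. (13) reduces to a short weighted loop. The sketch below assumes a scalar-valued `f` and externally supplied `nodes`/`weights`; the final print checks the node-count formula against the 925 nodes used for the 21-dimensional input (20 spatial dimensions plus time) in Section 5.

```python
import numpy as np

def stein_derivatives_sg(f, x, nodes, weights, sigma=0.1):
    """Sparse-grid Stein estimates (Eq. 13). `nodes` has shape (n_L, D) and
    `weights` shape (n_L,), precomputed for a level-k sparse Gaussian rule."""
    D = x.shape[0]
    f0 = f(x)
    grad, lap = np.zeros(D), 0.0
    for d, w in zip(nodes, weights):
        fp, fm = f(x + d), f(x - d)
        grad += w * d / (2 * sigma**2) * (fp - fm)
        lap += w * (np.dot(d, d) - sigma**2 * D) / (2 * sigma**4) * (fp + fm - 2 * f0)
    return grad, lap

D = 21                          # 20 spatial dimensions plus time
print(2 * D**2 + 2 * D + 1)     # 925 nodes for a level-3 rule
```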
## 5 Numerical Results
We test our tensor-compressed ZO training method on the MNIST dataset and a PDE benchmark. We compare our method with various baselines including standard first-order training and state-of-the-art BP-free training methods.
### MNIST Image Classification
To evaluate our proposed tensor-train (TT) compressed hybrid ZO training method on realistic image classification
tasks, we train a Multilayer Perceptron (MLP) network with two FC layers (784\(\times\)1024, 1024\(\times\)10) to classify the MNIST dataset. We compare our TT-compressed hybrid ZO training with four baseline methods: 1) first-order (FO) training using BP for accurate gradient evaluation, 2) ZO-RGE, 3) ZO-signRGE, and 4) ZO-CGE. We use SGD to update model parameters. We adopt a mini-batch size of 64 and an initial learning rate \(\alpha\) of 1e-3 with an exponential decay rate of 0.9 every 10 epochs. All experiments run for 100 epochs.
**(1) Effectiveness of TT Variance Reduction.** The proposed TT variance reduction is the key to ensuring the success of ZO training on realistic neural networks. For the 2-layer MLP model, we fold the sizes of the input, hidden, and output layers as \(7\times 4\times 4\times 7\), \(8\times 4\times 4\times 8\), and \(1\times 5\times 2\times 1\), respectively. We preset the TT-ranks as [1,\(r\),\(r\),\(r\),1]. By varying \(r\) we can control the compression ratio. We term this model TT-MLP. As shown in Table 1, the TT-compressed approach can reduce the model parameters by \(205\times\). Due to the huge parameter and variance reduction, the hybrid ZO training achieves a high testing accuracy of \(96.64\%\), which is very close to the FO training result (\(97.16\%\)). We further compare our method with two other sets of baselines:
* **FO and ZO training on the MLP model.** As shown in Table 1, the original MLP model has a high testing accuracy of \(97.43\%\) when trained with a FO SGD algorithm. Note that this accuracy is very close to that of our TT-compressed hybrid ZO training method. The ZO training methods cannot converge to a decent solution due to the large variance caused by the 814K model parameters.
* **FO and ZO training with pruning.** An alternative method of reducing model parameters and ZO gradient variance is pruning. To train a sparse-pruned MLP (termed SP-MLP), we adopted the Gradient Signal Preserving (GraSP) method [20] at initialization, which targets preserving the training dynamics after pruning. To ensure a fair comparison, we constrained the total number of parameters to be at the same level as TT-MLP. Table 1 shows that pruning can indeed improve the quality of ZO training, but it still underperforms our proposed TT-compressed hybrid ZO training by a \(4.4\%\) accuracy drop.
* **Sparse ZO Optimizer:** Existing sparse ZO methods assume a random pre-specified sparsity. We also extend the sparse method to ZO training.
* **Forward Gradient (FG):** Standard FG method uses forward-mode AD to evaluate the _forward gradient_ and performs SGD-type iterations (Baydin et al., 2022). Ref. (Ren et al., 2022) further scales up this method by introducing activation perturbation and local loss to reduce the variance of forward gradient evaluation.
* **Feedback Alignment (FA):** Standard FA (Lillicrap et al., 2016) uses random and fixed backward weights. Direct feedback alignment (DFA) propagated the error through fixed random feedback connections directly from the output layer to each hidden layer. (Nokland, 2016).
The results are summarized in Table 3. For ZO optimizers, we also compare the number of loss computations needed for each iteration. Our method achieves the best accuracy among weight perturbation-based methods. Compared with SOTA FG and FA algorithms, our method also achieves comparable accuracy on the MNIST dataset. Note that our method contains over 107\(\times\) fewer model parameters, which greatly reduces the memory overhead in on-device training scenarios with restricted memory resources.
We remark that both FG and FA are not exactly black-box optimizers, since they rely on the computational graph of a given deep learning software. In practice, storing the computational graphs on an edge device can be expensive.
### PINN for Solving High-Dim HJB PDE
We further use our BP-free method to train a PINN arising from high-dim optimal control of robots and autonomous systems. We consider the following 20-dim HJB PDE:
\[\begin{split}&\partial_{t}u(\mathbf{x},t)+\Delta u(\mathbf{x},t)-0.05 \left\|\nabla_{\mathbf{x}}u(\mathbf{x},t)\right\|_{2}^{2}=-2,\\ & u(\mathbf{x},1)=\left\|\mathbf{x}\right\|_{1},\quad\mathbf{x}\in[0,1]^{20}, \;\;t\in[0,1].\end{split} \tag{14}\]
Here \(\left\|\cdot\right\|_{p}\) denotes an \(\ell_{p}\) norm. The exact solution is \(u(\mathbf{x},t)=\left\|\mathbf{x}\right\|_{1}+1-t\). The baseline neural network is a 3-layer **MLP** (\(21\times n,n\times n,n\times 1\), \(n\) denotes the number of neurons in the hidden layer) with sine activation. We consider four options for computing derivatives in the loss: 1) finite-difference (**FD**) (Lim, Dutta, and Rotaru, 2022), 2) Monte Carlo-based Stein Estimator (**SE**) (He et al., 2023), 3) our sparse-grid (**SG**) method, and 4) automatic differentiation (**AD**) as a golden reference. We approximate the solution \(u_{\mathbf{\theta}}\) by a transformed neural network \(f^{{}^{\prime}}_{\mathbf{\theta}}(\mathbf{x},t)=f_{\mathbf{\theta}}(\mathbf{x},t)+\left\|\mathbf{x }\right\|_{1}+1-t\), where \(f_{\mathbf{\theta}}(\mathbf{x},t)\) is the base neural network or its TT-compressed version. Specifically, \(u_{\mathbf{\theta}}(\mathbf{x},t)=f^{{}^{\prime}}_{\mathbf{\theta}}(\mathbf{x},t)\) for **FD** and **AD**, and \(u_{\mathbf{\theta}}(\mathbf{x},t)=\mathbb{E}_{(\mathbf{\delta}_{\mathbf{x}},\delta_{t})\sim \mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})}f^{{}^{\prime}}_{\mathbf{\theta}}(\mathbf{x}+ \mathbf{\delta}_{\mathbf{x}},t+\delta_{t})\) for **SE** and **SG**. Here the transformed network is designed to ensure that our approximated solution either exactly satisfies (**FD**, **AD**) or closely adheres to the terminal condition (**SE**, **SG**), allowing us to focus solely on minimizing the HJB residual during training.
In our hybrid ZO optimization, we set \(\mu=0.1\) and \(N=10\) in the ZO-signRGE coarse training stage. With the variance greatly reduced by TT, ZO-signRGE provides sufficiently accurate results, so fine-tuning is not necessary for this example. In the loss evaluation, we set the step size in **FD** to 0.01 and the noise level \(\sigma\) to 0.1 in **SE** and **SG**. We use 1024 samples in **SE** and 925 samples in **SG** using a level-3 sparse Gaussian quadrature rule to approximate the expectations (8) and (9). Note that for most 2-dim or 3-dim PDEs, our SG method only requires \(10\sim 30\) samples. Our results are summarized below:
* **Effectiveness of BP-free loss computation:** We compare our sparse-grid loss computation with those using AD, FD, and Monte Carlo-based Stein estimator, respectively. To evaluate different loss evaluations, we perform FO training and report the results in Table 4. The BP-free loss computation does not hurt the model performance, and our SG method is competitive compared to the original PINN training using AD for derivative computation.
* **Evaluation of the BP-free PINN Training.** Table 5 shows the MSE error of fully BP-free PINN training, using various BP-free loss computations and the proposed hybrid ZO training. By employing a tensor-train (TT) compressed model to reduce the variance, our method achieves a validation loss similar to standard FO training using AD. This observation clearly demonstrates that our method can bypass BP in both loss evaluation and model parameter updates with little accuracy loss.
* **Comparison of dimensionality/variance reduction approaches.** We compare our TT-compressed model with standard MLPs with dense fully connected (FC) layers and sparse-pruned (SP) MLPs in Table 6. For a fair comparison, all networks have a similar number of model parameters. The TT-compressed and SP models reach better results than the FC models, as TT and SP preserve the wide hidden layer.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Params & \# of loss eval. & Best Acc. \\ \hline \multicolumn{4}{c}{Hybrid ZO Optimizer} \\ \hline HZO & 814K & 814K & 83.58 \\ TT-HZO (proposed) & **4k** & 4K & **96.64** \\ SP-HZO & **4k** & 4K & 92.24 \\ \hline \multicolumn{4}{c}{Sparse ZO Optimizer} \\ \hline FLOPS & 814K & **0.1K** & 83.50 \\ STP & 814K & 24.4K & 90.20* \\ SZO-SCD & 814K & 18.3K & 93.50* \\ \hline \multicolumn{4}{c}{Forward Gradient} \\ \hline FG-W & 272K & / & 90.75 \\ LG-FG-A & 272K & / & 96.76 \\ FG-W & 429K & / & 91.44 \\ LG-FG-A & 429K & / & **97.45** \\ \hline \multicolumn{4}{c}{Feedback Alignment} \\ \hline FA & 784K & / & 97.90 \\ DFA & 1.26M & / & **98.30** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of BP-free optimizers on the MNIST dataset. * means fine-tuning on a pre-trained model. In the Forward Gradient group, FG refers to forward gradient, LG refers to local objective functions added, -W refers to weight-perturbed, and -A refers to activity-perturbed.
The SP model shows superior efficiency in FO training and achieves results similar to the TT model in ZO training. Note that the SP model only reduces the number of trainable parameters but not the number of model parameters, so TT models can achieve better memory and computation savings.
## 6 Relevant Work
Training on edge devices. On-device inference has been well studied. However, training requires extra computation for BP and extra memory for intermediate results. Edge devices usually have tight memory and power budgets, and often run without an operating system, making it infeasible to implement modern deep learning frameworks that support automatic differentiation [1]. Most on-device training frameworks implement BP by hand and only update parameters within a small subspace (e.g., the last classifier layer [1], biases [1], etc.), making them capable only of incremental learning or transfer learning by fine-tuning a pre-trained model. Some works leverage pruning or sparse training to reduce the number of trainable parameters [11, 14], but they cannot achieve real memory savings. Recent work found that tiny trainable subspaces exist in deep neural networks [10, 12], but pre-training is needed to identify such a subspace, which negates the intended training savings. Tensorized training [10, 11] offers orders-of-magnitude memory reduction in the training process, enabling end-to-end neural network training on FPGAs. However, it is non-trivial to implement this FO method to train larger models on edge devices due to the complex gradient computation.
Back-propagation-free training. Due to the complexity of performing BP on edge devices, several BP-free training algorithms have been proposed. These methods have gained more attention in recent years, as BP is also considered "biologically implausible". ZO optimization [1, 12, 13, 14, 15, 16, 17, 18] plays an important role in tackling signal processing and machine learning problems where actual gradient information is infeasible to obtain. For specific use cases, please refer to the survey by [12]. ZO optimization has also been applied in on-device training for optical neural networks (ONNs) [14, 15, 16]. A ZO SGD-based method, FLOPS [14], was proposed for end-to-end ONN training for vowel recognition, and a stochastic ZO sparse coordinate descent method, SZO-SCD [14], was further proposed for efficient ONN fine-tuning. However, most ZO optimizations scale poorly when training real-size neural networks from scratch due to their dimension-dependent gradient errors. The forward-forward algorithm was proposed for biologically plausible learning [10]. Other BP-free training frameworks include the forward gradient method [1], which updates the weights based on the directional gradient computed by forward-mode AD along a random direction. This method was further scaled up by leveraging activity perturbation and local losses for variance reduction [15]. However, existing BP-free methods focus on training on HPC platforms and do not address the unique challenges of on-device training. Furthermore, BP-free training of PINNs remains unexplored.
## 7 Conclusion
Due to the high complexity and memory overhead of back-propagation (BP) on edge devices, this paper has proposed a completely BP-free zeroth-order (ZO) approach to train real-size neural networks from scratch. Our method utilizes tensor compression to reduce the ZO gradient variance and a hybrid approach to improve the efficiency of ZO training. This has enabled highly accurate ZO training on the MNIST dataset, outperforming the sparse ZO approach. We have further extended this memory-efficient approach to train physics-informed neural networks (PINNs) by developing a sparse-grid Stein gradient estimator for the loss evaluation. Our approach has successfully solved a 20-dim HJB PDE, achieving accuracy similar to standard training using first-order optimization and automatic differentiation. Due to the huge memory reduction and the BP-free nature, our method can be easily implemented on various resource-constrained edge devices in the near future.
The ZO-CGE fine-tuning stage of our method requires many more function queries than the ZO-signRGE coarse-training stage. This prevents its application to training large models (e.g., transformers) from scratch. We will address this issue in future work.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Network & Neurons & Params & FO-AD & ZO-SG \\ \hline FC-3 & 768 & 608257 & 3.58E-04 & 1.62E-01 \\ \hline FC-3 & 78 & 7800 & 1.52E-02 & 4.35E-02 \\ FC-2 & 320 & 7700 & 6.55E-04 & 1.00E-02 \\ TT-3 & 768 & 7745 & 4.25E-05 & **4.79E-05** \\ SP-3 & 768 & 7603 & **1.54E-09** & 6.29E-05 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Validation loss of various models for the HJB PDE
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Network & Params & FO+AD & FO+FD & FO+SE & FO+SG \\ & & & & & (proposed) \\ \hline MLP & 608257 & 3.58E-04 & 5.97E-05 & 8.98E-04 & 4.27E-04 \\ \hline TT-MLP & 7745 & 4.28E-05 & **4.90E-05** & 2.28E-04 & 7.14E-05 \\ & 3209 & 1.63E-05 & 4.56E-05 & 3.82E-04 & **4.49E-05** \\ \hline \hline \end{tabular}
\end{table}
Table 4: FO training with different loss evaluations.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Network & Params & FO+AD & ZO+FD & ZO+SE & ZO+SG \\ & & & & & (proposed) \\ \hline MLP & 608257 & 3.58E-04 & 9.23E-02 & diverge & 1.62E-01 \\ \hline TT-MLP & 7745 & 4.28E-05 & 3.79E-04 & 4.35E-04 & **4.79E-05** \\ & 3209 & 1.63E-05 & 3.82E-04 & 2.75E-04 & **1.44E-05** \\ \hline \hline \end{tabular}
\end{table}
Table 5: ZO training results for 20-dim HJB PDE. |
2301.10802 | NASCTY: Neuroevolution to Attack Side-channel Leakages Yielding
Convolutional Neural Networks | Side-channel analysis (SCA) can obtain information related to the secret key
by exploiting leakages produced by the device. Researchers recently found that
neural networks (NNs) can execute a powerful profiling SCA, even on targets
protected with countermeasures. This paper explores the effectiveness of
Neuroevolution to Attack Side-channel Traces Yielding Convolutional Neural
Networks (NASCTY-CNNs), a novel genetic algorithm approach that applies genetic
operators on architectures' hyperparameters to produce CNNs for side-channel
analysis automatically. The results indicate that we can achieve performance
close to state-of-the-art approaches on desynchronized leakages with mask
protection, demonstrating that similar neuroevolution methods provide a solid
venue for further research. Finally, the commonalities among the constructed
NNs provide information on how NASCTY builds effective architectures and deals
with the applied countermeasures. | Fiske Schijlen, Lichao Wu, Luca Mariot | 2023-01-25T19:31:04Z | http://arxiv.org/abs/2301.10802v1 | # NASCTY: Neuroevolution to Attack Side-channel Leakages Yielding Convolutional Neural Networks
###### Abstract
Side-channel analysis (SCA) can obtain information related to the secret key by exploiting leakages produced by the device. Researchers recently found that neural networks (NNs) can execute a powerful profiling SCA, even on targets protected with countermeasures. This paper explores the effectiveness of _Neuroevolution to Attack Side-channel Traces Yielding Convolutional Neural Networks_ (NASCTY-CNNs), a novel genetic algorithm approach that applies genetic operators on architectures' hyperparameters to produce CNNs for side-channel analysis automatically. The results indicate that we can achieve performance close to state-of-the-art approaches on desynchronized leakages with mask protection, demonstrating that similar neuroevolution methods provide a solid venue for further research. Finally, the commonalities among the constructed NNs provide information on how NASCTY builds effective architectures and deals with the applied countermeasures.
_Keywords_ Side-channel analysis, Genetic algorithms, Neural networks, Neural architecture search
## 1 Introduction
Cryptographic algorithms are a ubiquitous part of modern life since they allow us to preserve the confidentiality and integrity of sensitive data. However, the implementation of such algorithms (even if they are mathematically secure) can sometimes leak information about security assets, for instance, through power consumption [10] or electromagnetic radiation [16, 12]. An attacker can attempt a _side-channel analysis_ (SCA) to exploit such leakage and retrieve the secret key or its parts.
Assuming that the attacker has an identical copy of the target device, _profiling_ SCA becomes one of the most potent attack methods. Such an attack leverages traces generated on the copy to construct a model that profiles leakage patterns corresponding to
the key-related intermediate data or the key itself. The profiling model can then recover the secret key from traces generated by the target device. Nowadays, neural networks (NNs) have become one of the most popular profiling model options thanks to their strong attack capability, even if the leakage traces are protected with countermeasures [3, 8, 26, 28, 24]. This type of attack is commonly referred to as deep learning-based SCA (DL-SCA) [15].
In practice, one of the biggest obstacles to applying DL-SCA is the design of the NN architecture and the optimization of its hyperparameters. The _architecture_ of an NN refers to its inner components, such as the neurons and the connections in between. In side-channel analysis research, an NN's architecture is often decided empirically, resulting in different architectures even on the same dataset [3, 28, 24]. Indeed, it is challenging to find an optimal architecture given an enormous number of hyperparameter combinations. Even worse, the selected architecture may not be transferable when attacking different datasets or implementations. Therefore, it would be helpful to have a sophisticated and automated approach to build an architecture for an SCA on any given dataset. While there are other approaches to the automated design of neural networks for SCA, they also come with specific issues. For instance, Rijsdijk et al. used reinforcement learning that produced top-performing neural networks, but the authors still needed to start with a general description of architectures to be designed [18]. Additionally, a reinforcement learning approach is computationally expensive and requires a cluster of GPUs and days of tuning time. On the other hand, Wu et al. used Bayesian optimization to find neural network architectures for SCA [25]. This approach is much faster than reinforcement learning while providing similar results (in terms of attack performance). Still, the authors needed to select the surrogate model and acquisition function for Bayesian optimization, which can again make the hyperparameter tuning significantly harder. Besides, both aforementioned methods rely on the experience obtained from iterations, and the question "_Is the selected model globally optimal?_" is tricky to answer.
In this paper, we propose a _Genetic Algorithm_ (GA) as an alternative to the above-mentioned methods for the hyperparameter tuning task in the context of DL-SCA. In general, GAs are quite a versatile metaheuristic for hyperparameter tuning, which optimize a population of candidate hyperparameter vectors (or individuals, in the GA terminology). A GA mimics natural evolution for these vectors by applying genetic operators such as recombination and mutation, pruning the population with a selection method, and evaluating them against a fitness function. This process is iterated over multiple generations, after which the last generation's best-performing hyperparameter vector is taken as a solution. In principle, a GA allows for obtaining robust models for leakages acquired from different cryptographic implementations.
The main contributions of this work are:
1. We provide a methodology based on genetic algorithms for tuning the hyperparameters of neural networks for profiling side-channel analysis. Our approach is automated, extensible, and capable of producing various neural networks.
2. We analyze the components of well-performing architectures constructed with our method, giving insights into the effectiveness of CNN hyperparameter options for side-channel analysis.
The rest of this paper is organized as follows. In Section 2, we provide information about profiling SCA, neural networks, genetic algorithms, and the datasets we use. Section 3 gives an overview of related works. In Section 4, we provide details about our novel methodology. Section 5 provides details about the experimental setup and
reports the obtained results. In Section 6, we provide a discussion about the obtained architectures, and finally, in Section 7, we conclude the paper.
## 2 Background
In this section, we cover all necessary background concepts related to Side-channel analysis, neural networks, and genetic algorithms that form the basis of our contribution. The treatment is essential, as a complete overview of these subjects is clearly out of the scope of this manuscript. The reader can find further information in Picek et al.'s recent systematization of knowledge paper [15].
### Profiling Side-channel Analysis
A _profiling_ attack consists of a profiling phase and an attack phase, which are analogous, respectively, to the training and test phases in supervised machine learning. In the profiling phase, an attacker uses leakages from a clone device to construct a model that maps the relationship between leakages and corresponding labels (i.e., key-related intermediate data). In the attack phase, he iterates over all possible key candidates and obtains the respective output probabilities for their labels. By repeating this process for each trace and summing the logarithms of the probabilities assigned to each key candidate, he ends up with a log probability vector used to determine the likelihood of each candidate being the correct key. Eq. (1) formulates the procedure of obtaining the log probability for key candidate \(k^{\prime}\) over \(N\) attack traces:
\[P_{log}(k^{\prime})=\sum_{i=0}^{N-1}\log(P(l(p_{i},k^{\prime}))), \tag{1}\]
where \(l(\cdot)\) denotes the cryptographic operation that generates the targeted intermediate data, \(P(\cdot)\) is the probability assigned by the profiling model, and \(p_{i}\) represents the plaintext used for leakage \(i\).
The attack performance is evaluated with the _key rank_ metric as follows:
\[\mathsf{KR}=\big{|}\{k^{\prime}|P_{\log}(k^{\prime})>P_{\log}(k)\}\big{|}. \tag{2}\]
Intuitively, the key rank is the number of key candidates with a higher likelihood of correctness than the correct key value. An attack is successful when the correct key is predicted with the highest likelihood or can be brute-forced after being placed among the few highest-likelihood candidates. In this work, we discuss the mean key rank achieved over multiple experimental runs, in which case the metric is commonly referred to as the _guessing entropy_[20]. Furthermore, as common in the related works, we will assess the attack performance against a single key byte only (which is denoted partial guessing entropy), but for simplicity, we will denote it as guessing entropy. A common assumption is that attacking a single key byte reveals the average effort required for other key bytes as well [27, 18].
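As a concrete reading of Eqs. (1)-(2) under the identity leakage model, the sketch below ranks the true key byte; `sbox` stands for the AES S-box lookup table, and the small epsilon inside the logarithm is our own numerical safeguard.

```python
import numpy as np

def key_rank(probs, plaintexts, true_key, sbox):
    """Rank of `true_key` after summing log-probabilities over the attack
    traces. `probs[i]` holds the model's 256 label probabilities for trace i."""
    n = len(plaintexts)
    log_p = np.zeros(256)
    for k in range(256):                      # every key candidate k'
        labels = sbox[plaintexts ^ k]         # l(p_i, k') for all traces
        log_p[k] = np.log(probs[np.arange(n), labels] + 1e-36).sum()   # Eq. (1)
    return int(np.sum(log_p > log_p[true_key]))                        # Eq. (2)
```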
#### 2.1.1 Countermeasures
SCA countermeasures aim to mitigate the information leakage produced during cryptographic operations. In this work, we consider the _Boolean masking_ and _desynchronization_ countermeasures. The masking countermeasure splits the sensitive intermediate
values into different shares to decrease the key dependency [12]. For instance, as implemented in the considered datasets [3], a random mask \(r\) is applied after the AddRoundKey operation in the first round of AES encryption. The intermediate value is then computed as:
\[Z=\textsf{SBOX}(p\oplus k)\oplus r. \tag{3}\]
Desynchronization is a common type of hiding countermeasure that introduces time randomness into the leakages. In practice, such effects can be realized by adding clock jitters and inserting random instructions. We simulate this effect by randomly shifting each trace by an offset drawn up to a fixed upper bound [3, 26].
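A minimal sketch of both countermeasures as they affect labels and traces; zero-padding the vacated tail after the shift is our own simplification.

```python
import numpy as np

def masked_label(p, k, r, sbox):
    """Masked intermediate value Z = Sbox(p XOR k) XOR r, as in Eq. (3)."""
    return sbox[p ^ k] ^ r

def desynchronize(trace, max_shift, rng=np.random.default_rng()):
    """Simulated desynchronization: shift the trace by a random offset
    bounded by `max_shift`, padding the tail with zeros."""
    s = int(rng.integers(0, max_shift + 1))
    return np.concatenate([trace[s:], np.zeros(s)])
```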
### Neural Networks
SCA can be considered a classification task that aims to map the input leakages to a cluster corresponding to the targeted labels. Such a task can be accomplished with a _neural network_ (NN), which is essentially a nonlinear function composed of layers of _neurons_, sometimes referred to as _nodes_. The output of a neuron is defined as follows:
\[y=\phi(\sum_{i=1}^{n}w_{i}x_{i}+b_{k}), \tag{4}\]
which is computed by multiplying the neuron's inputs \(x_{1},\dots,x_{n}\) from the previous layer with their corresponding weights \(w_{1},\dots,w_{n}\), adding the _bias_ value \(b_{k}\) corresponding to neuron \(k\), and finally transforming the result with the _activation function_\(\phi\). The activation function acts as a source of nonlinearity and often improves the efficiency of the training phase. Two common activation functions for NNs in SCA are the _rectified linear unit_ (ReLU) and _scaled exponential linear unit_ (SELU).
When training a neural network, the weight and bias for each neuron are updated with gradient descent to minimize the loss function. A common loss function in multi-class classification problems is the _categorical cross-entropy_ (CCE). Cross-entropy is a measure of the difference between two distributions. Minimizing the cross-entropy between the true distribution of the classes and the distribution modeled by the neural network improves its predictions:
\[CCE(y,\hat{y})=-\frac{1}{n}\sum_{i=1}^{l}\sum_{j=1}^{c}y_{i,j}\cdot\log(\widehat {y_{i,j}}), \tag{5}\]
where \(c\) and \(l\) respectively denote the number of classes and data, \(y\) is the true value, and \(\hat{y}\) is the predicted value.
A primary type of NN architecture in SCA is the _multilayer perceptron_ (MLP), in which a sequence of fully-connected hidden layers of neurons is followed by an output layer that transforms the final output values to label prediction probabilities. A _convolutional neural network_ (CNN) is another commonly used type of network in SCA. It prepends its first fully-connected layer with one or more convolutional blocks. Such a block consists of a _convolutional layer_ that attempts to compute local features over the input data, and it is optionally followed by a _pooling layer_ that aggregates the resulting values, e.g., by calculating \(n\)-wise averages. Eq. (6) formally displays the application of \(j\) convolution filters with kernel size \(k\) on the input window \(\{x_{i},x_{i+1},\dots,x_{i+k-1}\}\). A convolutional layer repeats such convolutions until shifted through all \(n\) inputs, resulting in \(n\cdot j\) inputs for the fully-connected layers. Formally, this operation can be stated as follows:

\[\mathsf{conv}(\{x_{i},x_{i+1},\dots,x_{i+k-1}\})=\Big\{\sum_{l=0}^{k-1}c_{0,l}\,x_{i+l},\;\sum_{l=0}^{k-1}c_{1,l}\,x_{i+l},\;\dots,\;\sum_{l=0}^{k-1}c_{j-1,l}\,x_{i+l}\Big\}. \tag{6}\]
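The sliding-window inner product behind Eq. (6) can be sketched as below; this 'valid' convolution yields \(n-k+1\) outputs per filter (padding, which would restore exactly \(n\) outputs per filter, is omitted for brevity).

```python
import numpy as np

def conv1d_valid(x, filters):
    """Plain 1-D convolution: `filters` has shape (j, k); each filter slides
    over the n trace points and emits one feature map."""
    j, k = filters.shape
    n_out = x.shape[0] - k + 1
    out = np.empty((j, n_out))
    for f in range(j):
        for i in range(n_out):
            out[f, i] = np.dot(filters[f], x[i:i + k])  # weighted window sum
    return out
```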
### Genetic Algorithms
A genetic algorithm (GA) is a type of population-based optimization algorithm that typically utilizes elements from biological evolution [6]. A GA's objective is to optimize a solution to some problem by maintaining a population of such solutions and evolving them over several _generations_. We refer to such a solution as an _individual_ or _genome_ consisting of building blocks known as _genes_. One generation is performed by evaluating the fitness of each genome, selecting fit genomes as parents for reproduction, and applying genetic operators such as mutation and crossover on those parents to generate the offspring, which represents the next generation.
Before commencing the first generation, the genomes in the population are randomly initialized for diversity. One then starts an iteration of generations until the fitness evaluation budget expires or the fitness value of the best genome achieves a predefined threshold. Each generation starts with _fitness evaluation_, by assigning a fitness value to each genome that measures how well the corresponding individual performs concerning the relative optimization problem. The next step is _selection_, which aims to cull weak genomes from the population so that the algorithm favors genetic modifications that create fitter genomes. Rather than straightforwardly selecting a number of the fittest genomes, modern GAs employ more sophisticated methods to preserve diversity in the population. One such method is _tournament selection_[13], which determines parents by holding 'tournaments' of some randomly picked genomes and retaining the ones with the best fitness as parents.
Having selected the parents, a GA usually produces as many offspring individuals as the number of parents. Production of one child's genome involves applying one or multiple _genetic operators_ to one or multiple parents. These operators include _mutation_ and _crossover_ [7], though the latter can be omitted in a nonmating GA. Mutation only requires one parent, cloned and randomly modified with a predefined mutation function to produce one child. On the other hand, crossover refers to the combination of properties of two or more parents to construct a child. In this work, we apply _polynomial mutation_, a mutation method for real-valued parameters introduced by Deb and Agrawal [4] that is designed for variables with predefined minimum and maximum boundaries. The method mutates some variable \(x\) towards either lower boundary \(x_{L}\) or upper boundary \(x_{U}\) with uniform probability. The degree of the mutation is then determined by a pseudorandom number \(0\leq u<1\) and parameter \(\eta\), with a higher value of \(\eta\) resulting in a smaller mutation range. In other words, the mutated value \(x^{\prime}\) is equal to \(x+\bar{\delta}_{L}(x-x_{L})\) or \(x+\bar{\delta}_{R}(x_{U}-x)\), with \(\bar{\delta}_{L}\) and \(\bar{\delta}_{R}\) scaling with \(u\) and \(\eta\) as defined in Eq. (7).
\[\begin{split}\bar{\delta}_{L}=&(2u)^{\frac{1}{1+ \eta}}-1\\ \bar{\delta}_{R}=& 1-\left(2(1-u)\right)^{\frac{1}{1+\eta}}. \end{split} \tag{7}\]
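One common way to realize Eq. (7) in code selects the mutation direction from the same uniform draw \(u\), so that \(\bar{\delta}_{L}\in[-1,0]\) and \(\bar{\delta}_{R}\in[0,1]\); the sketch below follows that convention.

```python
import numpy as np

def polynomial_mutation(x, x_low, x_up, eta=20, rng=None):
    """Polynomial mutation (Eq. 7): mutate x toward one of its bounds;
    larger eta keeps the mutated value closer to x."""
    rng = rng or np.random.default_rng()
    u = rng.random()
    if u < 0.5:                                   # move toward lower bound
        delta = (2 * u) ** (1 / (1 + eta)) - 1    # delta_L in [-1, 0]
        return x + delta * (x - x_low)
    delta = 1 - (2 * (1 - u)) ** (1 / (1 + eta))  # delta_R in [0, 1]
    return x + delta * (x_up - x)
```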
_Neuroevolution_ is the usage of an evolutionary algorithm for constructing or optimizing an NN. In this work, we will use a GA to construct a neural network architecture for
SCA. In such a scenario, a genome in a GA describes the hyperparameter combination of an NN. The fitness is determined through the evaluation of the network's performance.
### Datasets
We use the ASCAD dataset [3], where each trace comprises 700 trace points corresponding to the S-box operation of the third key byte. Note that we are referring to the _fixed-key_ ASCAD dataset, where the same encryption key is used in all AES operations. These traces are protected with the _masking_ countermeasure, so the intermediate value \(Z\) was computed as in Eq. (3) for some random mask byte \(r\).
The training (35 584) and validation (3 840) sets are balanced samples taken from the 50 000 training traces. Their respective numbers were chosen such that both sets are sufficiently large for their respective purposes. Since we use the identity leakage model in all our experiments with this method, both sets' numbers are a multiple of 256, i.e., the number of possible output labels. Note that while we do not expect many issues with the identity leakage model and class imbalance [14], we still balance the classes to mitigate any undesired effects. Ten thousand attack traces are used to assess the attack performance. Finally, we conduct experiments on this dataset with and without the desynchronization countermeasure.
## 3 Related Work
We now give an overview of the literature concerning optimizing NN's architectures for the SCA domain, considering both manual and fully automated approaches. Next, we briefly survey the automated methods based on neuroevolution.
### Network Architecture Optimization in SCA
In 2016, Maghrebi et al. proposed several DL-based approaches and verified their effectiveness on the DPAv2 dataset [23] and custom AES implementations with and without first-order masking [11]. The customized neural networks obtained a key rank of zero with fewer than \(10^{5}\) training traces on the masked implementation. Interestingly, their CNN architecture was determined with a genetic algorithm using the guessing entropy as a fitness function, but they do not provide in-depth elaboration on their methodology.
Benadjila et al. [3] further explored the performances of neural networks in the SCA context. Upon evaluating several promising architectures from the existing literature, VGG-16 [19] or deep CNNs with similar structures were deemed well-suited for SCA. Kim et al. [8] enhanced the attack performance in combination with data augmentation, which turned out to be an effective way of priming the CNN to deal with several kinds of countermeasures. Next, Zaid et al. introduced efficient CNN architectures [28] that could obtain state-of-the-art performance with significantly reduced neural network size. Wouters et al. [24] further reduced the networks' size with data preprocessing strategies.
Besides manually optimizing the neural network, recent research has also attempted fully automated approaches for network architecture search. Rijsdijk et al. [18] customized the MetaQNN reinforcement learning algorithm for SCA to automatically find CNN architectures. However, the search space is roughly limited to hyperparameters that we know to be effective, and pure MLP architectures are not discussed. Each NN is
evaluated by training it for 50 epochs using the Adam optimizer and the SELU activation function in their work. Wu et al. proposed AutoSCA [25], which uses _Bayesian optimization_ to find architecture hyperparameters for both MLPs and CNNs. Their approach produced good results and mainly focused on finding larger architectures with at least 100 neurons in each dense layer.
The application of neuroevolution to perform side-channel analysis has only scarcely been explored in existing work. Knezevic et al. used genetic programming to evolve custom activation functions specific to side-channel analysis [9] that can outperform the widely used ReLU function. The genome in their approach encodes an activation function as a tree containing unary and binary operators, with leaves representing the function's inputs. Such a tree is initialized with a depth of two to five levels and limited to twelve levels during evolution. For fitness evaluation, they compute the mean number of attack traces required to obtain a key rank of zero over one hundred folds and add it to one minus the accuracy. The method resulted in novel activation functions that improve performance on large and efficient MLPs and CNNs. Acharya et al. proposed InfoNEAT [1], an approach that tailors the _NeuroEvolution of Augmenting Topologies_ (NEAT) algorithm [21] specifically for side-channel analysis. Their approach considers the identity leakage model and uses NEAT to evolve an NN architecture with a single output node for each of the 256 output classes. The resulting 256 binary networks are combined by a _stacking_ approach that uses the networks' outputs as inputs for a logistic regression model. Such a stacked model is created for multiple different folds of balanced traces taken from the complete dataset, after which those models' prediction probabilities can be summed to form a final prediction for each attack trace.
### Evolution-based Network Architecture Search
Evolutionary approaches have been widely used in automated network architecture searches. Real et al. developed one such method for image classification on modern datasets [17], a task that requires large networks. They propose a nonmating GA with both NEAT-like [21] mutations and layer-level mutations to evolve CNNs on granular and large scales. Specifically, each genome is trained with backpropagation on 45 000 samples before evaluating its fitness. In this approach, a child's genome keeps the weights and biases of its parent, effectively training each network over time.
Other successful neuroevolution methods that construct CNNs for image classification include the Deep Evolutionary Network Structured Representation approach DENSER [2] and EvoCNN [22]. DENSER uses a 2-level genotype where the first level encodes the NN's hyperparameters while the second encodes layer-specific variables such as the number of neurons or the variables for a convolutional filter. This structure enables the algorithm to be used for MLPs, CNNs, and other types as long as they can be appropriately defined in the genome's second level. Furthermore, DENSER trains NNs with backpropagation before evaluating the fitness on a validation set. EvoCNN works similarly but evolves the weight initialization values along with the architecture hyperparameters.
## 4 Nascty
_Neuroevolution to Attack Side-channel Traces Yielding Convolutional Neural Networks_ (NASCTY-CNNs) is a GA that modifies hyperparameters of CNNs for side-channel analysis. Algorithm 1 shows the main procedure of our approach. This section will
specify the genome structure, the initialization of the population, the fitness evaluation method, and the method used to produce offspring. Note that the sampling of training and validation data is only performed once, meaning that we use the same data for fitness evaluation in every generation. Furthermore, we only use balanced data samples, i.e., a sampled set of traces is taken such that it contains an equal number of traces corresponding to each possible output label. Following [17, 2, 22], we also use tournament size 3 during selection, which means we randomly choose three individuals and select the fittest one as a potential parent for reproduction. Finally, note that all experiments are performed targeting the third (masked) key byte of the fixed-key ASCAD dataset in which each trace consists of 700 trace points, each of which is normalized between -1 and 1.
### Genome Structure
The NASCTY genome represents a CNN and consists of a list of zero up to and including five convolutional blocks, an optional pooling layer when no convolutional blocks are present, and a list of one up to and including five dense layers. Each of the convolutional blocks is described with the number of convolutional filters, the filter size, a Boolean denoting the presence of a batch normalization layer, and a pooling layer. Any pooling layer in the genome comprises a pooling type, either max pooling or average pooling, a pool size, and a pool stride. Finally, a dense layer is described only by its number of neurons. An example of the genome structure is presented in Figure 1.
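A minimal sketch of this genome as plain data classes, with the admissible ranges (Table 1) noted in comments; the class and field names are our own.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PoolLayer:
    pool_type: str        # "average" or "max"
    pool_size: int        # 2..50
    pool_stride: int      # 2..50

@dataclass
class ConvBlock:
    n_filters: int        # 2..128
    filter_size: int      # 1..50
    batch_norm: bool
    pooling: Optional[PoolLayer]

@dataclass
class Genome:
    conv_blocks: List[ConvBlock]   # 0..5 blocks
    pooling: Optional[PoolLayer]   # only used when conv_blocks is empty
    dense_layers: List[int]        # 1..5 layers of 1..20 neurons each
```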
When expressing a NASCTY genome as a neural network, following the state-of-the-art architectures, we always use the SELU activation function for all hidden neurons, use He weight initialization for the convolutional blocks and dense layers, and use Glorot uniform weight initialization for the output layer, which uses the softmax activation function. Although enlarging the genome parameter spaces would increase the diversity of the populations, applying the prior knowledge would speed up the evolution process.

Figure 1: An example of the genome encoding used in the NASCTY algorithm.
### Population Initialization
We initialize all networks randomly with hyperparameters within the ranges displayed in Table 1. These hyperparameter ranges, as well as the genome structure itself, are inspired by the VGG-like networks for SCA found in prior work [3, 8], as well as by the automated architecture search approach for SCA based on reinforcement learning [18].
Note that one can initialize the population by starting with architectures with a minimal number of trainable parameters to reduce the required evaluation time. However, we opt to completely initialize the population at random to avoid local optima that may come about due to the reduced diversity in the population.
### Fitness Evaluation
Once a genome is defined, the corresponding CNN is trained with the Adam optimizer.1 The loss value on the validation set is used for the fitness evaluation. By minimizing loss, we aim to have a system aligned with the related works in DL-SCA. Naturally, one could consider other options here, for instance, the ones applied in [25]. The objective of training the networks before evaluating them is to enable us to differentiate their quality more accurately. We chose to train each network for ten epochs as other works do [25, 2, 22] and preliminary experiments following the methodology recommendations by [22] showed that this is enough for similarly sized networks to observe significant CCE differences in networks of different qualities.
Footnote 1: All networks are trained with the same seed in every generation to ensure they are fairly compared.
### Offspring Production
After evaluating the fitness of each genome in one generation, the members of the next generation (offspring) can be produced. Half of these members are produced by applying tournament selection to the population to find fit genomes that will act as parents.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Parameter** & **Options** \\ \hline Num. convolutional blocks & 0 to 5 in a step of 1 \\ \hline Num. dense layers & 1 to 5 in a step of 1 \\ \hline Num. convolutional filters & 2 to 128 in a step of 1 \\ \hline Filter size & 1 to 50 in a step of 1 \\ \hline Batch normalisation layer & False, True \\ \hline Pooling type & Average, Max \\ \hline Pool size & 2 to 50 in a step of 1 \\ \hline Pool stride & 2 to 50 in a step of 1 \\ \hline Num. dense neurons & 1 to 20 in a step of 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ranges for CNN genome hyperparameters in the NASCTY algorithm.
The remaining half, on the other hand, is constructed by randomly choosing pairs of those parents to which the crossover and mutation operations are applied. Optionally, we may apply tournament selection only to some proportion of the top-performing members of the population to select parents. This operation is performed to ensure the best genomes are maintained in the population, a concept known as elitism, which can be tuned with a _truncation proportion_ parameter.
Our algorithm uses one of two possible types of crossover, i.e., either _one-point_ crossover or _parameter-wise_ crossover, both of which are common crossover strategies in genetic algorithms. The performance of these two methods is evaluated in Section 5.2. To enact a one-point crossover with two parents, we apply one-point crossover separately on the parents' lists of convolutional blocks and dense layers. For a list of either convolutional blocks or dense layers, we achieve this operation by picking a random cutoff point in both parents' lists of that layer type. The first child's list of that type is then created by connecting the first parent's list before its cutoff point to the second parent's list after its cutoff point, while the second child's list is created by connecting the remaining units. The one-point crossover operation is then finalized by randomly dividing the parents' optional pooling layers that are present in the absence of convolutional blocks among the offspring.
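A sketch of this one-point crossover on the list-based genomes from the earlier sketches; the random division of the optional standalone pooling layers is omitted:

```python
import random

def one_point_crossover(list_a, list_b):
    """One-point crossover on two parents' lists of layers of one type.
    Each parent gets its own random cutoff; child 1 joins parent A's prefix
    to parent B's suffix, child 2 joins the remaining units."""
    cut_a = random.randint(0, len(list_a))
    cut_b = random.randint(0, len(list_b))
    child1 = list_a[:cut_a] + list_b[cut_b:]
    child2 = list_b[:cut_b] + list_a[cut_a:]
    return child1, child2

def crossover(parent1, parent2):
    """Apply one-point crossover separately per layer type, as described above."""
    c1, c2 = {}, {}
    for key in ("conv_blocks", "dense_layers"):
        c1[key], c2[key] = one_point_crossover(parent1[key], parent2[key])
    return c1, c2
```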
In our implementation of parameter-wise crossover, the first child genome is created by iterating over the parents' pairs of convolutional blocks and randomly inheriting convolutional block genes from either parent. The second child's genome then inherits the remaining hyperparameters for these blocks. The exact process is repeated for the parents' lists of dense layers. In the typical scenario where one parent has more convolutional blocks or dense layers than the other parent, the excess units are appended unmodified to the first child's genome.

Figure 2: Visualisation of one-point crossover on a list of convolutional blocks or dense layers in a NASCTY genome.
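A corresponding sketch of parameter-wise crossover on two parents' lists of convolutional blocks; the dense-layer lists (plain neuron counts in our representation) are treated analogously:

```python
import copy
import random

def parameter_wise_crossover(blocks_a, blocks_b):
    """Gene-wise inheritance over paired blocks; child 2 gets the complement.
    Excess units of the longer parent are appended unmodified to child 1."""
    child1, child2 = [], []
    for ba, bb in zip(blocks_a, blocks_b):
        c1, c2 = {}, {}
        for key in ba:  # inherit each hyperparameter (gene) independently
            if random.random() < 0.5:
                c1[key], c2[key] = ba[key], bb[key]
            else:
                c1[key], c2[key] = bb[key], ba[key]
        child1.append(c1)
        child2.append(c2)
    n_common = min(len(blocks_a), len(blocks_b))
    longer = blocks_a if len(blocks_a) > len(blocks_b) else blocks_b
    child1.extend(copy.deepcopy(longer[n_common:]))
    return child1, child2
```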
After the crossover operation, the offspring are mutated through one of the following methods with a uniform probability:
* Adding one random convolutional block or dense layer with randomly initialized hyperparameters;
* Removing one random convolutional block or dense layer;
* Modifying all hyperparameters through _polynomial mutation_ with probability \(\frac{1}{n}\), where \(n\) is the total number of modifiable hyperparameters in the genome.
The polynomial mutation is described in Section 2.3 and is designed for variables with predefined minimum and maximum boundaries, which fits our task of exploring proper hyperparameter values within predefined ranges. This mutation method comes with a parameter \(\eta\), which can be increased to reduce the degree to which a given parameter is modified. The value for \(\eta\) is recommended to be between 20 and 100 [5], though experimentation is the only way to determine a proper value of \(\eta\) for an untested scenario.
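For reference, the basic form of polynomial mutation for a single bounded variable could look as follows; the clipping step and the integer rounding for discrete hyperparameters are our own simplifications of the boundary-aware variant:

```python
import random

def polynomial_mutation(x, low, high, eta=20.0):
    """Deb's polynomial mutation (basic form) for a bounded variable.
    Larger eta -> smaller perturbations."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    x_new = x + delta * (high - low)
    return min(max(x_new, low), high)  # clip back into [low, high]

# Each of the n modifiable hyperparameters is mutated with probability 1/n, e.g.:
# if random.random() < 1.0 / n_genes:
#     block["filters"] = round(polynomial_mutation(block["filters"], 2, 128))
```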
## 5 Experiments
In this section, we discuss the experimental evaluation of NASCTY. We start by describing the setup of our experiments. Then, we show the outcome of the preliminary tuning phase based on grid search. Finally, we present the results obtained by GA with the best-performing parameter combination on masked and desynchronized traces of ASCAD.
### Experimental Setup
For the experimental validation of our approach, we optimize GA parameters through a grid search, then evaluate the performance on the masked ASCAD traces and masked and desynchronized ASCAD traces for several desynchronization levels. The objective of these experiments is to determine:
* The effectiveness of GA parameters for our approach;
* Whether our automated approach can produce NNs that outperform similar NNs found through trial and error;
* Architecture components that contribute to the effectiveness of an SCA.
To account for the randomness introduced by the mutation operations, we run five experiments for each GA parameter configuration and report the best results. The best genome resulting from the NASCTY algorithm is evaluated by training its corresponding NN for 50 epochs and computing the mean incremental key rank [18] over 100 folds.
All experiments are executed with 52 parallel workers, each of which runs at approximately 2.1GHz on an Intel E5-2683 v4 CPU. With these computational resources,
the discussed experiments required 84GB of RAM and took between four and seven days to complete. This significant variance in runtime is caused by the pseudorandom nature of GAs, which results in the construction and evaluation of NNs of varying sizes.
### Parameter Tuning by Grid Search
Table 2 shows a summary of the GA parameters for the grid search. Note that additional parameter options, mutation strategies, and crossover strategies could potentially result in better performance, but such adjustments would have to significantly diverge from our current strategy to assess the general effectiveness of the algorithm. We run each grid search experiment with a population size of 52 to match the number of available parallel workers and run the GA for ten generations. Furthermore, each of these experiments uses the same training data, validation data, and initial population to observe the impact of the parameter changes more accurately.
We expect lower values of the polynomial mutation parameter \(\eta\) [4] to be the better choice in these experiments, where we run for relatively few generations. More specifically, smaller values of \(\eta\) cause larger mutations, which carry the potential to find better networks faster, but a value of 20 for \(\eta\) is still large enough to avoid unreasonably risky mutations [5]. For similar reasons, we expect a larger truncation proportion to result in better performance, as it preserves diversity in an already-small population. However, its influence is likely not as significant as that of the chosen crossover and mutation configurations since those can modify the population more directly. Finally, we expect either crossover strategy to perform well since both allow the algorithm to find effective architectures in the predefined hyperparameter ranges. The performance of the final network of each parameter combination's best run is shown in Figure 3.
Figure 3a implies that each parameter combination is capable of producing fit architectures for the considered ASCAD traces. Similarly, the best runs' fitness plots over generations shown in Figure 3b demonstrate that the best genome's CCE, i.e., the validation loss, can improve significantly in as few as ten generations, regardless of the parameter combination under consideration.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Parameter** & **Options** \\ \hline
Polynomial mutation \(\eta\) & 20, 40 \\ \hline
Crossover type & One-point, parameter-wise \\ \hline
Truncation proportion & 0.5, 1.0 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The parameter values we consider during grid search for NASCTY.

The best mean incremental key rank among all grid search experiments was approximately 0.50419 and resulted from the experiments with a polynomial mutation \(\eta\) value of 20, one-point crossover, and a truncation proportion of 1.0. Those parameters are therefore applied for all further experiments. To determine each GA parameter's influence on the final performance, we observe the effect of modifying one variable at a time while keeping the others constant at the aforementioned best-observed values. Table 3 displays how such modifications affect the mean incremental key rank. From the table, we can infer that, among the parameters we considered, only the crossover strategy significantly affects the final performance; one-point crossover is preferred over parameter-wise crossover. Since the best runs using parameter-wise crossover in Figure 3 still perform well, the performance difference likely results from poorer consistency compared to runs using the one-point crossover. We suspect that the additional consistency observed with one-point crossover is achieved through its advantage in retaining functional sequences of convolutional blocks or dense layers. In addition, one-point crossover on lists of layers intuitively provides synergy with our mutation strategy of adding or removing an entire layer because effective additions or removals can be identified more quickly when they are separated into offspring in a modular fashion.
### ASCAD: Masked and Desynchronized
All remaining experiments are run with the best-performing parameter options found through our grid search experiments. In addition, we run these experiments with a population size of 100 to fully exploit the resources at our disposal. In contrast to our grid search experiments, these experiments do not use a seed for the pseudorandom numbers involved anywhere in the GA except for the fitness evaluation procedure, in which we use a seed for the training of each NN to ensure the genomes are fairly compared.
We first run NASCTY on masked ASCAD traces for 75 generations to evaluate the algorithm's general effectiveness. Then, we run experiments on the same masked dataset, further protected with desynchronization as described in Section 2.1.1. Specifically, we run three sets of experiments with desynchronization levels of 10, 30, and 50, respectively. With this approach, we aim to determine whether NASCTY can circumvent or mitigate countermeasures without additional algorithm modifications and whether larger desynchronization levels hinder NASCTY's ability to find good architectures.

Figure 3: NASCTY grid search results corresponding to the best network obtained with each parameter combination.

\begin{table}
\begin{tabular}{l l l l} \hline \hline
\(\eta\) & **Crossover type** & **Truncation proportion** & **Mean incremental key rank** \\ \hline
20 & One-point & 1.0 & 0.50419 \\ \hline
40 & One-point & 1.0 & 0.50880 \\ \hline
20 & Parameter-wise & 1.0 & 0.89354 \\ \hline
20 & One-point & 0.5 & 0.50966 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: The effect of each parameter on the final incremental key rank in NASCTY grid search experiments.
The fitness progression trends in Figure 3b indicate that more generations would improve the observed fitness of the best genome. Following this, we first ran NASCTY on masked ASCAD traces for 75 generations with a population size of 100. Figure 4 shows the results of this first experiment. As shown in Figure 4a, the best-obtained network converges smoothly. Ultimately, the network breaks the target in 314 attack traces and achieves a mean incremental key rank of 0.51857. The best fitness value progresses continually after ten generations have passed (Figure 4b), then stagnates well before the seventy-fifth generation is reached. Moreover, despite the difference in the population size and the number of generations, the best network is outperformed by several of the NNs obtained with our grid search experiments. This observed fitness stagnation implies that the algorithm may be prone to getting stuck in local optima. Typically, the mutation is the source of global search in a GA, so we recommend that future work evaluate lower values of \(\eta\) for polynomial mutation or possibly more perturbing mutation strategies.
Given the fitness progression observed in the previous experiment, we ran NASCTY on the masked and desynchronized ASCAD traces for 50 generations instead of 75. The results for desynchronization levels 10, 30, and 50 are displayed in Figure 5. As shown in Figure 5a, the time-randomness introduced by desynchronization affects the algorithm's performance, considering both fitness progress and final performance are noticeably diminished as the desynchronization level increases. Still, the results show that NASCTY can find effective architectures despite the added countermeasures, with the networks evaluated in Figure 5b obtaining key rank 0 in 338, 474, and 531 traces for desynchronization levels 10, 30, and 50, respectively.
## 6 Discussion
The best run on the synchronized ASCAD traces produced the CNN architecture shown in Figure 6a. It has 10 470 trainable parameters and vaguely resembles the efficient CNN proposed by Zaid et al. [28]. In comparison, the architecture produced with NASCTY has an additional dense layer and possesses several unintuitive components, such as 27 convolutional filters of size 45 and a pool stride that exceeds the pool size. Since the efficient MLP proposed by Wouters et al. [24] only required two layers of ten neurons each, we surmise that NASCTY does not sufficiently discourage redundant model complexity. Indeed, increasing the desynchronization level from 30 to 50 decreases the number of trainable parameters of the best architecture from 90 379 to 68 427. A possible solution would be to introduce a model size penalty in the fitness function. Still, NASCTY in its current configuration is sufficient for generating good network architectures. Table 4 shows that NASCTY can compete with other state-of-the-art automated hyperparameter tuning methods for SCA, although it is ultimately outperformed.

Figure 4: NASCTY results corresponding to the best network obtained on masked ASCAD traces.
The architecture corresponding to a desynchronization level of 10 is similar to the architecture for synchronized traces in both structure and size, with the main difference being the addition of more convolutional filters and another dense layer. Both architectures also feature a pooling layer with a stride that exceeds its size, suggesting their convolutional layers have produced features that are either redundant or incorrectly utilized.
The NASCTY architectures for the two more severe desynchronization levels (30 and 50) provide more insight into NASCTY's way of mitigating this countermeasure. As can be seen in Figures 6c and 6d, both of these architectures start with a convolutional layer with over 100 filters, a significantly larger number than that of the other architectures' convolutional layers. In addition, both feature two convolutional blocks and two dense layers, with the first dense layer in each network having two neurons. The larger number of filters is consistent with existing approaches to mitigate desynchronization, but the usage of such small dense layers is uncommon when attempting to break protected ASCAD traces. Finally, average pooling appears to be the preferred pooling type in these networks, with max pooling only occurring once. We emphasize that our architecture for a desynchronization level of 50 outperforms the approach of [18] and performs very similarly to the state-of-the-art methods proposed in [28, 24].

\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Method** & **Num. traces to obtain mean key rank 0** & **Num. trainable parameters** \\ \hline
RL-SCA [18] & 242 & 1 282 \\ \hline
AutoSCA [25] & 158 & 54 752 \\ \hline
NASCTY & 314 & 10 470 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Comparison of automated hyperparameter tuning methods for SCA on synchronized, masked ASCAD traces.

Figure 5: NASCTY results for ASCAD traces at desynchronization levels 10, 30, and 50.
## 7 Conclusions and Future Work
This paper proposed a genetic algorithm for network architecture search in the SCA domain. In NASCTY, each genome encodes the hyperparameters representing a CNN's architecture, and a genome's fitness is evaluated by the validation loss. During offspring production, we apply either one-point crossover on parents' lists of layers or parameter-wise crossover to create a pair of child genomes that we immediately mutate by adding a layer, removing a layer, or applying polynomial mutation on its genes. With this approach, NASCTY could produce architectures comparable to state-of-the-art techniques. The redundant complexity and unintuitive architecture components found in some NASCTY networks suggest that our method can likely be improved further, implying unexplored potential to match or surpass current state-of-the-art approaches.

Figure 6: Best architectures produced by NASCTY for masked ASCAD traces protected with several desynchronization levels.
Furthermore, NASCTY found effective architectures for traces protected with masking and desynchronization levels up to 50 while keeping its GA parameters and implementation largely unmodified. However, the desynchronization did affect the final networks' performances: networks produced by NASCTY for desynchronization levels 0, 10, 30, and 50 obtained key rank 0 within 314, 338, 474, and 531 attack traces, respectively. We recommend that future work evaluate NASCTY's effectiveness and resulting architecture patterns on traces protected with other countermeasures. The observed network architectures showed that NASCTY tends to combat desynchronization by adding a convolutional layer and increasing the number of filters in the first convolutional layer, which was the case in architectures generated for desynchronization levels 30 and 50. Interestingly, these networks started their fully-connected part with a dense layer of two neurons. Additionally, some architectures contained pooling layers whose stride was larger than their size, which is uncommon in other approaches and suggests that NASCTY may be generating redundant features through unnecessarily large numbers of convolutional filters. The architectures NASCTY generated for synchronized and mildly desynchronized traces also came with more trainable parameters than models from related work that achieve better performance on the same task [24], corroborating the hypothesis that NASCTY is prone to adding redundant model complexity. Regardless of the presence of desynchronization, average pooling was preferred to max-pooling in nearly all pooling layers.
For future work, it would be interesting to explore the application of a complexity penalty to fitness evaluation or population initialization with minimal architectures to reduce the network size. Furthermore, we recommend experimenting with more mutation parameters to stimulate a better global search to avoid getting stuck in a local optimum, which often seems to occur well before the algorithm terminates. Once those drawbacks have been resolved, we may find better, more interesting architectures by expanding the search space. To do so, we suggest introducing new hyperparameters to the genome, e.g., activation functions for each layer and the learning rate for more general applications. Finally, our approach considers the ASCAD dataset with a fixed key. It will be interesting to see how well our approach works for other, more difficult datasets, like ASCAD with random keys.
|
2303.09280 | Topology optimization with physics-informed neural networks: application
to noninvasive detection of hidden geometries | Detecting hidden geometrical structures from surface measurements under
electromagnetic, acoustic, or mechanical loading is the goal of noninvasive
imaging techniques in medical and industrial applications. Solving the inverse
problem can be challenging due to the unknown topology and geometry, the
sparsity of the data, and the complexity of the physical laws. Physics-informed
neural networks (PINNs) have shown promise as a simple-yet-powerful tool for
problem inversion, but they have yet to be applied to general problems with a
priori unknown topology. Here, we introduce a topology optimization framework
based on PINNs that solves geometry detection problems without prior knowledge
of the number or types of shapes. We allow for arbitrary solution topology by
representing the geometry using a material density field that approaches binary
values thanks to a novel eikonal regularization. We validate our framework by
detecting the number, locations, and shapes of hidden voids and inclusions in
linear and nonlinear elastic bodies using measurements of outer surface
displacement from a single mechanical loading experiment. Our methodology opens
a pathway for PINNs to solve various engineering problems targeting geometry
optimization. | Saviz Mowlavi, Ken Kamrin | 2023-03-13T12:44:32Z | http://arxiv.org/abs/2303.09280v2 | # Topology optimization with physics-informed neural networks: application to noninvasive detection of hidden geometries
###### Abstract
Detecting hidden geometrical structures from surface measurements under electromagnetic, acoustic, or mechanical loading is the goal of noninvasive imaging techniques in medical and industrial applications. Solving the inverse problem can be challenging due to the unknown topology and geometry, the sparsity of the data, and the complexity of the physical laws. Physics-informed neural networks (PINNs) have shown promise as a simple-yet-powerful tool for problem inversion, but they have yet to be applied to general problems with a priori unknown topology. Here, we introduce a topology optimization framework based on PINNs that solves geometry detection problems without prior knowledge of the number or types of shapes. We allow for arbitrary solution topology by representing the geometry using a material density field that approaches binary values thanks to a novel eikonal regularization. We validate our framework by detecting the number, locations, and shapes of hidden voids and inclusions in linear and nonlinear elastic bodies using measurements of outer surface displacement from a single mechanical loading experiment. Our methodology opens a pathway for PINNs to solve various engineering problems targeting geometry optimization.
Noninvasive detection of hidden geometries is desirable in countless applications including medical imaging and diagnosis [13], nondestructive evaluation of materials [28], and mine detection [49]. The goal is to infer the locations and shapes of structures hidden inside a matrix from surface measurements of the response to an applied load such as a magnetic field [19; 22], an electric current [2], or a mechanical traction [9]. Identifying these internal boundaries from the measured data constitutes a challenging inverse problem due to the unknown topology, the large number of parameters required to describe arbitrary geometries [27], the potential sparsity of the data, and the complexity of the underlying physical laws, which usually take the form of linear or nonlinear partial differential equations (PDEs) [18].
In recent years, physics-informed neural networks (PINNs) have emerged as a robust tool for problem inversion across disciplines and over a range of model complexity [16; 33; 45]. PINNs' ability to seamlessly blend measurement data or design objectives with governing PDEs in nontrivial geometries has enabled practitioners to solve easily a range of inverse problems involving identification or design of unknown properties in fields ranging from mechanics to optics and medicine [11; 25; 36; 42; 46; 50]. Encouraged by these early successes, we introduce in this paper a general topology optimization (TO) framework to solve noninvasive geometry detection problems using PINNs, leveraging both the measurements and the governing PDEs. Building on the strength of PINNs, our approach is straightforward to implement regardless of the complexity of the physical model, produces accurate results using measurement data from a single experiment, and does not require a training dataset. To the best of our knowledge, the present work marks the first time that PINNs have been applied to problems involving _a priori_ unknown topology and geometry.
Classical approaches to solve geometry identification inverse problems tend to be complex as they combine traditional numerical solvers such as the finite-element or boundary-element method, adjoint techniques to evaluate the sensitivity of the error residual with respect to the shape or the topology, and gradient descent-based optimization algorithms to update the geometry at every iteration [4; 7; 18; 20; 24]. Furthermore, these techniques call for carefully chosen regularization schemes and do not always yield satisfactory results, especially in the presence of sparse measurements acquired by only one or a few sets of experiments. For example, in the mechanical loading case where voids and inclusions in an elastic body are to be identified from measurements of surface displacement in response to a prescribed traction, past studies limit themselves to simple shapes like squares and circles or fail to find the right number of shapes [5; 9; 34; 39; 40]. Attempts to apply PINNs to geometry detection are in their infancy, so far restricted to cases where the number and shape-type of hidden voids or inclusions in an elastic structure are provided in advance [56].
Our PINN-based TO framework does not require any prior knowledge on the number and types of shapes. We allow for arbitrary solution topology by representing the geometry using a material density field equal to 0 in one phase and 1 in the other. The material density is parameterized through a neural network, which needs to be regularized in order to push the material density towards 0 or 1 values. Thus, one key ingredient in our framework is a novel eikonal regularization, inspired from fast-marching level-set methods [1; 43] and neural signed distance functions [23], that promotes a constant thickness of the interface region where the material density transitions between 0 and 1, leading to well-defined boundaries throughout the domain. This eikonal regularization enters as an additional term in the standard PINN loss, which is then used to train the neural networks underlying the material density and physical quantities to yield a solution to the geometry detection problem. As an illustration, we apply our framework to cases involving an elastic body under mechanical loading, and discover the topology, locations, and shapes of hidden structures for a variety of geometries and materials.
## Results
### Problem formulation
We consider noninvasive geometry detection problems of the following form. Suppose we have a continuous body \(\mathcal{B}\) containing an unknown number of hidden voids or inclusions, with unknown shapes and at unknown locations within the body. The material properties are assumed to be known and homogeneous within the body and the inclusions. We then apply a certain type of loading (e.g. mechanical, thermal, acoustic, etc) on the body's external boundary \(\partial\mathcal{B}^{\mathrm{ext}}\), which produces a response within the body that can be described by a set of \(n\) physical quantities (e.g. displacements, stresses, temperature, etc). These physical quantities can be lumped into a vector field \(\mathbf{\psi}:\mathcal{B}\rightarrow\mathbb{R}^{n}\) and satisfy a known set of governing PDEs. The goal of the inverse problem is to identify the number, locations, and shapes of the voids or inclusions based on measurements along \(\partial\mathcal{B}^{\mathrm{ext}}\) of some of the physical quantities contained in \(\mathbf{\psi}\).
As a concrete example, we consider two prototypical plane-strain elasticity inverse problems. In the first case, a square elastic matrix with hidden voids or inclusions is pulled by a uniform traction \(P_{o}\) on two sides (Fig. 1a). The goal of the inverse problem is to identify the number, locations, and shapes of the voids or inclusions using discrete measurements of the displacement of the outer boundary of the matrix. In the second case, an elastic layer on top of a hidden rigid substrate is compressed from the top by a uniform pressure \(P_{o}\), with periodic lateral boundary conditions (Fig. 1b). The goal is to identify the shape of the substrate using discrete measurements of the displacement of the top surface. For both cases, the constitutive properties of all materials are assumed to be known. We will consider two different types of constitutive laws: compressible linear elasticity, which characterizes the small deformation of any compressible elastic material, and incompressible nonlinear hyperelasticity, which models the large deformation of rubber-like materials. In the linear elastic case, there exists a unique solution to the inverse problem (see proof in Supplementary Information), making it well-suited to evaluating the accuracy of our TO framework.
Following density-based TO methods [51], we avoid any restriction on the number and shapes of hidden structures by parameterizing the geometry of the elastic body \(\mathcal{B}\) through a discrete-valued material density function \(\rho:\Omega\rightarrow\{0,1\}\), where \(\Omega\) is a global domain comprising both \(\mathcal{B}\) and the hidden voids or inclusions. The material density is defined to be equal to \(1\) in the elastic body \(\mathcal{B}\) and \(0\) in the voids or inclusions. The physical quantities \(\mathbf{\psi}\) can then be extended to the global domain \(\Omega\) by introducing an explicit \(\rho\)-dependence in their governing PDEs, leading to equations of the form
\[\mathbf{r}(\mathbf{\psi}(\mathbf{x}),\rho(\mathbf{x})) =0,\quad\mathbf{x}\in\Omega, \tag{1a}\] \[\mathbf{b}(\mathbf{\psi}(\mathbf{x})) =0,\quad\mathbf{x}\in\partial\Omega, \tag{1b}\]
with known boundary conditions defined solely on the external boundary \(\partial\Omega=\partial\mathcal{B}^{\mathrm{ext}}\). Note that the residual functions \(\mathbf{r}\) and \(\mathbf{b}\) may contain partial derivatives of \(\mathbf{\psi}\) and \(\rho\). The inverse problem is now to find the distribution of material density \(\rho\) in \(\Omega\) so that the corresponding solution for \(\mathbf{\psi}\) matches surface measurements \(\mathbf{\psi}_{i}^{m}\) at discrete locations \(\mathbf{x}_{i}\in\partial\Omega^{m}\subset\partial\Omega\), that is,
\[\mathbf{\psi}(\mathbf{x}_{i})=\mathbf{\psi}_{i}^{m},\quad\mathbf{x}_{i}\in\partial \Omega^{m}. \tag{2}\]
In practice, we might only measure select quantities in \(\mathbf{\psi}\) at some of the locations, but we do not write so explicitly to avoid overloading the notation.
For the linear elasticity problem that we consider as an example, \(\mathbf{\psi}=(\mathbf{u},\mathbf{\sigma})\) where \(\mathbf{u}(\mathbf{x})\) and \(\mathbf{\sigma}(\mathbf{x})\) are displacement and stress fields, respectively, and the governing equations comprise equilibrium relations \(\sum_{j}\partial\sigma_{ij}/\partial x_{j}=0\) and a constitutive law \(F(\mathbf{\sigma},\nabla\mathbf{u},\rho)=0\), both defined over \(\Omega\). The presence of \(\rho\) in the constitutive law specifies different material behaviors for the elastic solid phase and the void or rigid inclusion phase. The applied boundary conditions take the form \(\mathbf{u}=\bar{\mathbf{u}}\) on \(\partial\Omega_{u}\) and \(\mathbf{\sigma}\mathbf{n}=\bar{\mathbf{t}}\) on \(\partial\Omega_{t}\), where \(\partial\Omega_{u}\) and \(\partial\Omega_{t}\) are partitions of the external boundary with applied displacements and applied tractions, respectively, and \(\mathbf{n}\) is the outward unit normal. In the case of the elastic layer, the outer boundary also comprises a portion \(\partial\Omega_{p}\) with periodic boundary conditions on the displacement and traction. Finally, the requirement that the predictions for \(\mathbf{u}\) at the surface match the measurement data is expressed as \(\mathbf{u}(\mathbf{x}_{i})=\mathbf{u}_{i}^{m}\), \(\mathbf{x}_{i}\in\partial\Omega^{m}\). See Methods for a detailed formulation of the governing equations and boundary conditions for all considered cases.

Figure 1: **Setup of two geometry identification problems in elastic bodies under mechanical loading.****a**, A square elastic matrix with hidden voids or inclusions is pulled by a known uniform traction on two opposite sides. The goal is to identify the number, locations, and shapes of the voids or inclusions within using measurements of the displacement occurring along the outer boundary of the matrix. **b**, An elastic layer on top of a hidden rigid substrate is compressed from the top by a uniform pressure. The goal is to identify the shape of the substrate using measurements of the displacement of the top surface.
Similar to density-based TO methods [51], we relax the binary constraint on the material density by allowing intermediate values of \(\rho\) between 0 and 1. This renders the problem amenable to gradient-based optimization, which underpins the PINN-based TO framework that we introduce in the next section. However, the challenge is to find an appropriate regularization mechanism that drives the optimized material distribution towards 0 and 1 rather than intermediate values devoid of physical meaning. As we will show in the discussion, common strategies employed in TO [18; 51] do not yield satisfactory results in our PINN-based framework for geometry detection problems. Thus, we have developed a novel eikonal regularization scheme inspired from level-set methods and signed distance functions, which we will describe after presenting the general framework.
### General framework
We propose a TO framework based on PINNs for solving noninvasive geometry detection problems (Fig. 2). At the core of the framework are several deep neural networks that approximate the physical quantities \(\mathbf{\psi}(\mathbf{x})\) describing the problem and the material density \(\rho(\mathbf{x})\). For the physical quantities, each neural network maps the spatial location \(\mathbf{x}=(x_{1},x_{2})\) to one of the variables in \(\mathbf{\psi}=(\psi_{1},\cdots,\psi_{n})\); this can be expressed as \(\psi_{i}=\bar{\psi}_{i}(\mathbf{x};\mathbf{\theta}_{i})\) where \(\bar{\psi}_{i}\) is the map defined by the \(i\)th neural network and its trainable parameters \(\mathbf{\theta}_{i}\) (see Methods). For the material distribution, we first define a neural network with trainable parameters \(\mathbf{\theta}_{\phi}\) that maps \(\mathbf{x}\) to a scalar variable \(\phi=\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi})\). A sigmoid function is then applied to \(\phi\) to yield \(\rho=\text{sigmoid}(\phi/\delta)=\text{sigmoid}(\bar{\phi}(\mathbf{x};\mathbf{ \theta}_{\phi})/\delta)\), which we simply write as \(\rho=\bar{\rho}(\mathbf{x},\mathbf{\theta}_{\phi})\). This construction ensures that the material density \(\rho\) remains between 0 and 1, and \(\delta\) is a transition length scale that we will comment on later. We define the phase transition to occur at \(\rho=0.5\) so that the zero level-set of \(\phi\) delineates the boundary between the two material phases, hence \(\phi\) is hereafter referred to as a level-set function [43; 44].
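As an illustration, a level-set network and the resulting material density could be set up as follows in PyTorch; the MLP width, depth, and tanh activations are our own assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class LevelSetNet(nn.Module):
    """MLP mapping points x = (x1, x2) to a scalar level-set value phi (sketch)."""
    def __init__(self, width=64, depth=4):
        super().__init__()
        layers, d_in = [], 2
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.Tanh()]
            d_in = width
        layers.append(nn.Linear(width, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):          # x: (N, 2) -> phi: (N,)
        return self.net(x).squeeze(-1)

delta = 0.01                       # transition length scale (placeholder value)
phi_net = LevelSetNet()

def material_density(x):
    # rho = sigmoid(phi / delta) stays in (0, 1); rho = 0.5 marks the interface
    return torch.sigmoid(phi_net(x) / delta)
```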
We now seek the parameters \(\mathbf{\theta}_{\mathbf{\psi}}=\{\mathbf{\theta}_{1},\dots,\mathbf{\theta}_{n}\}\) and \(\mathbf{\theta}_{\phi}\) so that the neural network approximations for \(\mathbf{\psi}(\mathbf{x})\) and \(\rho(\mathbf{x})\) satisfy the governing equations (1a) and applied boundary conditions (1b) while matching the surface measurements (2). This is achieved by constructing a loss function of the form
\[\mathcal{L}(\mathbf{\theta}_{\mathbf{\psi}},\mathbf{\theta}_{\phi})=\lambda_{\text{meas}}\mathcal{L}_{\text{meas}}(\mathbf{\theta}_{\mathbf{\psi}})+\lambda_{\text{gov}}\mathcal{L}_{\text{gov}}(\mathbf{\theta}_{\mathbf{\psi}},\mathbf{\theta}_{\phi})+\lambda_{\text{reg}}\mathcal{L}_{\text{eik}}(\mathbf{\theta}_{\phi}), \tag{3}\]
where \(\mathcal{L}_{\text{meas}}\) and \(\mathcal{L}_{\text{gov}}\) measure the degree to which the neural network approximations do not satisfy the measurements and governing equations, respectively, \(\mathcal{L}_{\text{eik}}\) is a crucial regularization term that drives \(\rho\) towards 0 or 1 values and that we will explain below, and the \(\lambda\)'s are scalar weights. The measurement loss takes the form
\[\mathcal{L}_{\text{meas}}(\mathbf{\theta}_{\mathbf{\psi}})=\frac{1}{|\partial\Omega^{ m}|}\sum_{\mathbf{x}_{i}\in\partial\Omega^{m}}|\bar{\mathbf{\psi}}(\mathbf{x}_{i}; \mathbf{\theta}_{\mathbf{\psi}})-\mathbf{\psi}_{i}^{m}|^{2}, \tag{4}\]
where \(|\partial\Omega^{m}|\) denotes the size of the set \(\partial\Omega^{m}\). A trivial modification of this expression is necessary in the case where only select quantities in \(\mathbf{\psi}\) are measured. The governing equations loss takes the form
\[\mathcal{L}_{\text{gov}}(\mathbf{\theta}_{\mathbf{\psi}},\mathbf{\theta}_{\phi})=\frac{1}{| \Omega^{d}|}\sum_{\mathbf{x}_{i}\in\Omega^{d}}|\mathbf{r}(\bar{\mathbf{\psi}}( \mathbf{x}_{i};\mathbf{\theta}_{\mathbf{\psi}}),\bar{\rho}(\mathbf{x}_{i};\mathbf{\theta}_ {\phi}))|^{2}, \tag{5}\]
where \(\Omega^{d}\) is a set of collocation points in \(\Omega\), and we use automatic differentiation to calculate in a mesh-free fashion the spatial derivatives contained in \(\mathbf{r}\). We design the architecture of our neural networks in such a way that they inherently satisfy the boundary conditions (see Methods, section "Detailed PINNs formulation").
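A sketch of how the measurement and governing-equation losses of Eqs. (4) and (5) could be assembled in PyTorch; `psi_net`, `rho_fn`, `residual_fn`, and the unit weights are placeholders for the problem-specific networks, the density map (e.g. `material_density` above), the PDE residual, and tuned weights:

```python
import torch

lam_meas, lam_gov, lam_reg = 1.0, 1.0, 1.0   # placeholder weights

def measurement_loss(psi_net, x_meas, psi_meas):
    """Mean squared mismatch with the surface measurements, Eq. (4)."""
    return ((psi_net(x_meas) - psi_meas) ** 2).sum(dim=-1).mean()

def governing_loss(residual_fn, psi_net, rho_fn, x_colloc):
    """Mean squared PDE residual at collocation points, Eq. (5). Spatial
    derivatives inside `residual_fn` are taken with torch.autograd (mesh-free)."""
    x = x_colloc.detach().clone().requires_grad_(True)
    r = residual_fn(psi_net(x), rho_fn(x), x)   # residual r(psi, rho)
    return (r ** 2).sum(dim=-1).mean()

# Total loss of Eq. (3), with the eikonal term sketched further below:
# loss = (lam_meas * measurement_loss(psi_net, x_meas, psi_meas)
#         + lam_gov * governing_loss(residual_fn, psi_net, rho_fn, x_colloc)
#         + lam_reg * eikonal_loss(phi_net, x_colloc))
```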
Finally, the optimal parameters \(\mathbf{\theta}_{\mathbf{\psi}}^{*}\) and \(\mathbf{\theta}_{\phi}^{*}\) that solve the problem can be obtained by training the neural networks to minimize the loss (3) using stochastic gradient descent-based optimization. The corresponding physical quantities \(\bar{\mathbf{\psi}}(\mathbf{x};\mathbf{\theta}_{\mathbf{\psi}}^{*})\) will match the discrete surface measurements while satisfying the governing equations of the problem, while the corresponding material density \(\bar{\rho}(\mathbf{x};\mathbf{\theta}_{\phi}^{*})\) will reveal the number, locations, and shapes of the hidden voids or inclusions.
### Material density regularization
We now describe the key ingredient that ensures the success of our framework. As mentioned above, the main challenge is to promote the material density \(\rho(\mathbf{x})\) to converge towards 0 or 1 away from the material phase boundaries, given by the zero level-set \(\phi=0\). Moreover, we desire the thickness of the transition region along these boundaries, where \(\rho\) goes from 0 to 1, to be uniform everywhere in order to ensure consistency of physical laws across the interface (e.g. stress jumps).
To visualize what happens in the absence of regularization, consider a random instance of the neural network
\(\phi=\bar{\phi}(\mathbf{x},\mathbf{\theta}_{\phi})\) (Fig. 3a, left) and the corresponding material distribution \(\rho=\text{sigmoid}(\phi/\delta)\) with \(\delta=0.01\) (Fig. 3a, center). The sigmoid transformation ensures that \(\rho\) never drops below 0 or exceeds 1, leading to large regions corresponding to one phase or the other. However, the thickness of the transition region where \(\rho\) goes from 0 to 1 is not everywhere uniform, resulting in large zones where \(\rho\) assumes nonphysical values between 0 and 1 (Fig. 3a, center). This behavior stems from the non-uniformity of the gradient norm \(|\nabla\phi|\) along the material boundaries \(\phi=0\), with small and large values of \(|\nabla\phi|\) leading to wide and narrow transition regions, respectively (Fig. 3a, right).
We propose to regularize the material density by forcing the gradient norm \(|\nabla\phi|\) to be unity in a narrow band \(\Omega_{\text{eik}}\) of width \(w\) along the material boundaries defined by the zero level-set \(\phi=0\). In this way, \(\phi\) becomes a signed distance function to the material boundary in the narrow band, thereby constraining the gradient of \(\rho\) to be constant along the interface. To ensure that the narrow band covers the near-entirety of the transition region where \(\rho\) goes from 0 to 1, we choose \(w=10\delta\) so that \(\rho=\text{sigmoid}(\pm w/2\delta)=\text{sigmoid}(\pm 5)\simeq 0\) or 1 along the edge of the narrow band. To illustrate the effect of such regularization, we consider the previous random instance of the neural network \(\phi=\bar{\phi}(\mathbf{x},\mathbf{\theta}_{\phi})\) and enforce the constraint \(|\nabla\phi|=1\) in the narrow band \(\Omega_{\text{eik}}\) along its zero level-set (Fig. 3b, left and right). The zero level-set is kept fixed to facilitate comparison with the un-regularized case (Fig. 3a). With \(\phi\) now behaving like a signed-distance function in the narrow band, a uniform transition thickness for \(\rho\) along all material boundaries is achieved, without large regions of intermediate density values (Fig. 3b, center).
In practice, we implement this regularization into our PINN-based TO framework by including an 'eikonal' loss term \(\mathcal{L}_{\text{eik}}\) in (3), which takes the form
\[\mathcal{L}_{\text{eik}}(\mathbf{\theta}_{\phi})=\frac{1}{|\Omega_{\text{eik}}^{ d}|}\sum_{\mathbf{x}_{i}\in\Omega_{\text{eik}}^{d}}\left(|\nabla\phi(\mathbf{x}_{i} )|-1\right)^{2}, \tag{6}\]
where \(\Omega_{\text{eik}}^{d}=\{\mathbf{x}_{i}\in\Omega^{d}:|\phi(\mathbf{x}_{i})|<w /2\}\). The aim of this term is to penalize deviations away from the constraint \(|\nabla\phi|=1\) in the narrow band \(\Omega_{\text{eik}}\) of width \(w\) along the interface defined by the zero level-set \(\phi=0\). Because finding the subset of collocation points \(\mathbf{x}_{i}\) in \(\Omega^{d}\) belonging to the true narrow band of width \(w\) at every step of the training process would be too expensive, we instead relax the domain over which the constraint \(|\nabla\phi|=1\) is active by utilizing the subset \(\Omega_{\text{eik}}^{d}\) of collocation points that satisfy \(|\phi(\mathbf{x}_{i})|<w/2\). As the constraint \(|\nabla\phi|=1\) is progressively better satisfied during the training process, \(\Omega_{\text{eik}}^{d}\) will eventually overlap the true narrow band of width \(w\) along the zero level-set of \(\phi\) (Fig. 3c).
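A sketch of the eikonal loss of Eq. (6) in PyTorch, assuming `phi_net` maps a batch of points of shape (N, 2) to level-set values of shape (N,), as in the earlier sketch:

```python
import torch

def eikonal_loss(phi_net, x_colloc, delta=0.01):
    """Penalize deviations of |grad phi| from 1 on collocation points inside
    the relaxed narrow band |phi| < w/2, with w = 10*delta (Eq. (6))."""
    w = 10.0 * delta
    x = x_colloc.detach().clone().requires_grad_(True)
    phi = phi_net(x)
    grad_phi, = torch.autograd.grad(phi.sum(), x, create_graph=True)
    grad_norm = grad_phi.norm(dim=-1)
    in_band = (phi.abs() < w / 2.0).detach()   # Omega_eik^d, the relaxed band
    if not in_band.any():                      # no points in the band yet
        return phi.sum() * 0.0
    return ((grad_norm[in_band] - 1.0) ** 2).mean()
```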
Since the constraint \(|\nabla\phi|=1\) in the narrow band takes the form of an eikonal equation, we call this approach eikonal regularization. We emphasize that in contrast to recent works training neural networks to solve the eikonal equation [23], our eikonal regularization does not force \(\phi\) to vanish on a predefined boundary. Rather, the zero level-set of \(\phi\) evolves freely during the training process in such a way that the corresponding material distribution \(\rho=\text{sigmoid}(\phi/\delta)\) and physical quantities \(\mathbf{\psi}\) minimize the total loss (3), eventually revealing the material boundaries delineating the hidden voids or inclusions.

Figure 2: **TO framework for noninvasive detection of hidden geometries.** The geometry of the system, which is initially unknown, is parameterized by a material density field given through a level-set function and equal to 1 in the elastic body and 0 in the voids or inclusions. The level-set function and the physical quantities describing the problem are approximated with deep neural networks designed to inherently satisfy the applied boundary conditions. These neural networks are then trained to minimize a loss function that drives the material density and physical quantities towards satisfying the governing equation of the problem while matching discrete surface measurements. A crucial eikonal regularization term in the loss function ensures that the material density transitions between 0 and 1 over a prescribed length scale and avoids settling on intermediate values. By the end of the optimization, the converged material distribution reveals the location and shapes of the hidden structures.
### Setup of numerical experiments
We evaluate our TO framework on a range of challenging test cases involving different numbers and shapes of hidden structures and various materials (Methods, Tabs. 1 and 2). As a substitute for real experiments, we use the finite-element method (FEM) software Abaqus to compute the deformed shape of the boundary of the elastic structure and generate the measurement data for each case (Methods, section "FEM simulations"). Using this measurement data, we run our TO framework to discover the number, locations, and shapes of the hidden voids or rigid inclusions (for implementation and training details, see Methods, section "Architecture and training details"). We then compare the obtained results with the ground truth -- the voids or inclusions originally fed into Abaqus -- to assess the efficacy of our framework.
### Elastic matrix experiments
We first apply our framework to cases involving a linear elastic matrix (Fig. 1a) containing voids (cases 1, 3, 8, 10, 15, 17 in Methods, Tab. 1). As the various loss components are minimized during training (Fig. 4a), the material density \(\rho\) evolves and splits in a way that progressively reveals the number, locations, and shapes of the hidden voids (Extended Data Fig. 1 and Supplementary Movie 1), without advance knowledge of their topology. By the end of the training, the transition regions where the material density goes from 0 to 1 have uniform thickness along all internal boundaries (Fig. 4d), thanks to the eikonal regularization that encourages the level-set gradient \(\nabla\phi\) to have unit norm in a band along the material boundaries \(\phi=0\) (Fig. 4b,c). The agreement between the final inferred shapes and the ground truth is remarkable, with our framework able to recover intricate details such as the three lobes and the concave surfaces of the star-shaped void (Fig. 4d, second from left), or the exact aspect ratio and location of a thin slit (Fig. 4d, third from left). The stress and strain fields of the deformed matrix are also obtained as a byproduct of the solution process (Extended Data Fig. 2). The only case that is not completely identified is the U-shaped void (Fig. 4d, first from right), a result of the minuscule influence of the inner lobe on the outer surface displacements due to its low level of strain and stress (Extended Data Fig. 8). Finally, our framework maintains accurate results when reducing the number of surface measurement points or restricting measurements to a few surfaces (Extended Data Figs. 5-7).
Next, we consider cases involving linear elastic and rigid inclusions in the linear elastic matrix (cases 4, 5, 6, 11, 12, 13 in Methods, Tab. 1). Our framework successfully identifies the inclusions in almost all cases (Fig. 4e). Inferred displacements and stresses of the deformed matrix (Extended Data Fig. 3) confirm the intuition that voids or soft inclusions soften the matrix while stiff or rigid inclusions harden the matrix. The U-shaped soft and stiff elastic inclusions (Fig. 4e, second and third from right) are better detected by the framework than their void or rigid counterparts (Fig. 4d, first from right and Fig. 4e, first from right), since an elastic inclusion induces some strain and stress on the inner lobe (Extended Data Fig. 8).

Figure 3: **Eikonal regularization of the material density.****a**, A random level-set function \(\phi\) yields a material density \(\rho=\text{sigmoid}(\phi/\delta)\) with large regions of values between 0 and 1, due to the nonuniformity of the gradient \(|\nabla\phi|\) along the material boundaries defined by the zero level-set of \(\phi\) (black lines). **b**, Constraining \(\phi\) to solve the eikonal equation \(|\nabla\phi|=1\) in a narrow band \(\Omega_{\text{eik}}\) of thickness \(w\) (edges depicted by dashed lines) along the material boundaries results in a uniform transition thickness of \(\rho\) from 0 to 1, without large regions of intermediate density values. **c**, The loss \(\mathcal{L}_{\text{eik}}\) implements the eikonal regularization in the PINN-based TO framework by penalizing deviations away from the constraint \(|\nabla\phi|=1\) on a subset of collocation points \(\Omega_{\text{eik}}^{d}\subset\Omega^{d}\) that approximates the true narrow band \(\Omega_{\text{eik}}\).

Figure 4: **Identification of voids and inclusions in elastic matrices.** A linear elastic matrix containing voids (**a-d**): **a**, The various loss components that enforce the solution to match the surface measurement data, satisfy the governing equations, and obey the eikonal regularization, are being minimized during the training process. **b,c**, The final level-set function \(\phi\) and its gradient magnitude \(|\nabla\phi|\) show the effect of the eikonal regularization, making \(\phi\) a signed distance function in a narrow band along the interface. **d**, The final material density \(\rho\) reveals the number, locations, and shapes of the hidden voids, which are compared with the ground truth shown in dotted white lines. **e**, The final material density predictions in the case of a linear elastic matrix containing soft, stiff or rigid inclusions. **f**, The final material density predictions in the case of a nonlinear hyperelastic matrix containing voids subject to large stretches.
Finally, we consider cases involving a soft, incompressible Neo-Hookean hyperelastic matrix with the same void shapes considered previously (cases 2, 7, 9, 14, 16, 18 in Methods, Tab. 1). The geometries are identified equally well (Fig. 4f) in this large deformation regime (Extended Data Fig. 4) as with linear elastic materials, which illustrates the ability of the framework to cope with nonlinear governing equations without any added complexity in the formulation or the implementation.
### Elastic layer experiments
We finally apply our framework to the periodic elastic layer (Fig. 1b), where a linear elastic material covers a hidden rigid substrate (cases 20, 21, 22 in Methods, Tab. 2).
Contrary to the matrix problem, this setup only provides access to measurements on the top surface, and the hidden geometry to be discovered is not completely surrounded by the elastic material. Our TO framework is nevertheless able to detect the correct depths and shapes of the hidden substrates (Fig. 5). This example demonstrates the versatility of the framework in adapting to various problem setups.
## Discussion
As with any TO method relying on a material density field to parameterize the geometry, the success of our PINN-based framework hinges on the presence of an appropriate regularization mechanism to penalize intermediate density values. Although we have shown that our novel eikonal regularization leads to consistently accurate results, other regularization approaches have been employed in classical adjoint-based TO methods [51, 18]. These include the total variation diminishing (TVD) regularization [39, 10] that penalizes the \(L_{1}\) norm of the density gradient \(\nabla\rho\), the explicit penalization regularization [3] that penalizes the integral over the domain of \(\rho(1-\rho)\), and the Solid Isotropic Material with Penalization (SIMP) approach [7] that relates material properties such as the shear modulus and the material density through a power-law with exponent \(p\). The latter is the most popular regularization mechanism in structural optimization [8]. However, when implemented in our PINN-based framework for the detection of hidden geometries, these methods yield inferior results to the eikonal regularization (Fig. 6). Indeed, we compare all four approaches on a challenging test case involving a linear elastic rectangular matrix pulled from the top and bottom and containing soft inclusions in the shape of the letters M, I, and T (case 19, Tab. 1). The measurements consist of the displacement along the outer boundary, similar to the previous square matrix examples. We consider different values of the regularization weight \(\lambda_{\text{reg}}\) (for the eikonal, TVD and explicit penalization regularizations) and the exponent \(p\) (for the SIMP regularization), and solve the inverse problem using four random initializations of the neural networks in each case. Not only was the eikonal regularization the only one to find the right shapes, it did so over three orders of magnitude of \(\lambda_{\text{reg}}\), demonstrating a desirable robustness with respect to \(\lambda_{\text{reg}}\) (Fig. 6).
Thanks to the flexibility of the PINN framework, adapting our approach to various scenarios involving partial information, three-dimensional geometries, or other types of noninvasive imaging experiments (e.g. using thermal [6], acoustic [14], electric [12], or magnetic [37] loading) should be straightforward. As an illustration, we identify a hidden inclusion in a nonlinearly conducting matrix using partially unknown thermal loading (Supplementary Information), mimicking an inaccessible surface. This experiment reveals our method's ability to generate good results without further modifications even when the forward problem is ill-posed, which would require including the unknown boundary condition as an additional optimization variable in a classical adjoint-based approach.

Figure 5: **Identification of substrate shape underneath a periodic linear elastic layer.****a**, The various loss components that enforce the solution to match the surface measurement data, satisfy the governing equations, and obey the eikonal regularization, are being minimized during the training process. **b,c**, The final level-set function \(\phi\) and its gradient magnitude \(|\nabla\phi|\) show the effect of the eikonal regularization, which makes \(\phi\) a signed distance function in a narrow band along the material boundary. **d**, The final material density \(\rho\) reveals the shape of the buried rigid substrate.
In conclusion, we have presented a PINN-based TO framework with a novel eikonal regularization, which we have applied to the noninvasive detection of hidden inclusions. By representing the geometry through a material density field combined with a novel eikonal regularization, our framework is able to discover the number, shapes and locations of hidden structures, without any prior knowledge required regarding the number or the types of shapes to expect. Finally, the idea of parameterizing geometries of arbitrary topologies with a material density field regularized with the eikonal constraint opens a pathway for PINNs to be applied to a wide range of design optimization problems constrained by physical governing equations. These include, for instance, the design of lenses that achieve targeted optical properties [38, 41] or the design of structures and metamaterials that exhibit desirable mechanical, acoustic, or thermal properties [32, 8, 30].
Figure 6: **Comparison between eikonal regularization and alternative regularizations.****a**, The eikonal regularization achieves a high IoU (intersection over union, a geometry detection accuracy metric equal to 1 in the perfect case) above 0.95 for any value of the regularization weight \(\lambda_{\text{reg}}\) within a range spanning three orders of magnitude. The results are consistent over 4 random initializations of the neural network parameters, with the circles reporting the average value and the shade reporting the highest and lowest values. By contrast, the total variation diminishing (TVD), explicit penalization, and Solid Isotropic Material with Penalization (SIMP) regularizations never exceed an IoU of 0.93, with larger variability among realizations. **b**, The final material density \(\rho\) obtained with each regularization mechanism for various values of the regularization weight \(\lambda_{\text{reg}}\) or exponent \(p\) (shown in **a** by the shaded areas) demonstrates the efficacy of the eikonal regularization. The evolution of the solution during training using the eikonal regularization and \(\lambda_{\text{reg}}=1\) is shown in Supplementary Movie 2.
## Methods
### Governing equations
The two plane-strain elasticity inverse problems considered in this study (Fig. 1) are defined in a two-dimensional domain \(\Omega\subset\mathbb{R}^{2}\) formed by the union of the elastic body \(\mathcal{B}\) and the voids or inclusions. Denoting with \(\mathbf{x}=(x_{1},x_{2})\in\Omega\) the planar spatial coordinates, the hidden geometrical layout of voids or inclusions is characterized by a material density \(\rho(\mathbf{x})\) equal to \(1\) in the body \(\mathcal{B}\) and \(0\) in the voids or inclusions.
#### Small-deformation linear elasticity
We first consider the case where the elastic body and inclusions consist of linear elastic materials, with Young's modulus \(E\) and Poisson's ratio \(\nu\) for the body, and Young's modulus \(\bar{E}\) and Poisson's ratio \(\bar{\nu}\) for the inclusions. Voids and rigid inclusions correspond to the limits \(\bar{E}\to 0\) and \(\bar{E}\to\infty\), respectively. The deformation of the elastic body containing the inclusions is described by a vector field \(\mathbf{\psi}(\mathbf{x})=(\mathbf{u}(\mathbf{x}),\mathbf{\sigma}(\mathbf{x}))\), where \(\mathbf{u}(\mathbf{x})\) is a planar displacement field with components \(u_{i}(\mathbf{x})\) and \(\mathbf{\sigma}(\mathbf{x})\) is a Cauchy stress tensor with components \(\sigma_{ij}(\mathbf{x})\). Indices \(i\) and \(j\) will hereafter always range from \(1\) to \(2\).
The governing PDEs comprise the equilibrium equations
\[\sum_{j}\frac{\partial\sigma_{ij}}{\partial x_{j}}=0,\quad\mathbf{x}\in\Omega, \tag{7}\]
as well as a linear elastic constitutive law \(F(\mathbf{\sigma},\nabla\mathbf{u},\rho)=0\) that we will express in two different but equivalent ways, depending on whether the inclusions are softer or stiffer than the matrix. For voids and soft inclusions, we consider the constitutive law in stress-strain form,
\[\mathbf{\sigma}=\rho\left[\lambda\,\mathrm{tr}(\mathbf{\epsilon})\,\mathbf{I}+2\mu\,\mathbf{\epsilon}\right]+(1-\rho)\left[\bar{\lambda}\,\mathrm{tr}(\mathbf{\epsilon})\,\mathbf{I}+2\bar{\mu}\,\mathbf{\epsilon}\right],\quad\mathbf{x}\in\Omega, \tag{8}\]
where \(\mathbf{\epsilon}=(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})/2\) is the infinitesimal strain tensor, \(\mathrm{tr}(\mathbf{\epsilon})\) denotes its trace, \(\lambda=E\nu/(1+\nu)(1-2\nu)\) and \(\mu=E/2(1+\nu)\) are the Lamé constants of the body, and \(\bar{\lambda}=\bar{E}\bar{\nu}/(1+\bar{\nu})(1-2\bar{\nu})\) and \(\bar{\mu}=\bar{E}/2(1+\bar{\nu})\) are the Lamé constants of the inclusions. Notice that in the case of voids, the stress vanishes in the \(\rho=0\) regions. For stiff and rigid inclusions, we consider the constitutive law in the inverted strain-stress form
\[\mathbf{\epsilon}=\rho\left[\frac{1+\nu}{E}\mathbf{\sigma}-\frac{\nu(1+\nu)}{E}\,\mathrm{tr}(\mathbf{\sigma})\,\mathbf{I}\right]+(1-\rho)\left[\frac{1+\bar{\nu}}{\bar{E}}\mathbf{\sigma}-\frac{\bar{\nu}(1+\bar{\nu})}{\bar{E}}\,\mathrm{tr}(\mathbf{\sigma})\,\mathbf{I}\right],\quad\mathbf{x}\in\Omega, \tag{9}\]
where \(\mathrm{tr}(\mathbf{\sigma})\) is the trace of the stress tensor. This relation differs from the three-dimensional one due to the plane strain assumption. Notice that in the case of rigid inclusions, the strain vanishes in the \(\rho=0\) regions.
The boundary conditions on \(\partial\Omega\) and surface displacement measurement locations \(\partial\Omega^{m}\) are different in the two problems. For the elastic matrix (Fig. 1a), the domain is \(\Omega=[-0.5,0.5]\times[-0.5,0.5]\) and the boundary conditions are
\[\mathbf{\sigma}(\mathbf{x})\mathbf{n}(\mathbf{x}) =-P_{o}\mathbf{e}_{1}, \mathbf{x}\in\{-0.5,0.5\}\times[-0.5,0.5], \tag{10a}\] \[\mathbf{\sigma}(\mathbf{x})\mathbf{n}(\mathbf{x}) =\mathbf{0}, \mathbf{x}\in[-0.5,0.5]\times\{-0.5,0.5\}. \tag{10b}\]
The measurement locations \(\partial\Omega^{m}\) are distributed along the entire external boundary \(\partial\Omega\). In the case of the M, I, T inclusions (case 19, Tab. 1), the boundary conditions (10) are changed to account for the fact that the matrix is pulled from the top and bottom boundaries and covers the domain \(\Omega=[-1,1]\times[-0.5,0.5]\). For the elastic layer (Fig. 1b), the domain is \(\Omega=[0,1]\times[-0.5,0]\) and the boundary conditions are
\[\mathbf{\sigma}(\mathbf{x})\mathbf{n}(\mathbf{x}) =-P_{o}\mathbf{e}_{2}, \mathbf{x}\in[0,1]\times\{0\}, \tag{11a}\] \[\mathbf{u} =\mathbf{0}, \mathbf{x}\in[0,1]\times\{-0.5\}, \tag{11b}\]
as well as periodic for the displacement and traction on \(\mathbf{x}\in\{0,1\}\times[-0.5,0]\). The measurement locations \(\partial\Omega^{m}\) are distributed along the top surface \(\partial\Omega_{t}=[0,1]\times\{0\}\).
The geometry identification problem that we solve can then be stated as follows. Given surface displacement measurements \(\mathbf{u}_{i}^{m}\) at locations \(\mathbf{x}_{i}\in\partial\Omega^{m}\), find the distribution of material density \(\rho\) in \(\Omega\) such that the difference between the predicted and measured surface displacements vanish, that is,
\[\mathbf{u}(\mathbf{x}_{i})=\mathbf{u}_{i}^{m},\quad\mathbf{x}_{i}\in\partial \Omega^{m}. \tag{12}\]
The predicted displacement field must satisfy the equilibrium equation (7), the constitutive relation (8) or (9), and the boundary conditions (10) or (11).
#### Large-deformation nonlinear hyperelasticity
Next, we consider the case where the elastic body consists of an incompressible Neo-Hookean hyperelastic material with shear modulus \(\mu\). We now have to distinguish between the reference (undeformed) and current (deformed) configurations. We denote by \(\mathbf{x}=(x_{1},x_{2})\in\Omega\) and \(\mathbf{y}=(y_{1},y_{2})\in\Omega^{*}\) the coordinates in the reference and deformed configurations, respectively, with \(\Omega^{*}\) the deformed image of \(\Omega\). The displacement field \(\mathbf{u}(\mathbf{x})\) with components \(u_{i}(\mathbf{x})\) moves an initial position \(\mathbf{x}\in\Omega\) into its current location \(\mathbf{y}=\mathbf{x}+\mathbf{u}(\mathbf{x})\in\Omega^{*}\). In order to formulate the governing equations and boundary conditions in the reference configuration \(\Omega\), we need to introduce the first Piola-Kirchhoff stress tensor \(\mathbf{S}(\mathbf{x})\) with components \(S_{ij}(\mathbf{x})\). Unlike the Cauchy stress tensor, the first Piola-Kirchhoff stress tensor is defined in \(\Omega\) and is not
symmetric. The deformation of the elastic body is then described by the vector field \(\mathbf{\psi}(\mathbf{x})=(\mathbf{u}(\mathbf{x}),\mathbf{S}(\mathbf{x}),p(\mathbf{x}))\) defined over \(\Omega\), where \(p(\mathbf{x})\) is a pressure field that serves to enforce the incompressibility constraint.
The equilibrium equations are
\[\sum_{j}\frac{\partial S_{ij}}{\partial x_{j}}=0,\quad\mathbf{x}\in\Omega, \tag{13}\]
where the derivatives in \(\nabla_{\mathbf{x}}\) are taken with respect to the reference coordinates \(\mathbf{x}\). We only consider the presence of voids so that the nonlinear constitutive law \(F(\mathbf{S},\nabla_{\mathbf{x}}\mathbf{u},p,\rho)=0\) is simply expressed as
\[\mathbf{S}=\rho\left[-p\mathbf{F}^{-T}+\mu\mathbf{F}\right],\quad\mathbf{x}\in\Omega, \tag{14}\]
where \(\mathbf{F}(\mathbf{x})=\mathbf{I}+\nabla_{\mathbf{x}}\mathbf{u}(\mathbf{x})\) is the deformation gradient tensor. Notice that the stress vanishes in the \(\rho=0\) regions. Finally, we have the incompressibility constraint
\[\rho\left[\det(\mathbf{F})-1\right]=0,\quad\mathbf{x}\in\Omega, \tag{15}\]
which turns itself off in the \(\rho=0\) regions since voids do not deform in a way that preserves volume.
We only treat the matrix problem (Fig. 1a) in this hyperelastic case. The domain is \(\Omega=[-0.5,0.5]\times[-0.5,0.5]\) and the boundary conditions are
\[\mathbf{S}(\mathbf{x})\mathbf{n}_{0}(\mathbf{x}) =-P_{o}\mathbf{e}_{1}, \mathbf{x}\in\{-0.5,0.5\}\times[-0.5,0.5], \tag{16a}\] \[\mathbf{S}(\mathbf{x})\mathbf{n}_{0}(\mathbf{x}) =\mathbf{0}, \mathbf{x}\in[-0.5,0.5]\times\{-0.5,0.5\}. \tag{16b}\]
As in the linear elastic case, the measurement locations \(\partial\Omega^{m}\) are distributed along the entire external boundary \(\partial\Omega\).
The geometry identification problem can then be stated identically as in the linear elastic case. This time, the predicted displacement field must satisfy the equilibrium equation (13), the constitutive relation (14), the incompressibility condition (15), and the boundary conditions (16).
#### Rescaling
The various physical quantities involved in the elasticity inverse problem span a wide range of scales; for instance, displacements may be orders of magnitude smaller than the length scale associated with the geometry. Thus, we rescale all physical quantities into nondimensional values of order one, as also done in [29]. Lengths are rescaled with the width \(L\) of the elastic matrix or elastic layer, tractions and stresses with the magnitude \(P_{o}\) of the applied traction at the boundaries, and displacements with the ratio \(LP_{o}/E\), where \(E\) is the Young's modulus of the elastic material (in the hyperelastic case, we use the equivalent Young's modulus \(E=3\mu\), where \(\mu\) is the shear modulus of the hyperelastic material). This rescaling is critical to enable the neural networks underlying our framework to handle elasticity problems across a wide range of material moduli and applied loads.
### Solution methodology
Here, we describe in detail the application of our TO framework to the solution of the two plane-strain elasticity inverse problems formulated in the introduction. We will treat separately the small-deformation linear elasticity case and the large-deformation hyperelasticity case.
#### Small-deformation linear elasticity
Since the problem is described by the physical quantities \(\mathbf{\psi}=(u_{1},u_{2},\sigma_{11},\sigma_{22},\sigma_{12})\), we introduce the neural network approximations
\[u_{1}(\mathbf{x}) =\bar{u}_{1}(\mathbf{x};\mathbf{\theta}_{1}), \tag{17a}\] \[u_{2}(\mathbf{x}) =\bar{u}_{2}(\mathbf{x};\mathbf{\theta}_{2}),\] (17b) \[\sigma_{11}(\mathbf{x}) =\bar{\sigma}_{11}(\mathbf{x};\mathbf{\theta}_{3}),\] (17c) \[\sigma_{22}(\mathbf{x}) =\bar{\sigma}_{22}(\mathbf{x};\mathbf{\theta}_{4}),\] (17d) \[\sigma_{12}(\mathbf{x}) =\bar{\sigma}_{12}(\mathbf{x};\mathbf{\theta}_{5}),\] (17e) \[\phi(\mathbf{x}) =\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi}). \tag{17f}\]
The last equation represents the level-set neural network, which defines the material density as \(\rho(\mathbf{x})=\bar{\rho}(\mathbf{x};\mathbf{\theta}_{\phi})=\text{sigmoid}(\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi})/\delta)\). We then formulate the loss function (3) by specializing the loss term expressions presented in the results section to the linear elasticity problem. Omitting the \(\mathbf{\theta}\)'s for notational simplicity, we obtain
\[\mathcal{L}_{\text{meas}}(\mathbf{\theta}_{\mathbf{\psi}}) =\frac{1}{|\partial\Omega^{m}|}\sum_{\mathbf{x}_{i}\in\partial \Omega^{m}}|\bar{\mathbf{u}}(\mathbf{x}_{i})-\mathbf{u}_{i}^{m}|^{2}, \tag{18a}\] \[\mathcal{L}_{\text{gov}}(\mathbf{\theta}_{\mathbf{\psi}},\mathbf{\theta}_{ \phi}) =\frac{1}{|\Omega^{d}|}\sum_{\mathbf{x}_{i}\in\Omega^{d}}|\mathbf{r }_{\text{eq}}(\bar{\mathbf{\sigma}}(\mathbf{x}_{i}))|^{2}\] \[\quad+\frac{1}{|\Omega^{d}|}\sum_{\mathbf{x}_{i}\in\Omega^{d}}| \mathbf{r}_{\text{cr}}(\bar{\mathbf{u}}(\mathbf{x}_{i}),\bar{\mathbf{\sigma}}( \mathbf{x}_{i}),\bar{\rho}(\mathbf{x}_{i}))|^{2},\] (18b) \[\mathcal{L}_{\text{eik}}(\mathbf{\theta}_{\phi}) =\frac{1}{|\Omega^{d}_{\text{eik}}|}\sum_{\mathbf{x}_{i}\in \Omega^{d}_{\text{eik}}}\left(|\nabla\bar{\phi}(\mathbf{x}_{i})|-1\right)^{2}, \tag{18c}\]
where \(\bar{\mathbf{u}}=(\bar{u}_{1},\bar{u}_{2})\) and \(\bar{\mathbf{\sigma}}\) has components \(\bar{\sigma}_{i,j}\), \(i,j=1,2\). In (18b), the terms \(\mathbf{r}_{\text{eq}}\) and \(\mathbf{r}_{\text{cr}}\) refer to the residuals of the equilibrium equation (7) and the constitutive relation (8) or (9). The eikonal loss term is problem-independent and therefore identical to (6).
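To make the structure of these loss terms concrete, the following is a minimal sketch in TensorFlow 2, which the paper reports using for training. All names (`nets`, `lam`, `mu`, `delta`) are illustrative assumptions rather than the authors' code, the constitutive residual is specialized to the void case \(\bar{E}\to 0\), and the eikonal term is evaluated on the same collocation points for brevity.

```python
import tensorflow as tf

def elasticity_losses(nets, x_meas, u_meas, x_col, lam, mu, delta):
    """Loss terms (18a)-(18c), specialized to the void case (E_bar -> 0)."""
    def fields(x):
        u1, u2 = nets["u1"](x), nets["u2"](x)
        s11, s22, s12 = nets["s11"](x), nets["s22"](x), nets["s12"](x)
        rho = tf.sigmoid(nets["phi"](x) / delta)   # material density
        return u1, u2, s11, s22, s12, rho

    # Measurement loss (18a): misfit at the surface measurement points
    u1m, u2m, *_ = fields(x_meas)
    L_meas = tf.reduce_mean((u1m - u_meas[:, :1])**2 + (u2m - u_meas[:, 1:])**2)

    # Governing-equation loss (18b): equilibrium (7) + constitutive (8) residuals
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x_col)
        u1, u2, s11, s22, s12, rho = fields(x_col)
    du1 = tape.gradient(u1, x_col)      # (N, 2): [du1/dx, du1/dy]
    du2 = tape.gradient(u2, x_col)
    ds11 = tape.gradient(s11, x_col)
    ds22 = tape.gradient(s22, x_col)
    ds12 = tape.gradient(s12, x_col)
    e11, e22 = du1[:, :1], du2[:, 1:]
    e12 = 0.5 * (du1[:, 1:] + du2[:, :1])
    tr_e = e11 + e22
    r_eq1 = ds11[:, :1] + ds12[:, 1:]                       # eq. (7), i = 1
    r_eq2 = ds12[:, :1] + ds22[:, 1:]                       # eq. (7), i = 2
    r_cr11 = s11 - rho * (lam * tr_e + 2.0 * mu * e11)      # eq. (8), voids
    r_cr22 = s22 - rho * (lam * tr_e + 2.0 * mu * e22)
    r_cr12 = s12 - rho * (2.0 * mu * e12)
    L_gov = tf.reduce_mean(r_eq1**2 + r_eq2**2) \
          + tf.reduce_mean(r_cr11**2 + r_cr22**2 + r_cr12**2)
    del tape

    # Eikonal loss (18c) on the level-set network
    with tf.GradientTape() as tape2:
        tape2.watch(x_col)
        phi = nets["phi"](x_col)
    dphi = tape2.gradient(phi, x_col)
    L_eik = tf.reduce_mean((tf.norm(dphi, axis=1) - 1.0)**2)
    return L_meas, L_gov, L_eik
```

The total loss (3) then combines these three terms with the weights \(\lambda_{\text{meas}}\), \(\lambda_{\text{gov}}\), and \(\lambda_{\text{reg}}\) reported below.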
We note that instead of defining neural network approximations for the displacements and the stresses, we could define neural network approximations solely for the displacements, that is, \(\mathbf{\psi}=(u_{1},u_{2})\). In this case, the loss term (18b) would only include the residual of the equilibrium equation (7), in which the stress components would
be directly expressed in terms of the displacements and the material distribution using the constitutive relation (8). However, several recent studies [21, 25, 26, 29, 47, 48] have shown that the mixed formulation adopted in the present work results in superior accuracy and training performance, which could partly be explained by the fact that only first-order derivatives of the neural network outputs are involved, since the displacements and stresses are only differentiated to first order in (7) and (8). In our case, the mixed formulation holds the additional advantage that it enables us to treat stiff and rigid inclusions using the inverted constitutive relation (9) instead of (8). Finally, the mixed formulation allows us to directly integrate both displacement and traction boundary conditions into the output of the neural network approximations, as we describe in the next paragraph.
We design the architecture of the neural networks in such a way that they inherently satisfy the boundary conditions, treating the latter as hard constraints [17, 53]. For the elastic matrix, we do this through the transformations
\[\bar{u}_{1}(\mathbf{x};\mathbf{\theta}_{1}) =\bar{u}^{\prime}_{1}(\mathbf{x};\mathbf{\theta}_{1}), \tag{19a}\] \[\bar{u}_{2}(\mathbf{x};\mathbf{\theta}_{2}) =\bar{u}^{\prime}_{2}(\mathbf{x};\mathbf{\theta}_{2}),\] (19b) \[\bar{\sigma}_{11}(\mathbf{x};\mathbf{\theta}_{3}) =(x-0.5)(x+0.5)\,\bar{\sigma}^{\prime}_{11}(\mathbf{x};\mathbf{\theta }_{3})+P_{o},\] (19c) \[\bar{\sigma}_{22}(\mathbf{x};\mathbf{\theta}_{4}) =(y-0.5)(y+0.5)\,\bar{\sigma}^{\prime}_{22}(\mathbf{x};\mathbf{\theta }_{4}),\] (19d) \[\bar{\sigma}_{12}(\mathbf{x};\mathbf{\theta}_{5}) =(x-0.5)(x+0.5)\cdot\] \[(y-0.5)(y+0.5)\,\bar{\sigma}^{\prime}_{12}(\mathbf{x};\mathbf{\theta }_{5}),\] (19e) \[\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi}) =(x-0.5)(x+0.5)\cdot\] \[(y-0.5)(y+0.5)\,\bar{\phi}^{\prime}(\mathbf{x};\mathbf{\theta}_{\phi} )+w, \tag{19f}\]
where the quantities with a prime denote the raw output of the neural network. In this way, the neural network approximations defined in (17) obey by construction the boundary conditions (10). Further, since we know that the elastic material is present all along the outer surface \(\partial\Omega\), we define \(\bar{\phi}\) so that \(\phi=w\) on \(\partial\Omega\), which ensures that \(\rho=\text{sigmoid}(\phi/\delta)\simeq 1\) on \(\partial\Omega\) (recall that \(w\) is such that \(\text{sigmoid}(w/2\delta)\simeq 1\)). In the case of the M, I, T inclusions, these transformations are changed to reflect the fact that the matrix is wider and pulled from the top and bottom. For the periodic elastic layer, we introduce the transformations
\[\bar{u}_{1}(\mathbf{x};\mathbf{\theta}_{1}) =(y+0.5)\,\bar{u}^{\prime}_{1}(\cos x,\sin x,y;\mathbf{\theta}_{1}), \tag{20a}\] \[\bar{u}_{2}(\mathbf{x};\mathbf{\theta}_{2}) =(y+0.5)\,\bar{u}^{\prime}_{2}(\cos x,\sin x,y;\mathbf{\theta}_{2}),\] (20b) \[\bar{\sigma}_{11}(\mathbf{x};\mathbf{\theta}_{3}) =\bar{\sigma}^{\prime}_{11}(\cos x,\sin x,y;\mathbf{\theta}_{3}),\] (20c) \[\bar{\sigma}_{22}(\mathbf{x};\mathbf{\theta}_{4}) =y\,\bar{\sigma}^{\prime}_{22}(\cos x,\sin x,y;\mathbf{\theta}_{4})-P _{o},\] (20d) \[\bar{\sigma}_{12}(\mathbf{x};\mathbf{\theta}_{5}) =y\,\bar{\sigma}^{\prime}_{12}(\cos x,\sin x,y;\mathbf{\theta}_{5}),\] (20e) \[\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi}) =y(y+0.5)\,\bar{\phi}^{\prime}(\cos x,\sin x,y;\mathbf{\theta}_{\phi})\] \[\quad+w(4y+1), \tag{20f}\]
so that the neural network approximations defined in (17) obey by construction the boundary conditions (11) and are periodic along the \(x\) direction. Further, since we know that the elastic material is present all along the top surface \(y=0\) and the rigid substrate is present all along the bottom surface \(y=-0.5\), we define \(\bar{\phi}\) so that \(\phi=w\) for \(y=0\) and \(\phi=-w\) for \(y=-0.5\), which ensures that \(\rho=\text{sigmoid}(\phi/\delta)\simeq 1\) for \(y=0\) and \(\rho\simeq 0\) for \(y=-0.5\).
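As an illustration, the hard-constraint transformations (19) for the elastic matrix amount to a thin wrapper around the raw network outputs. The sketch below uses our own illustrative names (`raw`, `P_o`, `w`) and is not the authors' implementation.

```python
# A minimal sketch of the transformations (19): `raw` maps field names to the
# raw network outputs at x; P_o and w are the load magnitude and level-set
# offset defined in the text.
def transform_matrix_outputs(x, raw, P_o, w):
    xc, yc = x[:, :1], x[:, 1:]            # coordinates in [-0.5, 0.5]^2
    gx = (xc - 0.5) * (xc + 0.5)           # vanishes on x = -0.5 and x = 0.5
    gy = (yc - 0.5) * (yc + 0.5)           # vanishes on y = -0.5 and y = 0.5
    u1 = raw["u1"]                         # eqs. (19a)-(19b): unconstrained
    u2 = raw["u2"]
    s11 = gx * raw["s11"] + P_o            # eq. (19c)
    s22 = gy * raw["s22"]                  # eq. (19d)
    s12 = gx * gy * raw["s12"]             # eq. (19e)
    phi = gx * gy * raw["phi"] + w         # eq. (19f): phi = w on the boundary
    return u1, u2, s11, s22, s12, phi
```

Because the polynomial prefactors vanish exactly on the relevant boundaries, the boundary conditions hold for any values of the raw outputs, so no boundary loss term is needed.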
#### Large-deformation hyperelasticity
The problem is now described by the physical quantities \(\mathbf{\psi}=(u_{1},u_{2},S_{11},S_{22},S_{12},S_{21},p)\). We therefore introduce the neural network approximations
\[u_{1}(\mathbf{x}) =\bar{u}_{1}(\mathbf{x};\mathbf{\theta}_{1}), \tag{21a}\] \[u_{2}(\mathbf{x}) =\bar{u}_{2}(\mathbf{x};\mathbf{\theta}_{2}),\] (21b) \[S_{11}(\mathbf{x}) =\bar{S}_{11}(\mathbf{x};\mathbf{\theta}_{3}),\] (21c) \[S_{22}(\mathbf{x}) =\bar{S}_{22}(\mathbf{x};\mathbf{\theta}_{4}),\] (21d) \[S_{12}(\mathbf{x}) =\bar{S}_{12}(\mathbf{x};\mathbf{\theta}_{5}),\] (21e) \[S_{21}(\mathbf{x}) =\bar{S}_{21}(\mathbf{x};\mathbf{\theta}_{6}),\] (21f) \[p(\mathbf{x}) =\bar{p}(\mathbf{x};\mathbf{\theta}_{7}),\] (21g) \[\phi(\mathbf{x}) =\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi}), \tag{21h}\]
and the material distribution is given by \(\rho(\mathbf{x})=\bar{\rho}(\mathbf{x};\mathbf{\theta}_{\phi})=\text{sigmoid}(\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi})/\delta)\). We then formulate the loss function (3) by specializing the loss term expressions presented in the results section to the hyperelasticity problem, using the governing equations (13)-(15). Omitting the \(\mathbf{\theta}\)'s for notational simplicity, we obtain
\[\mathcal{L}_{\text{meas}}(\mathbf{\theta}_{\mathbf{\psi}}) =\frac{1}{|\partial\Omega^{m}|}\sum_{\mathbf{x}_{i}\in\partial \Omega^{m}}|\bar{\mathbf{u}}(\mathbf{x}_{i})-\mathbf{u}_{i}^{m}|^{2}, \tag{22a}\] \[\mathcal{L}_{\text{gov}}(\mathbf{\theta}_{\mathbf{\psi}},\mathbf{\theta}_{ \phi}) =\frac{1}{|\Omega^{d}|}\sum_{\mathbf{x}_{i}\in\Omega^{d}}|\mathbf{r}_{ \text{eq}}(\bar{\mathbf{S}}(\mathbf{x}_{i}))|^{2}\] \[\quad+\frac{1}{|\Omega^{d}|}\sum_{\mathbf{x}_{i}\in\Omega^{d}}| \mathbf{r}_{\text{cr}}(\bar{\mathbf{u}}(\mathbf{x}_{i}),\bar{\mathbf{S}}( \mathbf{x}_{i}),\bar{p}(\mathbf{x}_{i}),\bar{\rho}(\mathbf{x}_{i}))|^{2}\] \[\quad+\frac{1}{|\Omega^{d}|}\sum_{\mathbf{x}_{i}\in\Omega^{d}}| \mathbf{r}_{\text{inc}}(\bar{\mathbf{u}}(\mathbf{x}_{i}),\bar{\rho}(\mathbf{x}_ {i}))|^{2},\] (22b) \[\mathcal{L}_{\text{eik}}(\mathbf{\theta}_{\phi}) =\frac{1}{|\Omega^{d}_{\text{eik}}|}\sum_{\mathbf{x}_{i}\in \Omega^{d}_{\text{eik}}}\left(|\nabla\bar{\phi}(\mathbf{x}_{i})|-1\right)^{2}, \tag{22c}\]
where \(\bar{\mathbf{u}}=(\bar{u}_{1},\bar{u}_{2})\) and \(\bar{\mathbf{S}}\) has components \(\bar{S}_{i,j}\), \(i,j=1,2\). In (22b), the terms \(\mathbf{r}_{\text{eq}}\), \(\mathbf{r}_{\text{cr}}\), and \(\mathbf{r}_{\text{inc}}\) refer to the residuals of the equilibrium equation (13), the constitutive relation (14), and the incompressibility constraint (15). The eikonal loss term is problem-independent and therefore identical to (6).
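The constitutive and incompressibility residuals entering (22b) can be sketched as follows. All names are our own illustrative assumptions: the displacement gradients `du1`, `du2` are assumed to come from a `GradientTape` as in the linear case, and the equilibrium residual is formed analogously from the derivatives of the \(S_{ij}\) networks.

```python
# Illustrative residuals for the hyperelastic case: S11, S22, S12, S21, p,
# rho are the network outputs and density at the collocation points; mu is
# the shear modulus.
def hyperelastic_residuals(du1, du2, S11, S22, S12, S21, p, rho, mu):
    # Deformation gradient F = I + grad u
    F11, F12 = 1.0 + du1[:, :1], du1[:, 1:]
    F21, F22 = du2[:, :1], 1.0 + du2[:, 1:]
    detF = F11 * F22 - F12 * F21
    # Components of F^{-T} for a 2x2 matrix
    iT11, iT12 = F22 / detF, -F21 / detF
    iT21, iT22 = -F12 / detF, F11 / detF
    # Constitutive residual, eq. (14): S - rho * (-p F^{-T} + mu F)
    r_cr11 = S11 - rho * (-p * iT11 + mu * F11)
    r_cr12 = S12 - rho * (-p * iT12 + mu * F12)
    r_cr21 = S21 - rho * (-p * iT21 + mu * F21)
    r_cr22 = S22 - rho * (-p * iT22 + mu * F22)
    # Incompressibility residual, eq. (15)
    r_inc = rho * (detF - 1.0)
    return r_cr11, r_cr12, r_cr21, r_cr22, r_inc
```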
As in the linear elasticity case, we design the architecture of the neural networks in such a way that they inherently satisfy the boundary conditions. For the elastic matrix problem,
\[\bar{u}_{1}(\mathbf{x};\mathbf{\theta}_{1}) =\bar{u}^{\prime}_{1}(\mathbf{x};\mathbf{\theta}_{1}), \tag{23a}\] \[\bar{u}_{2}(\mathbf{x};\mathbf{\theta}_{2}) =\bar{u}^{\prime}_{2}(\mathbf{x};\mathbf{\theta}_{2}), \tag{23b}\]
\[\bar{S}_{11}(\mathbf{x};\mathbf{\theta}_{3}) =(x-0.5)(x+0.5)\,\bar{S}^{\prime}_{11}(\mathbf{x};\mathbf{\theta}_{3})+P _{o}, \tag{23c}\] \[\bar{S}_{22}(\mathbf{x};\mathbf{\theta}_{4}) =(y-0.5)(y+0.5)\,\bar{S}^{\prime}_{22}(\mathbf{x};\mathbf{\theta}_{4}),\] (23d) \[\bar{S}_{12}(\mathbf{x};\mathbf{\theta}_{5}) =(y-0.5)(y+0.5)\,\bar{S}^{\prime}_{12}(\mathbf{x};\mathbf{\theta}_{5}),\] (23e) \[\bar{S}_{21}(\mathbf{x};\mathbf{\theta}_{6}) =(x-0.5)(x+0.5)\,\bar{S}^{\prime}_{21}(\mathbf{x};\mathbf{\theta}_{6}),\] (23f) \[\bar{p}(\mathbf{x};\mathbf{\theta}_{7}) =\bar{p}^{\prime}(\mathbf{x};\mathbf{\theta}_{7}),\] (23g) \[\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi}) =(x-0.5)(x+0.5)\cdot\] \[\quad(y-0.5)(y+0.5)\,\bar{\phi}^{\prime}(\mathbf{x};\mathbf{\theta}_{ \phi})+w, \tag{23h}\]
where the quantities with a prime denote the raw output of the neural network. In this way, the neural network approximations defined in (21) obey by construction the boundary conditions of the problem. As before, since we know that the elastic material is present all along the outer surface \(\partial\Omega\), we define \(\bar{\phi}\) so that \(\phi=w\) on \(\partial\Omega\), which ensures that \(\rho=\mathrm{sigmoid}(\phi/\delta)\simeq 1\) on \(\partial\Omega\).
### Architecture and training details
Here, we provide implementation details regarding the architecture of the deep neural networks, the training procedure and corresponding parameter values.
#### Neural network architecture
State variable fields of the form \(\psi(\mathbf{x})\) are approximated using deep fully-connected neural networks that map the location \(\mathbf{x}\) to the corresponding value of \(\psi\) at that location. This map can be expressed as \(\psi(\mathbf{x})=\bar{\psi}(\mathbf{x};\mathbf{\theta})\), and is defined by the sequence of operations
\[\mathbf{z}^{0} =\mathbf{x}, \tag{24a}\] \[\mathbf{z}^{k} =\sigma(\mathbf{W}^{k}\mathbf{z}^{k-1}+\mathbf{b}^{k}),\quad 1 \leq k\leq\ell-1,\] (24b) \[\psi =\mathbf{z}^{\ell} =\mathbf{W}^{\ell}\mathbf{z}^{\ell-1}+\mathbf{b}^{\ell}. \tag{24c}\]
The input \(\mathbf{x}\) is propagated through \(\ell\) layers, all of which (except the last) take the form of a linear operation composed with a nonlinear transformation. Each layer outputs a vector \(\mathbf{z}^{k}\in\mathbb{R}^{q_{k}}\), where \(q_{k}\) is the number of 'neurons', and is defined by a weight matrix \(\mathbf{W}^{k}\in\mathbb{R}^{q_{k}\times q_{k-1}}\), a bias vector \(\mathbf{b}^{k}\in\mathbb{R}^{q_{k}}\), and a nonlinear activation function \(\sigma(\cdot)\). Finally, the output of the last layer is assigned to \(\psi\). The weight matrices and bias vectors, which parametrize the map from \(\mathbf{x}\) to \(\psi\), form a set of trainable parameters \(\mathbf{\theta}=\{\mathbf{W}^{k},\mathbf{b}^{k}\}_{k=1}^{\ell}\).
The choice of the nonlinear activation function \(\sigma(\cdot)\) and the initialization procedure for the trainable parameters \(\mathbf{\theta}\) are both important factors in determining the performance of neural networks. While the tanh function has been a popular candidate in the context of PINNs [35], recent works [52, 55] have shown that using sinusoidal activation functions can lead to improved training performance by promoting the emergence of small-scale features. In this work, we select the sinusoidal representation network (SIREN) architecture from Ref. [52], which combines the use of the sine as an activation function with a specific way to initialize the trainable parameters \(\mathbf{\theta}\) that ensures that the distribution of the input to each sine activation function remains unchanged over successive layers. Specifically, each component of \(\mathbf{W}^{k}\) is uniformly distributed between \(-\sqrt{6/q_{k}}\) and \(\sqrt{6/q_{k}}\), where \(q_{k}\) is the number of neurons in layer \(k\), and \(\mathbf{b}^{k}=\mathbf{0}\), for \(k=1,\ldots,\ell\). Further, the first layer of the SIREN architecture is \(\mathbf{z}^{1}=\sigma(\omega_{0}\mathbf{W}^{1}\mathbf{z}^{0}+\mathbf{b}^{1})\) instead of (24b), with the extra scalar \(\omega_{0}\) promoting higher-frequency content in the output.
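As an illustration of this architecture, here is a compact SIREN sketch written under our own assumptions (it is not the authors' code). The default layer sizes and \(\omega_{0}=10\) follow the parameter values reported below, and the uniform initialization bound uses the fan-in, as in Ref. [52].

```python
import numpy as np
import tensorflow as tf

class Siren(tf.keras.Model):
    """Minimal SIREN: sine activations with the uniform initialization above."""
    def __init__(self, layers=(2, 50, 50, 50, 50, 1), omega0=10.0):
        super().__init__()
        self.omega0 = omega0
        self.Ws, self.bs = [], []
        for k in range(1, len(layers)):
            q_in, q_out = layers[k - 1], layers[k]
            bound = np.sqrt(6.0 / q_in)   # U(-sqrt(6/fan_in), sqrt(6/fan_in))
            self.Ws.append(tf.Variable(
                tf.random.uniform((q_in, q_out), -bound, bound)))
            self.bs.append(tf.Variable(tf.zeros((q_out,))))

    def call(self, x):
        # First layer carries the extra scalar omega0
        z = tf.sin(self.omega0 * (x @ self.Ws[0]) + self.bs[0])
        for W, b in zip(self.Ws[1:-1], self.bs[1:-1]):
            z = tf.sin(z @ W + b)
        return z @ self.Ws[-1] + self.bs[-1]   # linear output layer
```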
#### Training procedure
We construct the total loss function (3) and train the neural networks in TensorFlow 2. The training is performed using ADAM, a first-order gradient-descent-based algorithm with adaptive step size [31]. In each case, we repeat the training over four random initializations of the neural network parameters and report the best results. Three tricks resulted in noticeably improved training performance and consistency:
* First, we found that pretraining the level-set neural network \(\phi(\mathbf{x})=\bar{\phi}(\mathbf{x};\mathbf{\theta}_{\phi})\) in a standard supervised setting leads to much more consistent results over different initializations of the neural networks. During this pretraining step, carried out before the main optimization step in which all neural networks are trained to minimize the loss (3), we minimize the mean-square error \[\mathcal{L}_{\mathrm{sup}}(\mathbf{\theta}_{\phi})=\frac{1}{|\Omega^{d}|}\sum_{ \mathbf{x}_{i}\in\Omega^{d}}|\bar{\phi}(\mathbf{x}_{i};\mathbf{\theta}_{\phi})- \phi_{i}|^{2},\] (25) where \(\Omega^{d}\) is the same set of collocation points as in (5), the supervised labels \(\phi_{i}=|\mathbf{x}_{i}|-0.25\) for the elastic matrix, and \(\phi_{i}=y_{i}+0.25\) for the elastic layer. The material density \(\bar{\rho}(\mathbf{x};\mathbf{\theta}_{\phi})=\mathrm{sigmoid}(\bar{\phi}( \mathbf{x};\mathbf{\theta}_{\phi})/\delta)\) obtained at the end of this pretraining step is one outside a circle of radius \(0.25\) centered at the origin for the elastic matrix, and it is one above the horizontal line \(y=-0.25\) for the elastic layer. This choice for the supervised labels is justified by the fact that \(\rho\) is known to be one along the outer boundary of the domain \(\Omega\) for the elastic matrix, and it is known to be one (zero) along the top (bottom) boundary of \(\Omega\) for the elastic layer.
* Second, during the main optimization in which all neural networks are trained to minimize the loss (3), we evaluate the loss component \(\mathcal{L}_{\mathrm{gov}}\) in (5) using a different subset, or mini-batch, of residual points from \(\Omega^{d}\) at every iteration. Such a mini-batching approach has been reported to improve the convergence of the PINN training process [54, 15], corroborating our own observations. In our case, we choose to divide the set \(\Omega^{d}\) into
10 different mini-batches of size \(|\Omega^{d}|/10\), which are then employed sequentially to evaluate \(\mathcal{L}_{\text{gov}}\) during each subsequent gradient update \[\mathbf{\theta}_{\mathbf{\psi}}^{k+1} =\mathbf{\theta}_{\mathbf{\psi}}^{k}-\alpha_{\mathbf{\psi}}(k)\nabla_{\mathbf{\theta }_{\mathbf{\psi}}}\mathcal{L}(\mathbf{\theta}_{\mathbf{\psi}}^{k},\mathbf{\theta}_{\phi}^{k}),\] (26a) \[\mathbf{\theta}_{\phi}^{k+1} =\mathbf{\theta}_{\phi}^{k}-\alpha_{\phi}(k)\nabla_{\mathbf{\theta}_{\phi }}\mathcal{L}(\mathbf{\theta}_{\mathbf{\psi}}^{k},\mathbf{\theta}_{\phi}^{k}).\] (26b) An epoch of training, which is defined as one complete pass through the whole set \(\Omega^{d}\), therefore consists of 10 gradient updates.
* Third, the initial nominal step size \(\alpha_{\mathbf{\psi}}\) governing the learning rate of the physical quantities neural networks is set to be 10 times larger than its counterpart \(\alpha_{\phi}\) governing the learning rate of the level-set neural network. This results in a separation of time scales between the rate of change of the physical quantities neural networks and that of the level-set neural network, which is motivated by the idea that physical quantities should be given time to adapt to a given geometry before the geometry itself changes. A training-loop sketch combining these three tricks follows this list.
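The following is a minimal sketch of the resulting main optimization loop, combining Latin Hypercube sampling of the collocation points, mini-batching, and the two-timescale updates (26). All names (`loss_fn`, `psi_vars`, `phi_vars`) are illustrative assumptions; the pretraining step and the learning-rate schedule described below are omitted for brevity.

```python
import numpy as np
import tensorflow as tf
from scipy.stats import qmc

# Collocation points in the [-0.5, 0.5]^2 matrix domain via Latin Hypercube
# sampling, split into 10 mini-batches (one epoch = 10 gradient updates).
sampler = qmc.LatinHypercube(d=2, seed=0)
x_col = sampler.random(n=10_000) - 0.5
batches = np.array_split(x_col.astype("float32"), 10)

# Two ADAM optimizers whose step sizes differ by a factor of 10, as in (26):
# the level-set network evolves on a slower time scale.
opt_psi = tf.keras.optimizers.Adam(1e-3)   # physical-quantity networks
opt_phi = tf.keras.optimizers.Adam(1e-4)   # level-set network

for epoch in range(150_000):               # step-size drops omitted here
    for batch in batches:
        with tf.GradientTape(persistent=True) as tape:
            loss = loss_fn(tf.constant(batch))   # total loss (3), assumed
        opt_psi.apply_gradients(zip(tape.gradient(loss, psi_vars), psi_vars))
        opt_phi.apply_gradients(zip(tape.gradient(loss, phi_vars), phi_vars))
        del tape
```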
#### Parameter values
The parameter values described below apply to all results presented in this paper.
* **Neural network architecture.** For all cases except the M, I, T inclusions (case 19, Tab. 1), we opted for neural networks with 4 hidden layers of 50 neurons each, which we found to be a good compromise between expressivity and training time. For the M, I, T inclusions, we used 6 hidden layers with 100 neurons each. Further, we choose \(\omega_{0}=10\) as the scalar appearing in the first layer of the SIREN architecture.
* **Collocation and measurements points.** In the square and rectangle elastic matrix problems, we consider that the boundary displacement is measured along each of the four external boundaries at 100 equally-spaced points, which amounts to \(|\partial\Omega^{m}|=400\). In the elastic layer problem, we consider that the boundary displacement is measured along the top boundary at 100 equally-spaced points, which amounts to \(|\partial\Omega^{m}|=100\). For both geometries except the M, I, T inclusions, the set of collocation points \(\Omega^{d}\) consists of 10000 points distributed in \(\Omega\) with a Latin Hypercube Sampling (LHS) strategy, yielding 10 mini-batches containing 1000 points each. For the M, I, T inclusions, \(\Omega^{d}\) consists of 50000 points, yielding 50 mini-batches containing 1000 points each.
* **Training parameters.** The pretraining of the level-set neural network is carried out using the ADAM optimizer with nominal step size \(10^{-3}\) over 800 training epochs, employing the whole set \(\Omega^{d}\) to compute the gradient of \(\mathcal{L}_{\text{sup}}\) at each update step. The main optimization, during which all neural networks are trained to minimize the total loss (3), is carried out using the ADAM optimizer. For the matrix cases except the M, I, T inclusions, we use a total of 150k training epochs starting from a nominal step size \(10^{-4}\) for the level-set neural network and \(10^{-3}\) for the other neural networks. This step size is reduced to \(10^{-4}\) for all neural networks at 60k epochs, and again to \(10^{-5}\) at 120k epochs. The schedule is the same for the elastic layer cases, with the difference that we use a total of 200k training epochs. For the matrix case with the M, I, T inclusions, we use a total of 50k epochs (note that each epoch contains 5 times as many mini-batches as in the other cases) starting from a nominal step size \(10^{-4}\) for the level-set neural network and \(10^{-3}\) for the other neural networks. This step size is reduced to \(10^{-4}\) for all neural networks at 16k epochs, and again to \(10^{-5}\) at 40k epochs. Finally, the scalar weights in the loss (3) are assigned the values \(\lambda_{\text{meas}}=10\), \(\lambda_{\text{gov}}=1\), and \(\lambda_{\text{reg}}=1\) for all cases. We also multiply the second term of \(\mathcal{L}_{\text{gov}}\) in (18b) and (22b) with a scalar weight \(\lambda_{\text{cr}}=10\).
### FEM simulations
The FEM simulations that provide the boundary displacement data and the ground truth are performed in the software Abaqus, using its Standard (implicit) solver. The list of all cases considered is provided in Tab. 1 for the elastic matrix setup (Fig. 1a) and in Tab. 2 for the periodic elastic layer setup (Fig. 1b). Every case is meshed using a linear density of 200 elements per unit length along each boundary, corresponding to between 25k and 80k total elements depending on domain size as well as number and shapes of voids or inclusions. We employ bilinear quadrilateral CPE4 plane-strain elements for the cases involving a linear elastic material, and their hybrid constant-pressure counterpart CPE4H for the cases involving a hyperelastic material. We apply a load \(P_{o}/E=0.01\) for the cases involving a linear elastic material, and a load \(P_{o}/E=0.173\) for the cases involving a hyperelastic material.
## References
* [1] D. Adalsteinsson and J. A. Sethian. A fast level set method for propagating interfaces. _Journal of computational physics_, 118(2):269-277, 1995.
* [2] A. Adler and D. Holder. _Electrical impedance tomography: methods, history and applications_. CRC Press, 2021.
* [3] G. Allaire and R. Kohn. Topology optimization and optimal shape design using homogenization. In _Topology design of structures_, pages 207-218. Springer, 1993.
* [4] G. Allaire, F. Jouve, and A.-M. Toader. Structural optimization using sensitivity analysis and a level-set method. _Journal of computational physics_, 194(1):363-393, 2004.
* [5] H. B. Ameur, M. Burger, and B. Hackl. Level set methods for geometric inverse problems in linear elasticity. _Inverse Problems_, 20(3):673, 2004.
* [6] H. T. Banks, F. Kojima, and W. P. Winfree. Boundary estimation problems arising in thermal tomography. _Inverse problems_, 6(6):897, 1990.
* [7] M. P. Bendsoe. Optimal shape design as a material distribution problem. _Structural optimization_, 1(4):193-202, 1989.
* [8] M. P. Bendsoe and O. Sigmund. _Topology optimization: theory, methods, and applications_. Springer Science & Business Media, 2003.
* [9] M. Bonnet and A. Constantinescu. Inverse problems in elasticity. _Inverse problems_, 21(2):R1, 2005.
* [10] T. F. Chan and X.-C. Tai. Level set and total variation regularization for elliptic inverse problems with discontinuous coefficients. _Journal of Computational Physics_, 193(1):40-66, 2004.
* [11] Y. Chen and L. Dal Negro. Physics-informed neural networks for imaging and parameter retrieval of photonic nanostructures from near-field data. _APL Photonics_, 7(1):010802, 2022.
* [12] M. Cheney, D. Isaacson, and J. C. Newell. Electrical impedance tomography. _SIAM review_, 41(1):85-101, 1999.
* [13] V. Cherepenin, A. Karpov, A. Korjenevsky, V. Kornienko, A. Mazaletskaya, D. Mazourov, and D. Meister. A 3D electrical impedance tomography (EIT) system for breast cancer detection. _Physiological measurement_, 22(1):9, 2001.
* [14] D. Colton, J. Coyle, and P. Monk. Recent developments in inverse acoustic scattering theory. _SIAM Review_, 42(3):369-414, 2000.
* [15] A. Daw, J. Bu, S. Wang, P. Perdikaris, and A. Karpatne. Rethinking the importance of sampling in physics-informed neural networks. _arXiv preprint arXiv:2207.02338_, 2022.
* [16] M. Dissanayake and N. Phan-Thien. Neural-network-based approximations for solving partial differential equations. _Communications in Numerical Methods in Engineering_, 10(3):195-201, 1994.
* [17] S. Dong and N. Ni. A method for representing periodic functions and enforcing exactly periodic boundary conditions with deep neural networks. _Journal of Computational Physics_, 435:110242, 2021.
* [18] O. Dorn and D. Lesselier. Level set methods for inverse scattering. _Inverse Problems_, 22(4):R67, 2006.
* [19] O. Dorn, E. L. Miller, and C. M. Rappaport. A shape reconstruction method for electromagnetic tomography using adjoint fields and level sets. _Inverse problems_, 16(5):1119, 2000.
* [20] H. A. Eschenauer, V. V. Kobelev, and A. Schumacher. Bubble method for topology and shape optimization of structures. _Structural optimization_, 8(1):42-51, 1994.
* [21] R. J. Gladstone, M. A. Nabian, and H. Meidani. Fopinns: A first-order formulation for physics informed neural networks. _arXiv preprint arXiv:2210.14320_, 2022.
* [22] H. Griffiths. Magnetic induction tomography. _Measurement science and technology_, 12(8):1126, 2001.
* [23] A. Gropp, L. Yariv, N. Haim, M. Atzmon, and Y. Lipman. Implicit geometric regularization for learning shapes. In _International Conference on Machine Learning_, pages 3789-3799. PMLR, 2020.
* [24] X. Guo, W. Zhang, and W. Zhong. Doing topology optimization explicitly and geometrically: a new moving morphable components based framework. _Journal of Applied Mechanics_, 81(8), 2014.
* [25] E. Haghighat, M. Raissi, A. Moure, H. Gomez, and R. Juanes. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. _Computer Methods in Applied Mechanics and Engineering_, 379:113741, 2021.
* [26] A. Harandi, A. Moeineddin, M. Kaliske, S. Reese, and S. Rezaei. Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains. _arXiv preprint arXiv:2302.04954_, 2023.
* [27] J. Haslinger and R. A. Makinen. _Introduction to shape optimization: theory, approximation, and computation_. SIAM, 2003.
* [28] C. J. Hellier. _Handbook of nondestructive evaluation_. McGraw-Hill Education, 2013.
* [29] A. Henkes, H. Wessels, and R. Mahnken. Physics informed neural networks for continuum micromechanics. _Computer Methods in Applied Mechanics and Engineering_, 393:114790, 2022.
* [30] M. Kadic, G. W. Milton, M. van Hecke, and M. Wegener. 3D metamaterials. _Nature Reviews Physics_, 1(3):198-210, 2019.
* [31] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [32] H. T. Kollmann, D. W. Abueidda, S. Koric, E. Guleryuz, and N. A. Sobh. Deep learning for topology optimization of 2d metamaterials. _Materials & Design_, 196:109098, 2020.
* [33] I. E. Lagaris, A. Likas, and D. I. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. _IEEE transactions on neural networks_, 9(5):987-1000, 1998.
* [34] H. S. Lee, C. J. Park, and H. W. Park. Identification of geometric shapes and material properties of inclusions in two-dimensional finite bodies by boundary parameterization. _Computer Methods in Applied Mechanics and Engineering_, 181(1-3):1-20, 2000.
* [35] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis. Deepxde: A deep learning library for solving differential equations. _SIAM Review_, 63(1):208-228, 2021.
* [36] L. Lu, R. Pestourie, W. Yao, Z. Wang, F. Verdugo, and S. G. Johnson. Physics-informed neural networks with hard constraints for inverse design. _SIAM Journal on Scientific Computing_, 43(6):B1105-B1132, 2021.
* [37] L. Ma and M. Soleimani. Magnetic induction tomography methods and applications: A review. _Measurement Science and Technology_, 28(7):072001, 2017.
* [38] W. Ma, Z. Liu, Z. A. Kudyshev, A. Boltasseva, W. Cai, and Y. Liu. Deep learning for the design of photonic structures. _Nature Photonics_, 15(2):77-90, 2021.
* [39] Y. Mei, R. Fulmer, V. Raja, S. Wang, and S. Goenezen. Estimating the non-homogeneous elastic modulus distribution from surface deformations. _International Journal of Solids and Structures_, 83:73-80, 2016.
* [40] Y. Mei, Z. Du, D. Zhao, W. Zhang, C. Liu, and X. Guo. Moving morphable inclusion approach: an explicit framework to solve inverse problem in elasticity. _Journal of Applied Mechanics_, 88(4), 2021.
* [41] S. Molesky, Z. Lin, A. Y. Piggott, W. Jin, J. Vuckovic, and A. W. Rodriguez. Inverse design in nanophotonics. _Nature Photonics_, 12(11):659-670, 2018.
* [42] S. Mowlavi and S. Nabi. Optimal control of PDEs using physics-informed neural networks. _Journal of Computational Physics_, 473:111731, 2023.
* [43] S. Osher and R. Fedkiw. _Level set methods and dynamic implicit surfaces_. Springer-Verlag, 2003.
* [44] S. Osher and J. A. Sethian. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. _Journal of computational physics_, 79(1):12-49, 1988.
* [45] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _Journal of Computational physics_, 378:686-707, 2019.
* [46] M. Raissi, A. Yazdani, and G. E. Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. _Science_, 367(6481):1026-1030, 2020.
* [47] C. Rao, H. Sun, and Y. Liu. Physics-informed deep learning for computational elastodynamics without labeled data. _Journal of Engineering Mechanics_, 147(8):04021043, 2021.
* [48] S. Rezaei, A. Harandi, A. Moeineddin, B.-X. Xu, and S. Reese. A mixed formulation for physics-informed neural networks as a potential solver for engineering problems in heterogeneous domains: comparison with finite element method. _Computer Methods in Applied Mechanics and Engineering_, 401:115616, 2022.
* [49] L. Robledo, M. Carrasco, and D. Mery. A survey of land mine detection technology. _International Journal of Remote Sensing_, 30(9):2399-2410, 2009.
* [50] F. Sahli Costabal, Y. Yang, P. Perdikaris, D. E. Hurtado, and E. Kuhl. Physics-informed neural networks for cardiac activation mapping. _Frontiers in Physics_, 8:42, 2020.
* [51] O. Sigmund and K. Maute. Topology optimization approaches. _Structural and Multidisciplinary Optimization_, 48(6):1031-1055, 2013.
* [52] V. Sitzmann, J. Martel, A. Bergman, D. Lindell, and G. Wetzstein. Implicit neural representations with periodic activation functions. _Advances in Neural Information Processing Systems_, 33:7462-7473, 2020.
* [53] N. Sukumar and A. Srivastava. Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks. _Computer Methods in Applied Mechanics and Engineering_, 389:114333, 2022.
* [54] C. L. Wight and J. Zhao. Solving Allen-Cahn and Cahn-Hilliard equations using the adaptive physics informed neural networks. _Communications in Computational Physics_, 29(3):930-954, 2021.
* [55] J. C. Wong, C. Ooi, A. Gupta, and Y.-S. Ong. Learning in sinusoidal spaces with physics-informed neural networks. _arXiv preprint arXiv:2109.09338_, 2021.
* [56] E. Zhang, M. Dao, G. E. Karniadakis, and S. Suresh. Analyses of internal structures and defects in materials using physics-informed neural networks. _Science advances_, 8(7):eabk0644, 2022.
**Figure 3**: **Identification of inclusions in a linear elastic matrix.** Final Cauchy stress components \(\sigma_{xx}\) (**a**), \(\sigma_{yy}\) (**b**), and \(\sigma_{xy}\) (**c**), displayed in the deformed configuration obtained from the final displacement components \(u_{1}\) and \(u_{2}\), for the cases reported in Fig. 4e. The grey dotted lines show the outline of the matrix surface in the reference configuration.
Extended Data Fig. 5. **Effect of sparse measurements on the identification of voids in a linear elastic matrix with one circle-shaped void.** Final material density obtained in our framework when using fewer measurement locations and restricting the number of measurements to a subset of the outer surfaces.
Extended Data Fig. 6. **Effect of sparse measurements on the identification of voids in a linear elastic matrix with one star-shaped and one rectangle-shaped void.** Final material density obtained in our framework when using fewer measurement locations and restricting the number of measurements to a subset of the outer surfaces.
Extended Data Fig. 7. **Effect of sparse measurements on the identification of voids in a linear elastic matrix with one slit-shaped void.** Final material density obtained in our framework when using fewer measurement locations and restricting the number of measurements to a subset of the outer surfaces.
**Supplementary information for:**
**Topology optimization with physics-informed neural networks:**
**application to noninvasive detection of hidden geometries**
Saviz Mowlavi\({}^{1,\,2,\,*}\) and Ken Kamrin\({}^{1,\,\dagger}\)

\({}^{*}\)Email: [email protected]

\({}^{\dagger}\)Email: [email protected]
\({}^{1}\)_Department of Mechanical Engineering, MIT, Cambridge, MA 02139, USA_
\({}^{2}\)_Mitsubishi Electric Research Laboratories, Cambridge, MA 02139, USA_
## I Uniqueness of solutions
We sketch a proof for the uniqueness of solutions to the geometry detection elasticity problem considered in this paper. For the specific case of a two-dimensional linear elastic material with a single void, it has been proved that there exists at most one cavity which yields the same surface displacements and stresses on a finite portion of the external boundary [1]. Our approach is different and applicable to any three-dimensional problem governed by elliptic PDEs (be they linear or nonlinear, heat conduction, elasticity, etc.) and containing multiple voids or rigid inclusions. Before sketching the proof, we state a few lemmas that will be useful:
* **Lemma 1**. Any \(C^{2}\) displacement field that solves the equilibrium Navier-Cauchy equations of linear elasticity with constant elastic moduli is a real analytic function of space. This holds because the Navier-Cauchy equations are elliptic, so any \(C^{2}\) solution must be real analytic [3].
* **Lemma 2**. Any real analytic function on a connected domain \(\Omega\) that vanishes on a finite and connected subset \(\mathcal{S}\subset\Omega\) necessarily vanishes in all of \(\Omega\). This is a specialization of the identity theorem for analytic functions [2].
* **Lemma 3**. If \(\mathbf{u}\) is a \(C^{2}\) displacement solution to the Navier-Cauchy equations on a connected domain \(\Omega\), and both \(\mathbf{u}\) and traction \(\mathbf{t}\) vanish on a smooth finite portion of the boundary \(S\subset\partial\Omega\), then \(\mathbf{u}\) must necessarily vanish in all of \(\Omega\). This fact can be seen by extending the domain \(\Omega\) along \(S\) by some amount \(\Omega^{ext}\) and defining \(\mathbf{u}=\mathbf{0}\) there. Then \(\mathbf{u}\) satisfies the Navier-Cauchy equations at each point in \(\Omega\cup\Omega^{ext}\), and it can be shown that all second derivatives exist and are continuous on \(S\). Thus, on \(\Omega\cup\Omega^{ext}\), \(\mathbf{u}\) is an analytic function (by Lemma 1) that vanishes in \(\Omega^{ext}\), and therefore \(\mathbf{u}\) vanishes in all of \(\Omega\cup\Omega^{ext}\) (by Lemma 2) and thus in \(\Omega\).
We sketch the proof for the case of a single void or inclusion, but the same reasoning generalizes to any number of voids or inclusions. Consider two bodies \(\mathcal{B}^{(1)}\) and \(\mathcal{B}^{(2)}\) with the same material properties and sharing the same external boundary \(\partial\mathcal{B}^{\mathrm{ext}}\) (Supplementary Fig. 1a,b). Each body contains a single smooth void or rigid inclusion, characterized by the connected domains \(\mathcal{I}^{(1)}\) and \(\mathcal{I}^{(2)}\) with boundaries \(\partial\mathcal{I}^{(1)}\) and \(\partial\mathcal{I}^{(2)}\), respectively. The two bodies are subjected to the same external loading, which consists of an applied displacement \(\mathbf{u}\) on a portion \(\partial\mathcal{B}^{\mathrm{ext}}_{u}\) of \(\partial\mathcal{B}^{\mathrm{ext}}\) and an applied traction \(\mathbf{\hat{t}}\) on a portion \(\partial\mathcal{B}^{\mathrm{ext}}_{t}\) of \(\partial\mathcal{B}^{\mathrm{ext}}\), with \(\partial\mathcal{B}^{\mathrm{ext}}=\partial\mathcal{B}^{\mathrm{ext}}_{u} \cup\partial\mathcal{B}^{\mathrm{ext}}_{t}\). This loading generates a displacement and stress solution within each body, which we denote by \(\mathbf{u}^{(1)}\), \(\boldsymbol{\sigma}^{(1)}\) and \(\mathbf{u}^{(2)}\), \(\boldsymbol{\sigma}^{(2)}\), respectively. These solutions must be nontrivial, meaning that the surface tractions \(\mathbf{t}^{(1)}\) and \(\mathbf{t}^{(2)}\) do not vanish everywhere on \(\partial\mathcal{B}^{\mathrm{ext}}\). Finally, we assume that we measure identical surface displacements, i.e. \(\mathbf{u}^{(1)}=\mathbf{u}^{(2)}=\mathbf{u}^{m}\), on a finite portion \(\partial\mathcal{B}^{m}\) of \(\partial\mathcal{B}^{\mathrm{ext}}_{t}\).
We will prove that there cannot be two distinct shapes \(\mathcal{I}^{(1)}\) and \(\mathcal{I}^{(2)}\) yielding nontrivial solutions that are identical on \(\partial\mathcal{B}^{m}\). Let us first subtract the displacement and stress solutions of the two bodies, which yields a displacement field \(\Delta\mathbf{u}=\mathbf{u}^{(1)}-\mathbf{u}^{(2)}\) and a stress field \(\Delta\boldsymbol{\sigma}=\boldsymbol{\sigma}^{(1)}-\boldsymbol{\sigma}^{(2)}\) defined over the intersection \(\mathcal{B}^{(1)}\cap\mathcal{B}^{(2)}\) between the two bodies (Supplementary Fig. 1c). This 'difference solution' itself satisfies the governing equations of linear elasticity owing to their linearity, and its displacement and traction vanish on \(\partial\mathcal{B}^{m}\). Therefore, we can show using Lemmas 1 and 3 that the difference solution vanishes in its domain of definition, meaning that \(\mathbf{u}^{(1)}=\mathbf{u}^{(2)}\) and \(\boldsymbol{\sigma}^{(1)}=\boldsymbol{\sigma}^{(2)}\) in \(\mathcal{B}^{(1)}\cap\mathcal{B}^{(2)}\). We now focus on the behavior of the solution in body \(\mathcal{B}^{(1)}\) and treat separately the cases of a void or rigid inclusion.
In the case of a void (Supplementary Fig. 1d), the traction \(\mathbf{t}^{(1)}\) on the internal boundary \(\partial\mathcal{I}^{(1)}\) vanishes. Using the fact that \(\boldsymbol{\sigma}^{(1)}=\boldsymbol{\sigma}^{(2)}\) in \(\mathcal{B}^{(1)}\cap\mathcal{B}^{(2)}\), we know that \(\mathbf{t}^{(1)}\) also vanishes on \(\mathcal{B}^{(1)}\cap\partial\mathcal{I}^{(2)}\). As a result, there exists a finite region \(\tilde{\mathcal{B}}^{(1)}=\mathcal{B}^{(1)}\cap\mathcal{I}^{(2)}\) with zero boundary traction. Thus, \(\boldsymbol{\sigma}^{(1)}\) must vanish not only in \(\tilde{\mathcal{B}}^{(1)}\), but also in the
entire body \(\mathcal{B}^{(1)}\) as a consequence of Lemma 2. This violates the fact that the surface traction \(\mathbf{t}^{(1)}\) cannot vanish everywhere on \(\partial\mathcal{B}^{\mathrm{ext}}\).
In the case of a rigid inclusion (Supplementary Fig. 1e), the displacement \(\mathbf{u}^{(1)}\) on the internal boundary \(\partial\mathcal{I}^{(1)}\) corresponds to that of a rigid motion, i.e. \(\mathbf{u}^{(1)}=\mathbf{u}^{(1)}_{r}+\mathbf{\theta}^{(1)}_{r}\times\mathbf{x}\) for some fixed \(\mathbf{u}^{(1)}_{r}\) and \(\mathbf{\theta}^{(1)}_{r}\). Using the fact that \(\mathbf{u}^{(1)}=\mathbf{u}^{(2)}\) in \(\mathcal{B}^{(1)}\cap\mathcal{B}^{(2)}\), we know that \(\mathbf{u}^{(1)}=\mathbf{u}^{(2)}=\mathbf{u}^{(2)}_{r}+\mathbf{\theta}^{(2)}_{r} \times\mathbf{x}\) on \(\mathcal{B}^{(1)}\cap\partial\mathcal{I}^{(2)}\) for some other fixed \(\mathbf{u}^{(2)}_{r}\) and \(\mathbf{\theta}^{(2)}_{r}\). However, given that these two rigid motions must coincide at \(\partial\mathcal{I}^{(1)}\cap\partial\mathcal{I}^{(2)}\), we must have \(\mathbf{u}^{(1)}_{r}=\mathbf{u}^{(2)}_{r}\) and \(\mathbf{\theta}^{(1)}_{r}=\mathbf{\theta}^{(2)}_{r}\). As a result, the entire surface of the finite region \(\tilde{\mathcal{B}}^{(1)}=\mathcal{B}^{(1)}\cap\mathcal{I}^{(2)}\) undergoes a rigid motion. Thus, \(\mathbf{\sigma}^{(1)}\) must vanish not only in \(\tilde{\mathcal{B}}^{(1)}\), but also in the entire body \(\mathcal{B}^{(1)}\) as a consequence of Lemma 2. As in the case of the void, this violates the fact that the surface traction \(\mathbf{t}^{(1)}\) cannot vanish everywhere on \(\partial\mathcal{B}^{\mathrm{ext}}\).
## II Ill-Posed Nonlinear Thermal Problem
To illustrate the flexibility of our TO framework with respect to the physical model and the sparsity of the data, in this appendix we apply our method to an ill-posed nonlinear thermal imaging problem. Consider a nonlinearly conducting matrix (Supplementary Fig. 2a) whose heat flux \(\mathbf{q}\) relates to the temperature \(T\) through a nonlinear Fourier's law \(\mathbf{q}=-k(1+T/T_{0})\nabla T\), where \(k\) is akin to a thermal conductivity and \(T_{0}\) is a reference temperature. The matrix contains a hidden inclusion that is either perfectly insulating (no inside heat flux) or perfectly conducting (no inside temperature gradient). A thermal loading is applied, consisting of a prescribed unit temperature on the left boundary, zero temperature on the right boundary, and insulated top and bottom boundaries (Supplementary Fig. 2a). The goal of the inverse problem is to identify the location and shape of the inclusion under different ill-posed scenarios where the applied boundary condition and the measurements are both assumed unavailable on one of the four sides, while the temperature or flux resulting from the prescribed thermal loading is measured on the remaining three sides (Supplementary Fig. 2b).
The experiment is simulated in the FEM software Abaqus using biquadratic DC2D8 diffusive heat transfer elements, and considering \(k=1\) and \(T_{0}=1\). The inverse problem is solved with our PINN-based TO framework by constructing neural network approximations for the physical quantities \(\mathbf{\psi}=(T,\mathbf{q})\) and the density field \(\rho\). Similarly to the elasticity examples, the neural networks are designed to inherently satisfy the boundary conditions (on the three sides where they are assumed to be known). The governing equations included in the loss function comprise the conservation law \(\nabla\cdot\mathbf{q}=0\) as well as the nonlinear Fourier's law \(F(\mathbf{q},T,\rho)=0\). The latter, expressed over both the matrix and the inclusion, takes the form \(\mathbf{q}+\rho k(1+T/T_{0})\nabla T=0\) in the presence of a perfectly-insulating inclusion, or \(\rho\mathbf{q}/k+(1+T/T_{0})\nabla T=0\) in the presence of a perfectly-conducting inclusion. We use the same training parameters and the same weights in the loss function as in the square elastic matrix examples.
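For the perfectly-insulating case, the governing-equation residuals can be sketched as follows. All names (`T_net`, `q_net`, `phi_net`, `delta`) are our own illustrative assumptions, not the authors' code.

```python
import tensorflow as tf

def thermal_residuals(x, T_net, q_net, phi_net, k=1.0, T0=1.0, delta=0.01):
    """Residuals of div q = 0 and the nonlinear Fourier's law (insulating case)."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x)
        T = T_net(x)                              # temperature, shape (N, 1)
        q = q_net(x)                              # heat flux, shape (N, 2)
        rho = tf.sigmoid(phi_net(x) / delta)      # material density
        q1, q2 = q[:, :1], q[:, 1:]
    dT = tape.gradient(T, x)                      # grad T, shape (N, 2)
    r_cons = tape.gradient(q1, x)[:, :1] + tape.gradient(q2, x)[:, 1:]
    r_fourier = q + rho * k * (1.0 + T / T0) * dT
    del tape
    return r_cons, r_fourier
```

For the perfectly-conducting case, the Fourier residual would instead be `rho * q / k + (1.0 + T / T0) * dT`, following the form given above.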
Despite the complete lack of information along an entire side, with the applied boundary condition and measurements both missing, our TO framework is able to detect the slit-shaped inclusion with reasonable accuracy in both the perfectly-insulating case (Supplementary Fig. 2c) and the perfectly-conducting case (Supplementary Fig. 2d). This example showcases the ability of the framework to generate good results when even the forward problem is ill-posed.
|
2307.13061 | Feature Gradient Flow for Interpreting Deep Neural Networks in Head and
Neck Cancer Prediction | This paper introduces feature gradient flow, a new technique for interpreting
deep learning models in terms of features that are understandable to humans.
The gradient flow of a model locally defines nonlinear coordinates in the input
data space representing the information the model is using to make its
decisions. Our idea is to measure the agreement of interpretable features with
the gradient flow of a model. To then evaluate the importance of a particular
feature to the model, we compare that feature's gradient flow measure versus
that of a baseline noise feature. We then develop a technique for training
neural networks to be more interpretable by adding a regularization term to the
loss function that encourages the model gradients to align with those of chosen
interpretable features. We test our method in a convolutional neural network
prediction of distant metastasis of head and neck cancer from a computed
tomography dataset from the Cancer Imaging Archive. | Yinzhu Jin, Jonathan C. Garneau, P. Thomas Fletcher | 2023-07-24T18:25:59Z | http://arxiv.org/abs/2307.13061v1 | # Feature Gradient Flow for Interpreting Deep Neural Networks in Head and Neck Cancer Prediction
###### Abstract
This paper introduces feature gradient flow, a new technique for interpreting deep learning models in terms of features that are understandable to humans. The gradient flow of a model locally defines nonlinear coordinates in the input data space representing the information the model is using to make its decisions. Our idea is to measure the agreement of interpretable features with the gradient flow of a model. To then evaluate the importance of a particular feature to the model, we compare that feature's gradient flow measure versus that of a baseline noise feature. We then develop a technique for training neural networks to be more interpretable by adding a regularization term to the loss function that encourages the model gradients to align with those of chosen interpretable features. We test our method in a convolutional neural network prediction of distant metastasis of head and neck cancer from a computed tomography dataset from the Cancer Imaging Archive.
Yinzhu Jin\({}^{\star}\), Jonathan C. Garneau\({}^{\dagger}\), P. Thomas Fletcher\({}^{\ddagger,\star}\)

\({}^{\star}\) Department of Computer Science, University of Virginia, Charlottesville, VA, USA
\({}^{\dagger}\)Department of Otolaryngology, University of Virginia, Charlottesville, VA, USA
\({}^{\ddagger}\) Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, USA
Footnote †: Published in: 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)
## 1 Introduction
Deep neural networks (DNNs) have great promise for predicting disease outcomes from medical imaging. For example, researchers have demonstrated that DNNs can predict outcomes in head and neck cancer, such as whether the cancer will metastasize, from computed tomography (CT) images with high accuracy [1, 2]. However, full adoption of such models is held back by the fact that DNNs are typically "black boxes", i.e., the image features that they learn and use to make predictions are not known or not interpretable by humans. This in turn leads to a lack of trust in DNNs by potential clinical users.
Several methods have been proposed for interpretable or explainable deep learning [3, 4] and applied in medical image analysis. The most commonly used methods fall into the class of saliency maps, e.g., Grad-CAM [5]. In these methods, information derived from the classifier gradient is displayed on an input image, highlighting where in the image the classifier is using information to make its decision. While these methods explain _where_ a classifier is focusing, they do not explain _what_ information it is using to make a decision. One other way is to build a decision tree that approximates the deep learning model [6], which makes use of the naturally interpretable architecture of decision trees. Similar to saliency mapping methods, it highlights object parts that are important for decision making without showing the exact features the model relies on. Another class of methods, such as LIME [7], fit a linear approximation to a DNN in a local region of the input space. The idea is that the approximating linear classifiers can be interpreted more easily because they are given by a single parameter vector that provides a weighting of the importance for each input feature. Testing with concept activation vectors (TCAV) [8] extended the linear approximation approach by fitting linear classifiers of binary concepts to the intermediate layer activations of a DNN. This direction was subsequently extended to linear regression of continuous concepts and applied in medical imaging tasks by Graziani et al. [9]. A limitation of these concept attribution approaches is that they rely on a _linear_ approximation to the activations of a DNN, whereas the flexibility of deep models comes from the fact that they are highly nonlinear.
In this paper, we seek to understand the nonlinear features that are being utilized in the decision process
of a DNN. We do this by first developing the _fibered manifold geometry_ that a classifier induces on the input space, decomposing it into nonlinear dimensions that are either relevant or irrelevant to the classifier's decision (Section 2). Second, we develop a method for measuring the alignment of a given set of interpretable features along the gradient flow of this classifier geometry (Section 3). In this way we query to what extent the classifier is using information that is human interpretable. Furthermore, we develop a regularization term for training a classifier to prefer using interpretable feature dimensions. Finally, we demonstrate the effectiveness of our approach in a prediction of distant metastasis from CT images of head and neck tumors (Section 4). We show that our training method leads to a classifier that uses a higher percentage of interpretable image features.
## 2 The Fibered Manifold Geometry of a Classifier
In this section we describe how a classifier locally decomposes the input data space into nonlinear dimensions along which the classifier decision changes, called **sections**, and complementary dimensions along which the classifier decision remains constant, i.e., the level sets of the classifier function, called **fibers**. This structure is known as a **fibered manifold** (see Fig. 1). The key insight to our approach is that to interpret a classifier one must understand the section dimensions because these are the dimensions along which the classifier is differentiating different classes of data. A classifier is "ignoring" the features along its fiber dimensions.
Let's consider a classifier taking inputs \(x\in\mathbb{R}^{d}\) and predicting an output class \(y\in\{1,\ldots,K\}\). Furthermore, assume this classifier is a \(C^{1}\) mapping, \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\). For example, the outputs could be conditional probabilities, \(p(y\mid x)\), or normalized logits, \(z=\ln(p(y\mid x))\). We will denote the Jacobian matrix of \(f\) at a point \(x\in\mathbb{R}^{d}\) as \(Df(x)\). The **rank** of \(f\) at \(x\) is defined as \(\mathrm{rank}(Df(x))\).
Assuming that \(K<d\), the maximal rank of \(f\) at any point is \(K-1\), due to the constraint that \(\sum_{y}p(y\mid x)=1\). A **regular point** of \(f\) is a point \(x\in\mathbb{R}^{d}\) such that \(Df(x)\) has maximal rank, that is, \(\mathrm{rank}(Df(x))=K-1\). The set of regular points of \(f\) is open in \(\mathbb{R}^{d}\). This implies that there is a neighborhood about any regular point of \(f\) that is a **fibered manifold**, i.e., there is a (possibly nonlinear) coordinate system that decomposes into \(d-K+1\) fiber coordinates, where \(f\) remains constant, and \(K-1\) section coordinates, where \(f\) changes its output.
## 3 Interpretable Feature Alignment
In addition to our classifier function, \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\), assume that we can also compute a set of \(m\)_interpretable_ features through a mapping \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\). The general idea of our method is to measure how well the classifier \(f\) is using these interpretable features by looking at the alignment of their section subspaces, e.g., if the dimensions in which they vary are similar.
Consider first the simple case where \(K=2\) (a binary classifier) and \(m=1\) (a single interpretable feature). Then the sections defined by \(f\) and \(g\) are one-dimensional and tangential to their respective gradients. Thus we can measure the agreement of the features by the alignment between the gradients of \(f\) and \(g\), e.g., the angle between them. We can estimate the expectation of this alignment over the data distribution by summing this value at each point in the test data. At a single data point \(x\in\mathbb{R}^{d}\), this **pointwise alignment** is given by
\[S(x)=\left(\frac{\langle\nabla f(x),\nabla g(x)\rangle}{\|\nabla f(x)\|\cdot\| \nabla g(x)\|}\right)^{2}. \tag{1}\]
We square the dot product between the normalized gradients because we are only concerned about how well these dimensions align. We do not care about the magnitude of the units or the polarity of the gradients, that is, we consider the alignment of the bidirectional lines defined by the two gradients.
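The pointwise score is straightforward to compute with automatic differentiation. The following is a minimal PyTorch sketch (our illustration, not the authors' released code), assuming `f` and `g` are differentiable callables that return a scalar for the input `x`:

```python
import torch

def pointwise_alignment(f, g, x):
    """Squared cosine of the angle between the gradients of the
    classifier output f and an interpretable feature g at x (Eq. 1)."""
    xf = x.clone().detach().requires_grad_(True)
    grad_f = torch.autograd.grad(f(xf), xf)[0].flatten()
    xg = x.clone().detach().requires_grad_(True)
    grad_g = torch.autograd.grad(g(xg), xg)[0].flatten()
    cos = torch.dot(grad_f, grad_g) / (grad_f.norm() * grad_g.norm() + 1e-12)
    return cos ** 2
```

Averaging this quantity over the test set estimates the expected alignment.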
### Decomposition into Feature Hyperplane
Now consider we want to evaluate the classifier's dependency on multiple features. We may derive this as a search for a classifier, \(\phi:\mathbb{R}^{m}\rightarrow\mathbb{R}^{K}\) from the interpretable features, \(g(x)\), that approximates the prediction output by \(f\), i.e.,
\[f(x)\approx\phi\circ g(x). \tag{2}\]
If the task is difficult enough, then the classification given by \(f\) will not be exactly computable from the interpretable features \(g(x)\) alone. Therefore, the relationship in (2) is not an exact equality. This is what we expect in the case of a DNN, that is to say, we don't expect
Figure 1: Fibered manifold geometry of a classifier.
that the decision of a DNN can be explained perfectly by interpretable features alone. Rather than optimize for \(\phi\) directly, we consider minimizing the difference in the gradients of both sides of (2). In other words, we want to minimize \(\|\nabla f-Dg^{T}\nabla\phi\|\). This gives us
\[\nabla\phi=(DgDg^{T})^{-1}Dg\nabla f,\]
which can be viewed as a projection of \(\nabla f\) to the hyperplane given by the rows of \(Dg\), i.e., the gradients of the interpretable features. Then we can decompose \(\nabla f\) into a parallel part and a vertical part:
\[\nabla f_{\parallel}=Dg^{T}(DgDg^{T})^{-1}Dg\nabla f, \tag{3}\] \[\nabla f_{\perp}=\nabla f-\nabla f_{\parallel}. \tag{4}\]
These can be understood as the interpretable component (3) and un-interpretable component (4) of the classifier gradient using the selected features.
Note that \(\|\nabla f\|^{2}=\|\nabla f_{\parallel}\|^{2}+\|\nabla f_{\perp}\|^{2}\), so we can naturally define the alignment measure as the fraction of the squared gradient norm contained in tangent space to the section of the interpretable features, i.e.,
\[S(x)=\frac{\|\nabla f_{\parallel}(x)\|^{2}}{\|\nabla f(x)\|^{2}}.\]
Note that in the case with only one feature, this definition of \(S\) corresponds to the previous one in (1).
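For multiple features, the projection in (3) can be computed directly from the feature Jacobian. A small PyTorch sketch of the score (our illustration; `grad_f` and `Dg` are assumed to be evaluated at the same point, e.g., via `torch.autograd`):

```python
import torch

def multi_feature_alignment(grad_f, Dg):
    """Fraction of the squared classifier-gradient norm lying in the span
    of the interpretable feature gradients (Eqs. 3-4).
    grad_f: (d,) classifier gradient; Dg: (m, d) feature Jacobian."""
    gram = Dg @ Dg.T                                # (m, m)
    coef = torch.linalg.solve(gram, Dg @ grad_f)    # (Dg Dg^T)^{-1} Dg grad_f
    grad_f_par = Dg.T @ coef                        # projection onto row space of Dg
    return (grad_f_par.norm() / grad_f.norm()) ** 2
```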
### Gradient Flow to the Decision Boundary
Although observing the alignment at a point can give some indication whether the classifier is making use of the given features, it is only a local property and does not take into account more global geometry of the classifier. To address this, we next develop a measurement of how the gradients of interpretable features align with those of the classifier along a path from the data point to the classifier's decision boundary.
In practice, we can start from the data point and follow the gradient of the classifier and stop when it hits the decision boundary. Then we can integrate the normalized dot product of the gradients of the classifier and the gradients of the feature mapping along this path. To do this, first define the _gradient flow_ from a data point \(x\) as a curve in the data space \(\gamma:[0,T]\rightarrow\mathbb{R}^{d}\) that begins at \(\gamma(0)=x\) and follows the gradient of the classifier, i.e.,
\[\frac{d\gamma(t)}{dt}=\nabla f(\gamma(t)). \tag{5}\]
With a similar decomposition as in the pointwise case, we measure the total fraction of alignment between the classifier and the interpretable features along the gradient flow as
\[F(x)=\frac{\int_{0}^{T}\left\|\nabla f_{\parallel}\left(\gamma(t)\right)\right\| ^{2}dt}{\int_{0}^{T}\left\|\nabla f\left(\gamma(t)\right)\right\|^{2}dt}.\]
We note that both the pointwise score \(S\) and the gradient flow score \(F\) range from 0 to 1.
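In practice the flow can be integrated with a simple Euler scheme. Below is a hedged sketch of one way to estimate \(F\) for a binary classifier; `f` is assumed to return a scalar logit whose sign gives the decision (boundary at zero), and `jacobian_g` returns the \(m\times d\) feature Jacobian (e.g., via `torch.autograd.functional.jacobian`). The step size and stopping rule are illustrative choices, not values from the paper:

```python
import torch

def gradient_flow_score(f, jacobian_g, x, step=1e-2, max_steps=200):
    """Estimate F by Euler integration of the gradient flow (Eq. 5),
    accumulating the interpretable fraction of the squared gradient norm."""
    x = x.clone().detach()
    num = den = 0.0
    for _ in range(max_steps):
        xt = x.clone().requires_grad_(True)
        logit = f(xt)
        grad = torch.autograd.grad(logit, xt)[0].flatten()
        Dg = jacobian_g(x)                               # (m, d) at the current point
        coef = torch.linalg.solve(Dg @ Dg.T, Dg @ grad)
        num += float((Dg.T @ coef).norm() ** 2) * step   # interpretable part
        den += float(grad.norm() ** 2) * step
        x = (x + step * grad.view_as(x)).detach()        # follow the flow
        if f(x).item() * logit.item() < 0:               # crossed the boundary
            break
    return num / max(den, 1e-12)
```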
### Enhancing Interpretability During Training
With the above measure of alignment, we can further use it to encourage a model to be more interpretable. This is done by adding an alignment "reward" term to the loss during training. Given interpretable features, \(g_{i}(x),i=1,\dots,m\), for each data point \(x\), the training loss is:
\[\begin{split}\mathcal{L}(x,f,g)=&\sum_{x}L(f(x),y )\\ &-\sum_{x}\sum_{i=1}^{m}\lambda_{i}\left(\frac{\langle\nabla f(x),\nabla g_{i}(x)\rangle}{\|\nabla f(x)\|\cdot\|\nabla g_{i}(x)\|}\right)^{2} \end{split} \tag{6}\]
where \(L\) is the original training loss function, and \(\lambda_{i}\) is a positive scalar parameter that controls the weight of the alignment with the \(i\)th interpretable feature. It rewards the gradients of the model for aligning with the gradients of the given features. We can tune \(\lambda_{i}\) to be as large as possible without hurting the performance of the model.
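A sketch of this objective for a binary task (our illustration): the gradient-of-gradient terms require `create_graph=True` so the alignment reward itself can be back-propagated, and the feature functions `feats` must be differentiable in the input. Averaging per batch instead of summing is an assumption here:

```python
import torch
import torch.nn.functional as F

def loss_with_alignment(model, feats, lambdas, x, y):
    """Training objective of Eq. (6): task loss minus a reward for
    aligning the classifier gradient with each feature gradient."""
    x = x.clone().requires_grad_(True)
    logit = model(x).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logit, y.float())
    # create_graph=True so the alignment reward can itself be back-propagated
    grad_f = torch.autograd.grad(logit.sum(), x, create_graph=True)[0].flatten(1)
    for g_i, lam in zip(feats, lambdas):
        grad_g = torch.autograd.grad(g_i(x).sum(), x, create_graph=True)[0].flatten(1)
        cos = F.cosine_similarity(grad_f, grad_g, dim=1)
        loss = loss - lam * (cos ** 2).mean()
    return loss
```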
## 4 Results
### Dataset and Architecture
We used a Head and Neck PET-CT dataset from [10] available on the Cancer Image Archive (TCIA) to evaluate our methods. The dataset includes data from 298
Figure 2: Example CT image with segmented tumor (left) and the masked tumor image used as input data (right).
subjects. The task is to predict distant metastasis of head and neck tumors using CT images and segmentations of gross tumor volumes. It is a highly imbalanced classification task, with only 40 positive cases (13%) in the entire dataset.
The classifier we used is a neural network with 3 convolutional layers (kernel sizes = \(5\times 5\), \(3\times 3\), and \(3\times 3\)) each followed by average pooling and exponential linear units (ELUs). These are followed by 3 fully-connected layers (output dimensions = 256, 128, and 1). We chose average pooling over max pooling and ELU over ReLU because they are differentiable, while providing equivalent classification performance. The input data is \(512\times 512\) gray scale images.
Following the work from Diamant et al. [1], the inputs are 2D CT slices, each chosen where the cross-sectional area of the tumor is largest. The area outside the tumor was masked as zero. We augmented the data by randomly rotating in the range \(\pm 20\) degrees and translating in the range \(\pm 0.015\) times the image width/height. We used weighted random sampling to balance the training batches evenly between negative and positive samples. The data was split into a training set of 209 samples and a test set of 89 samples. The model was trained for 100 epochs using the Adam optimizer with initial learning rate \(3.0\times 10^{-5}\) and batch size 32. For the model with alignment term, we chose 3 features each with \(\lambda_{i}=3.0\times 10^{-5}\). Our plain classifier achieved 0.681 balanced accuracy, and the classifier with alignment term training achieved 0.688 balanced accuracy.
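For concreteness, the described architecture might be sketched as below. Channel counts, pooling factors, and the flattened dimension are not stated in the text, so they are assumptions here (hence the lazy linear layer):

```python
import torch.nn as nn

class TumorClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 conv layers (5x5, 3x3, 3x3), each followed by average pooling
        # and ELU, as described; channel counts and pool sizes are assumed.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.AvgPool2d(4), nn.ELU(),
            nn.Conv2d(16, 32, 3), nn.AvgPool2d(4), nn.ELU(),
            nn.Conv2d(32, 64, 3), nn.AvgPool2d(4), nn.ELU(),
        )
        # 3 fully-connected layers with output dims 256, 128, and 1.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```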
### Interpretable Features
We chose 3 features that can be calculated from the data, namely, overall brightness (\(g_{1}\)), tumor extent (\(g_{2}\)), and log aspect ratio of tumor (\(g_{3}\)). Let \(I(u)\) denote an image, with pixel grid coordinates \(u=(u_{1},u_{2})\). Then the three features are calculated as follows:
\[g_{1} =\sum_{u}I(u)\] \[\mu =\frac{1}{g_{1}}\sum_{u}uI(u),\ C=\frac{1}{g_{1}}\sum_{u}I(u)(u- \mu)(u-\mu)^{T}\] \[g_{2} =\mathrm{tr}(C)\] \[g_{3} =\log(\sigma_{1})-\log(\sigma_{2})\]
where \(\sigma_{1}^{2}\geq\sigma_{2}^{2}\) are eigenvalues of the covariance, \(C\).
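These features can be computed directly from the pixel grid; a small PyTorch sketch (ours) for a single-channel image follows. Note that \(g_{3}\) uses the eigenvalues of \(C\), which are the squared \(\sigma\)'s, hence the factor of one half:

```python
import torch

def interpretable_features(img):
    """g1 (overall brightness), g2 (tumor extent), g3 (log aspect ratio)
    for a single-channel (H, W) image tensor, following Sec. 4.2."""
    H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=img.dtype),
                            torch.arange(W, dtype=img.dtype), indexing="ij")
    g1 = img.sum()
    mu_y, mu_x = (ys * img).sum() / g1, (xs * img).sum() / g1
    dy, dx = ys - mu_y, xs - mu_x
    cxx = (dx * dx * img).sum() / g1
    cyy = (dy * dy * img).sum() / g1
    cxy = (dx * dy * img).sum() / g1
    C = torch.stack([torch.stack([cxx, cxy]), torch.stack([cxy, cyy])])
    g2 = torch.trace(C)
    lam = torch.linalg.eigvalsh(C)             # eigenvalues = sigma^2, ascending
    g3 = 0.5 * (lam[1].log() - lam[0].log())   # log(sigma1) - log(sigma2)
    return g1, g2, g3
```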
### Interpretability Measures
Here we apply our pointwise interpretability measure, \(S\), to our test set. The results are shown in Table 1 for both the plain model and the model trained with our enhanced interpretability method. While the interpretability scores for the individual features seem quite small, we show that they are actually large relative to the interpretability scores for a randomly generated feature (which provides a baseline interpretability score for a feature that should _not_ be useful to the classifier). We generated one random feature from a standard normal distribution to compare to the single feature case, and three independent random features to compare to the multiple feature case. These two are referred to as "single random" and "three random" in the table. To quantify whether our interpretable features were statistically significantly better than random to the classifier, we performed a Kolmogorov-Smirnov (KS) test between the distributions of \(S\) values. All three features \(g_{1},g_{2},g_{3}\), and their combination, were statistically significantly better than random at \(p<10^{-3}\). The last column in Table 1 is the KS test \(p\)-value to see if the enhanced training improved the interpretability over the plain model. As we can see, tumor extent and aspect ratio did not become more useful to the classifier, but brightness did. Finally, using all three interpretable features jointly accounts for 16% of the classifier's squared gradient magnitude. From the results for the model with alignment term, we can see that the interpretable fraction of classifier gradients increased while not negatively affecting classifier performance.
We also show in Table 2 the results of the gradient flow alignment measure, \(F\), applied to both the plain model and the model trained with our interpretability enhancement. The overall behavior is similar to that of the pointwise interpretability scores. Again, the KS test indicates that the gradient flow interpretability measures for all three features \(g_{1},g_{2},g_{3}\), and their combination, were
Table 1: Results of Pointwise Alignment (\(S\))

| feature | plain (mean) | enhanced (mean) | \(p\)-value |
| --- | --- | --- | --- |
| overall brightness | 8.2e-4 | 3.4e-3 | \(<10^{-6}\) |
| tumor extent | 3.8e-2 | 2.4e-2 | — |
| log aspect ratio | 1.3e-2 | 9.5e-3 | — |
| combined features | 0.12 | 0.16 | \(<10^{-6}\) |
| single random | 3.6e-6 | 4.1e-6 | 0.95 |
| three random | 1.2e-5 | 1.3e-5 | 0.30 |

Table 2: Results of Gradient Flow Alignment (\(F\))

| feature | plain (mean) | enhanced (mean) | \(p\)-value |
| --- | --- | --- | --- |
| overall brightness | 8.4e-4 | 3.4e-3 | \(<10^{-6}\) |
| tumor extent | 3.4e-2 | 2.1e-2 | — |
| log aspect ratio | 4.1e-3 | 2.0e-3 | — |
| combined features | 0.14 | 0.19 | \(<10^{-6}\) |
| single random | 4.4e-6 | 3.8e-6 | 0.40 |
| three random | 1.1e-5 | 1.2e-5 | 0.95 |
statistically significantly better than random at \(p<10^{-3}\). Interestingly, the gradient flow measures are similar to the pointwise measures for individual features, but somewhat higher for the three features combined.
## 5 Discussion
We introduced a new method to evaluate the importance of given interpretable features by quantifying their gradient alignment along the gradient flow of the model. Although the resulting alignment scores may seem small, we note that they are significantly larger than random and account for a high proportion of the variance relative to the high dimensionality of the input images. A limitation of our method is that it requires the user to input interpretable features of interest. Thus, our method may be less effective in cases where a model mostly makes use of unexpected features.
## 6 Compliance with Ethical Standards
This research study was conducted retrospectively using human subject data made available in open access by Vallieres et al. [10] in the Cancer Image Archive (TCIA). Ethical approval was not required as confirmed by the license attached with the open access data.
|
2304.13531 | Integrated Architecture for Neural Networks and Security Primitives
using RRAM Crossbar | This paper proposes an architecture that integrates neural networks (NNs) and
hardware security modules using a single resistive random access memory (RRAM)
crossbar. The proposed architecture enables using a single crossbar to
implement NN, true random number generator (TRNG), and physical unclonable
function (PUF) applications while exploiting the multi-state storage
characteristic of the RRAM crossbar for the vector-matrix multiplication
operation required for the implementation of NN. The TRNG is implemented by
utilizing the crossbar's variation in device switching thresholds to generate
random bits. The PUF is implemented using the same crossbar initialized as an
entropy source for the TRNG. Additionally, the weights locking concept is
introduced to enhance the security of NNs by preventing unauthorized access to
the NN weights. The proposed architecture provides flexibility to configure the
RRAM device in multiple modes to suit different applications. It shows promise
in achieving a more efficient and compact design for the hardware
implementation of NNs and security primitives. | Simranjeet Singh, Furqan Zahoor, Gokulnath Rajendran, Vikas Rana, Sachin Patkar, Anupam Chattopadhyay, Farhad Merchant | 2023-04-26T13:07:15Z | http://arxiv.org/abs/2304.13531v2 | # Integrated Architecture for Neural Networks and Security Primitives using RRAM Crossbar
###### Abstract
This paper proposes an architecture that integrates neural networks (NNs) and hardware security modules using a single resistive random access memory (RRAM) crossbar. The proposed architecture enables using a single crossbar to implement NN, true random number generator (TRNG), and physical unclonable function (PUF) applications while exploiting the multi-state storage characteristic of the RRAM crossbar for the vector-matrix multiplication operation required for the implementation of NN. The TRNG is implemented by utilizing the crossbar's variation in device switching thresholds to generate random bits. The PUF is implemented using the same crossbar initialized as an entropy source for the TRNG. Additionally, the weights locking concept is introduced to enhance the security of NNs by preventing unauthorized access to the NN weights. The proposed architecture provides flexibility to configure the RRAM device in multiple modes to suit different applications. It shows promise in achieving a more efficient and compact design for the hardware implementation of NNs and security primitives.
TRNG, PUF, NN, RRAM, Memristors, Hardware Security
## I Introduction
The Internet of things (IoT) era has led to a significant increase in data exchange between processors and memory, which results in high power consumption and ultimately degrades the system's performance [1]. The von Neumann architecture, which separates processing and memory units, often suffers from data access latency due to the large volume of data movement [2]. To overcome these limitations, various novel computing paradigms are being investigated. In-memory computing, which performs calculations entirely within the computer memory, has gained significant traction as a potential solution [3, 4, 5].
Additionally, it is necessary to authenticate IoT devices on the network to ensure data security and protection by maintaining their integrity, confidentiality, and availability, thus preventing any malicious attacks or unauthorized access. Physical unclonable function (PUF) circuits are becoming popular in IoT due to their ability to generate unique and unpredictable responses to challenges. This makes them highly useful for hardware security, such as device authentication and key generation, and for implementing security protocols ranging from device attestation to data encryption. Several circuits have been proposed for realizing in-memory computing architectures using resistive random access memory (RRAM) devices to implement various techniques, including PUF [6, 7, 8], neuromorphic neurons [9, 10] and digital gates [11, 12, 13]. RRAM is being considered as a potential candidate to address various drawbacks of conventional complementary metal oxide semiconductor (CMOS)-based architectures [14]. The relatively small size of RRAM devices also makes it highly feasible to integrate computing circuits and memory, thus realizing efficient architectures for learning algorithms, hardware security modules, and neural network (NN) applications [15].
RRAM has been extensively studied for main memory and in-memory computing architectures, but their stochastic nature and intrinsic variation in switching parameters have hindered their widespread adoption as next-generation memories [16]. However, this uncertain behavior is desirable for designing hardware security primitives [17]. Another commonly investigated hardware encryption module is true random number generators (TRNGs), which generate a stream of random numbers by exploiting randomness in physical processes [18]. While CMOS-based TRNG designs have been proposed, they only provide limited security-specific properties, paving the way for TRNGs based on emerging technologies. Among these designs, RRAM-based TRNGs demonstrate desirable properties, primarily due to their low power operation, high
Fig. 1: The NN locking using the embedded hardware security primitives by exploiting the variations of RRAM cells.
density, and stochastic filament formation [19].
The protection of trained neural network (NN) models has become crucial to prevent unauthorized access, which can lead to the cloning of the model by adversaries. This study proposes a novel architecture to implement NN and hardware security modules on a single RRAM crossbar, allowing only authorized users with the correct device to use the locked NN model [20]. The proposed architecture focuses on protecting the intellectual property (IP) rights of deep NN models. Fig. 1 illustrates the framework for locking the NN using the embedded security module. The major contributions of this work are as follows:
* Integrating the NN, PUF, and TRNG on the RRAM crossbar array.
* Discussion on how the proposed crossbar architecture can be used for realizing NN weights locking.
* Lastly, the methodology for implementing the NN, PUF, and TRNG on the same crossbar is validated.
The remainder of the paper is organized as follows: Section II details the architecture for implementing NN, TRNG, and PUF designs based on RRAM. Section III shows the NN weights-locking algorithm using the proposed architecture. Section IV discusses the results of integrating NN and hardware security primitives realized using the crossbar RRAM array. Section V concludes the paper.
## II Proposed Architecture
This section explains the architecture to integrate NN and hardware security modules using the RRAM crossbar. The proposed architecture is shown in Fig. 2. The RRAM cells connected in a passive crossbar configuration are at the core of the proposed architecture. The same crossbar has been used to implement NN, PUF, and TRNG.
### _RRAM crossbar_
The RRAM device employed in this investigation is characterized by its ability to store multiple bits. Specifically, the device can be programmed into a low resistive state (LRS) and a high resistive state (HRS). Beyond these two states, it can also store multiple resistive states between LRS and HRS, which is referred to as multi-state storage. By applying voltage pulses of varying amplitude and duration, the device can be configured as a two-state or multi-state device. For the purpose of the vector-matrix multiplication (VMM) implementation in this study, the device is configured as a multi-state device, but it can also function in the two-state mode. The proposed architecture offers the flexibility to configure the device in multiple modes to suit applications such as VMM, TRNG, and PUF implementations. Next, we will discuss using RRAM as a core device to implement these applications on a single crossbar.
### _VMM implementation_
The RRAM crossbar performs the VMM operation, a critical function in implementing NNs. The weight matrix required for VMM is stored on a crossbar, with weights corresponding to each device's resistance or conductance state. These devices are set up to store multi-bit weights, and their rows are linked to a digital-to-analog converter (DAC). The input vector is then fed into the DACs, which generally convert the input into analog output voltage levels, as depicted in Fig. 2. Following Ohm's law, the applied input voltages produce a current through each device depending on the device's resistance or conductance value (weight). The current flowing through each device connected in a column is combined based on Kirchhoff's current law. Ultimately, the current values in the columns are utilized to carry out a multiplication and accumulation operation of the weights and input vector, which is nothing but a VMM operation in memory.
\[I_{c_{j}}=\sum_{i=0}^{3}G_{ij}\,V_{r_{i}},\qquad j=0,\ldots,3,\]

where \(V_{r_{i}}\) is the voltage applied by the DAC to row \(r_{i}\) and \(G_{ij}\in\{0,1,2,3\}\) (in units of a base conductance) is the 2-bit weight stored at the crossing of row \(r_{i}\) and column \(c_{j}\). The currents accumulated on the columns therefore represent multiply-accumulate results wider than two bits, which need to be converted back to the digital domain. The analog-to-digital (ADC) resolution is determined by \(\lceil\log_{2}(w\times m)\rceil\), where \(w\) represents the weight bits (2 in this example) and \(m\) is the number of devices in a single column (4 in this example).
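As a sanity check, the idealized column read-out can be simulated in a few lines. This NumPy sketch (ours, ignoring device non-idealities such as sneak paths and conductance drift) follows the 4x4 crossbar with 2-bit weights used in the example above:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(0, 4, size=(4, 4)).astype(float)  # 2-bit weights (rows x cols)
V = np.array([0.1, 0.2, 0.3, 0.4])                 # DAC voltages on rows r0..r3
I_col = V @ G                                      # Kirchhoff sums per column
adc_bits = int(np.ceil(np.log2(2 * 4)))            # ceil(log2(w*m)) = 3
print(I_col, adc_bits)
```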
### _Trng_
To implement the TRNG on the RRAM crossbar, we have used the technique presented in [21], where the device-to-device (D2D) and cycle-to-cycle (C2C) variations on the crossbar have been used to generate the random switching in the crossbar. The switching threshold of each device on the RRAM crossbar is used to generate random bits. By applying a 50% switching probability pulse to the crossbar, random devices switch their state to LRS, and the others remain in the HRS. Due to the crossbar variation, each device's switching threshold is different, resulting in random switching of the devices. In order to implement the TRNG in the proposed architecture, another terminal of the device must connect to GND, which the 1x4 DeMUX controls.
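A toy software model of this mechanism is sketched below (our illustration; the threshold spread is an assumed Gaussian stand-in for the physical D2D/C2C variation, which is the actual entropy source in hardware):

```python
import numpy as np

rng = np.random.default_rng()                 # software stand-in for physical noise
v_set = rng.normal(1.0, 0.05, size=(16, 16))  # assumed spread of switching thresholds
pulse = np.median(v_set)                      # pulse with ~50% switching probability
bits = (v_set < pulse).astype(np.uint8)       # 1 = device switched to LRS
print(bits.mean())                            # close to 0.5
```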
### _Puf_
A single RRAM crossbar is utilized in this proposed architecture to implement a TRNG and a PUF. The crossbar is first initialized to an entropy source based on the TRNGs algorithm, which provides a source of randomness to generate random bits. Challenges are then applied to the crossbar's rows, and the responses of the PUF are collected. The challenges are mapped to a read voltage pulse, which varies based on the input challenge. However, during PUF implementation, the crossbar is configured to two-state rather than multi-state switching.
To collect the responses of the PUF, Kirchhoff's current law is applied to the crossbar, which collects the current flowing through each device at the column lines. The input challenge and device variations influence this current. The sneak path affects the current in the crossbar, which contributes to the current at each column in a completely random manner. The analog current values are converted to Boolean response bits at the output using a current sense amplifier (CSA). As the response bit can be either 0 or 1, the ADC in the path is bypassed using the 1x4 DeMUX. The digital interface can further use the collected responses to lock the weights matrix on the crossbar. The randomness and unpredictability of the PUF response make it suitable for use in secure authentication and key generation applications, and the incorporation of TRNG adds a layer of security.
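The read-out just described can be mimicked with a simple model. The following sketch (ours) is a toy abstraction only: `states` is the binary entropy matrix left by the TRNG initialization, the challenge gates which rows receive the read voltage, and the CSA is idealized as a threshold. The resistance values follow the ranges reported in Section IV:

```python
import numpy as np

def puf_response(states, challenge, v_read=0.2, r_lrs=1.6e3, r_hrs=65e3):
    """Toy read-out: challenge gates the rows driven with v_read; column
    currents are summed and thresholded by an idealized CSA."""
    G = np.where(states == 1, 1.0 / r_lrs, 1.0 / r_hrs)  # binary conductances
    i_col = (challenge * v_read) @ G                     # Kirchhoff column sums
    return (i_col > np.median(i_col)).astype(np.uint8)
```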
## III Weights locking
Weights locking is a method to safeguard the intellectual property of NN models, particularly in scenarios where the model has been trained on sensitive data or where the model's performance is essential to business success. The proposed architecture integrates a PUF as a hardware security module in the RRAM crossbar. During training, the weights are encrypted using a unique key generated by the PUF. The architecture is configured to implement the PUF and generate the key to encrypt the weights. The encrypted weights and the challenge are then provided to the user.
During the inference process, the architecture is reconfigured to implement the PUF, and the challenge provided with the encrypted weights is applied to generate the key. Next, the encrypted weights are loaded into the NN, and the key generated by the PUF is utilized to decrypt the weights. The decrypted weights are then used to make predictions.
### _Locking_
The suggested design enables the NN to be secured within an RRAM crossbar. With the hardware security module situated on the same crossbar, a key can be generated via configuration in both the TRNG and PUF setups. The PUF-generated key can then be used to encrypt the weights to be stored in the crossbar. Algorithm 1 outlines the use of the proposed architecture for NN weights locking.
```
1:Choose TRNG implementation
2:Apply 50% probability switching pulse
3:Choose PUF implementation
4:Apply challenge and collect the responses (key)
5:Store the CRPs on the server side and use a key to encrypt the weights.
6:Send the encrypted weights to sharing platform with the challenge
```
**Algorithm 1** An algorithm for weights locking
### _Unlocking_
Once the weights have been encrypted, they can be transmitted to the user via any sharing platform. An identical challenge will be employed on the device's end to generate the required key. Algorithm 2 specifies the decryption procedure. The device's inherent randomness is expressed via CRPs, which are unique to each device; this helps prevent attackers from using the same weights on a different hardware device.
```
1:Receive the encrypted weights
2:Choose TRNG implementation
3:Apply 50% probability switching pulse
4:Choose PUF implementation
5:Apply challenge and generate the key
6:Choose the key to decrypt the weights
7:Store the decrypted weights on the same crossbar (overwrite the TRNG entropy)
```
**Algorithm 2** An algorithm for unlocking the weights
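The paper does not specify the cipher used with the PUF-derived key; as a minimal stand-in, a keystream XOR over the quantized weights illustrates the lock/unlock symmetry of Algorithms 1 and 2:

```python
import numpy as np

def lock_weights(weights_q, key_bits):
    """XOR the quantized integer weights with a PUF-derived keystream
    (an assumed stand-in for the paper's unspecified encryption step)."""
    ks = np.resize(key_bits, weights_q.shape).astype(weights_q.dtype)
    return weights_q ^ ks

# XOR is an involution, so Algorithm 2's unlock step is the same call:
# weights_q == lock_weights(lock_weights(weights_q, key), key)
```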
In summary, this paper describes an architecture that integrates NNs and hardware security modules using passive RRAM cells connected in a crossbar structure. The RRAM device can store multiple resistive states between low and high resistive states, allowing it to function as a two-state or multi-state device. The proposed architecture offers flexibility to configure the device in multiple modes to suit different applications, such as VMM, TRNG, and PUF.
## IV Experimental Results
For this study, an RRAM cell comprises a Pt/Ti/TiO\({}_{x}\)/HfO\({}_{2}\)/Pt material stack, demonstrating improved stability in terms of both electroforming voltage and thermal behavior [22]. The device can be configured in different modes, such as binary and multi-state switching, and is programmed into a target resistance state by applying a voltage pulse with a specific duration and amplitude. The device exhibits resistance between \(60-100K\Omega\) in HRS and \(1.5-1.6K\Omega\) in LRS. However, in a multi-state configuration, there can be multiple states between HRS and LRS. The devices in the crossbar are utilized without any selector (passive) in series. Passive crossbars typically face the issue of sneak-path current, which can be utilized to design TRNGs and PUFs.
### _Switching and Variations_
To switch the device between binary states, a 150ns pulse of 2.0V with 10ns rise and fall times is applied to switch the device to LRS (programming to 1), while a negative pulse of 2.0V is applied to switch the device to HRS (programming to 0). To achieve multi-state behavior, a gradual RESET method has been employed: the device is initialized to LRS, and then incremental pulses are applied to switch it through multiple states. The I-V curves of multi-state switching are shown in Fig. 3.
The switching of the devices can be affected by variations in D2D and C2C parameters. These variations are influenced by manufacturing variations in device radius, device length, and oxygen ion concentration in the dielectric. As a result, the HRS resistance varies from 31\(K\Omega\) to 155\(K\Omega\) with an average of 65.56\(K\Omega\), while the LRS resistance varies from 1.55\(K\Omega\) to 1.67\(K\Omega\) with an average of 1.64\(K\Omega\). Although the HRS distribution is wide, all HRS values remain distinguishable from the LRS. By carefully selecting the pulse to RESET the devices, they can be switched randomly from LRS to HRS or vice versa, which can be exploited to design TRNGs.
### _PUF properties_
We conducted extensive experiments to evaluate the reliability and performance of PUF on the proposed architecture. Our results demonstrate a reliability of 100%, indicating that the CRPs generated by the proposed PUF are consistent and repeatable across multiple trials. Additionally, the uniqueness of the CRPs was found to be 47.78%, meaning that the probability of generating the same CRP for two different devices is low. The uniformity of the CRPs was measured to be 49.79%, indicating that the distribution of the CRPs is approximately uniform. Finally, the bit-aliasing was found to be 48.57%, indicating that the probability of generating the same CRP for two different challenges is also low.
Importantly, our results show that the proposed PUF achieves these performance metrics without needing post-processing techniques, making it a practical and efficient hardware security primitive. Overall, our findings demonstrate the potential for using RRAM-based PUF designs in hardware security applications with high reliability and security characteristics.
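For reference, the reported statistics can be computed from a collected response matrix as follows (a sketch using the standard definitions of uniformity and uniqueness; reliability and bit-aliasing are computed analogously over repeated reads and over bit positions, respectively):

```python
import numpy as np

def puf_metrics(responses):
    """Uniformity (fraction of 1s) and uniqueness (mean normalized
    inter-device Hamming distance) for an (n_devices, n_bits) matrix."""
    uniformity = float(responses.mean())
    n = len(responses)
    hd = [np.mean(responses[i] != responses[j])
          for i in range(n) for j in range(i + 1, n)]
    return uniformity, float(np.mean(hd))
```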
### _Nn_
The crossbar can be utilized to map NN weights, similar to VMM applications. Fig. 3(a) shows the mapping of 2-bit weights to the crossbar. To enable the multi-state behavior of a device, a gradual RESET method is employed, as illustrated in Fig. 3(b). We conducted a proof-of-concept by implementing VMM on a 16x16 crossbar, which is a critical operation for any NN implementation. The device's variations result in error accumulation at the input of the ADC. Nonetheless, this issue can be resolved through onboard fault-aware training of the required application.
## V Conclusions
In conclusion, this paper described a proposed architecture integrating NNs and hardware security modules using the RRAM crossbar as the core device. The proposed architecture enables a single crossbar to implement NN, TRNG, and PUF applications. The RRAM crossbar's multi-state storage characteristic is exploited to perform the vector-matrix multiplication operations required for NN implementation. The TRNG uses the crossbar's variation in device switching thresholds to generate random bits. The PUF is implemented using the same crossbar initialized as an entropy source for the TRNG. The proposed architecture provides flexibility to configure the RRAM device in multiple modes to suit different applications. This paper also presents the algorithms for NN weight locking. Overall, the proposed architecture shows promise in achieving a more efficient and compact design for the hardware implementation of NNs and security primitives.
## Acknowledgments
This work was supported in part by the Federal Ministry of Education and Research (BMBF, Germany) in the project NEUROTEC II under Project 16ME0398K, Project 16ME0399 and through Dr. Suhas Pai Donation Fund at IIT Bombay.
Fig. 3: The mapping of 2-bit weights to the crossbar is demonstrated in (a), while the multi-state behavior of devices for analog weight storage is illustrated in (b). |
2303.17096 | ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing | Recent studies have shown that higher accuracy on ImageNet usually leads to
better robustness against different corruptions. Therefore, in this paper,
instead of following the traditional research paradigm that investigates new
out-of-distribution corruptions or perturbations deep models may encounter, we
conduct model debugging in in-distribution data to explore which object
attributes a model may be sensitive to. To achieve this goal, we create a
toolkit for object editing with controls of backgrounds, sizes, positions, and
directions, and create a rigorous benchmark named ImageNet-E(diting) for
evaluating the image classifier robustness in terms of object attributes. With
our ImageNet-E, we evaluate the performance of current deep learning models,
including both convolutional neural networks and vision transformers. We find
that most models are quite sensitive to attribute changes. A small change in
the background can lead to an average of 9.23\% drop on top-1 accuracy. We also
evaluate some robust models including both adversarially trained models and
other robust trained models and find that some models show worse robustness
against attribute changes than vanilla models. Based on these findings, we
discover ways to enhance attribute robustness with preprocessing, architecture
designs, and training strategies. We hope this work can provide some insights
to the community and open up a new avenue for research in robust computer
vision. The code and dataset are available at
https://github.com/alibaba/easyrobust. | Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue | 2023-03-30T02:02:32Z | http://arxiv.org/abs/2303.17096v1 | # ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
###### Abstract
Recent studies have shown that higher accuracy on ImageNet usually leads to better robustness against different corruptions. Therefore, in this paper, instead of following the traditional research paradigm that investigates new out-of-distribution corruptions or perturbations deep models may encounter, we conduct model debugging in in-distribution data to explore which object attributes a model may be sensitive to. To achieve this goal, we create a toolkit for object editing with controls of backgrounds, sizes, positions, and directions, and create a rigorous benchmark named ImageNet-E(diting) for evaluating the image classifier robustness in terms of object attributes. With our ImageNet-E, we evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers. We find that most models are quite sensitive to attribute changes. A small change in the background can lead to an average of 9.23% drop in top-1 accuracy. We also evaluate some robust models including both adversarially trained models and other robust trained models and find that some models show worse robustness against attribute changes than vanilla models. Based on these findings, we discover ways to enhance attribute robustness with preprocessing, architecture designs, and training strategies. We hope this work can provide some insights to the community and open up a new avenue for research in robust computer vision. The code and dataset are available at [https://github.com/alibaba/easyrobust](https://github.com/alibaba/easyrobust).
+
Footnote †: This research is supported in part by the National Key Research and Development Program of China under Grant No.2020AAA0140000.
## 1 Introduction
Deep learning has triggered the rise of artificial intelligence and has become the workhorse of machine intelligence. Deep models have been widely applied in various fields such as autonomous driving [27], medical science [32], and finance [37]. With the spread of these techniques, the robustness and safety issues begin to be essential, especially after the finding that deep models can be easily fooled by negligible noises [15]. As a result, more researchers contribute to building datasets for benchmark
ing model robustness to spot vulnerabilities in advance.
Most of the existing work builds datasets for evaluating the model robustness and generalization ability on out-of-distribution data [21, 6, 29] using adversarial examples and common corruptions. For example, the ImageNet-C(orruption) dataset conducts visual corruptions such as Gaussian noise to input images to simulate the possible processors in real scenarios [21]. ImageNet-R(enditions) contains various renditions (_e.g._, paintings, embroidery) of ImageNet object classes [20]. As both studies have found that higher accuracy on ImageNet usually leads to better robustness against different domains [21, 50]. However, most previous studies try to achieve this in a top-down way, such as architecture design, exploring a better training strategy, _etc_. We advocate that it is also essential to manage it in a bottom-up way, that is, conducting model debugging with the in-distribution dataset to provide clues for model repairing and accuracy improvement. For example, it is interesting to explore whether a bird with a water background can be recognized correctly even if most birds appear with trees or grasses in the training data. Though this topic has been investigated in studies such as causal and effect analysis [8], the experiments and analysis are undertaken on domain generalization datasets. How a deep model generalizes to different backgrounds is still unknown due to the vacancy of a qualified benchmark. Therefore, in this paper, we provide a detached object editing tool to conduct the model debugging from the perspective of object attribute and construct a dataset named ImageNet-E(diting).
The ImageNet-E dataset is a compact but challenging test set for object recognition that contains controllable object attributes including backgrounds, sizes, positions and directions, as shown in Fig. 1. In contrast, the images in ObjectNet [5] are collected by workers who pose objects according to specific instructions, and thus differ from the target data distribution, which makes it hard to tell whether the degradation comes from changes in attributes or in distribution. Our ImageNet-E is automatically generated with our object attribute editing tool based on the original ImageNet. Specifically, to change the object background, we provide an object background editing method that can make the background simpler or more complex based on diffusion models [24, 46]. In this way, one can easily evaluate how much the background complexity can influence the model performance. To control the object size, position, and direction to simulate pictures taken from different distances and angles, an object editing method is also provided. With the editing toolkit, we apply it to the large-scale ImageNet dataset [41] to construct our ImageNet-E(diting) dataset. It can serve as a general dataset for benchmarking robustness evaluation on different object attributes.
With the ImageNet-E dataset, we evaluate the performance of current deep learning models, including both convolutional neural networks (CNNs), vision transformers as well as the large-scale pretrained CLIP [39]. We find that deep models are quite sensitive to object attributes. For example, when editing the background towards high complexity (see Fig. 1, the \(3\)rd row in the background part), the drop in top-1 accuracy reaches 9.23% on average. We also find that though some robust models share similar top-1 accuracy on ImageNet, the robustness against different attributes may differ a lot. Meanwhile, some models, being robust under certain settings, even show worse results than the vanilla ones on our dataset. This suggests that improving robustness is still a challenging problem and the object attributes should be taken into account. Afterward, we discover ways to enhance robustness against object attribute changes. The main contributions are summarized as follows:
* We provide an object editing toolkit that can change the object attributes for manipulated image generation.
* We provide a new dataset called ImageNet-E that can be used for benchmarking robustness to different object attributes. It opens up new avenues for research in robust computer vision against object attributes.
* We conduct extensive experiments on ImageNet-E and find that models that have good robustness on adversarial examples and common corruptions may show poor performance on our dataset.
## 2 Related Work
The literature related to attribute robustness benchmarks can be broadly grouped into the following themes: robustness benchmarks and attribute editing datasets. Existing robustness benchmarks such as ImageNet-C(orruption) [21], ImageNet-R(endition) [20], ImageNet-Stylized [13] and ImageNet-3DCC [29] mainly focus on the exploration of the corrupted or out-of-distribution data that models may encounter in reality. For instance, the ImageNet-R dataset contains various renditions (_e.g._, paintings, embroidery) of ImageNet object classes. ImageNet-C analyzes image models in terms of various simulated image corruptions (_e.g._, noise, blur, weather, JPEG compression, _etc._). Attribute editing dataset creation is a new topic and few studies have explored it before. Among them, ObjectNet [5] and ImageNet-9 (_a.k.a._ background challenge) [50] are representative. Specifically, ObjectNet collects a large real-world test set for object recognition with controls where object backgrounds, rotations, and imaging viewpoints are random. The images in ObjectNet are collected by their workers who image objects in their homes. It consists of \(313\) classes which are mainly household objects. ImageNet-9 mainly creates a suite of datasets that help disentangle the impact of foreground and background signals on classification. To achieve this goal, it uses coarse-grained classes
with corresponding rectangular bounding boxes to remove the foreground and then paste the cut area with other backgrounds. Notably, there is still no dataset that can smoothly edit object attributes.
## 3 Preliminaries
Since the editing tool is developed based on diffusion models, let us first briefly review the theory of denoising diffusion probabilistic models (DDPM) [24, 46] and analyze how it can be used to generate images.
According to the theory of Markov chains, one can always reach a desired stationary distribution from a given distribution by running the Markov chain long enough [14]. To get a generative model that can generate images from random Gaussian noise, one only needs to construct a Markov chain whose stationary distribution is the Gaussian distribution. This is the core idea of DDPM. In DDPM, given a data distribution \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\), a forward noising process produces a series of latents \(\mathbf{x}_{1},...,\mathbf{x}_{T}\) of the same dimensionality as the data \(\mathbf{x}_{0}\) by adding Gaussian noise with variance \(\beta_{t}\in(0,1)\) at time \(t\):
\[q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\sqrt{1-\beta_{t}}\mathbf{x}_{ t-1},\beta_{t}\mathbf{I}),s.t.\ 0<\beta_{t}<1, \tag{1}\]
where \(\beta_{t}\) is the diffusion rate. Then the distribution \(q(\mathbf{x}_{t}|\mathbf{x}_{0})\) at any time \(t\) is:
\[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}),\ \mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon \tag{2}\]
where \(\bar{\alpha}_{t}=\prod_{s=1}^{t}(1-\beta_{s})\), \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\). It can be proved that \(\lim_{t\to\infty}q(\mathbf{x}_{t})=\mathcal{N}(0,\mathbf{I})\). In other words, we can map the original data distribution into a Gaussian distribution with enough iterations. Such a stochastic forward process is named the diffusion process since what the process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) does is add noise to \(\mathbf{x}_{t-1}\).
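Equation (2) is what makes training practical: \(\mathbf{x}_{t}\) can be sampled in closed form without iterating the chain. A minimal PyTorch sketch (ours), assuming `alpha_bar` holds the precomputed cumulative products:

```python
import torch

def q_sample(x0, t, alpha_bar):
    """Draw x_t ~ q(x_t | x_0) in closed form (Eq. 2); alpha_bar holds the
    precomputed cumulative products of (1 - beta_s)."""
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    return ab.sqrt() * x0 + (1 - ab).sqrt() * eps, eps
```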
To draw a fresh sample from the distribution \(q(\mathbf{x}_{0})\), the Markov process is reversed. That is, beginning from a Gaussian noise sample \(\mathbf{x}_{T}\sim\mathcal{N}(0,\mathbf{I})\), a reverse sequence is constructed by sampling the posteriors \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). To approximate the unknown function \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), in DDPMs, a deep model \(p_{\theta}\) is trained to predict the mean and the covariance of \(\mathbf{x}_{t-1}\) given \(\mathbf{x}_{t}\) instead. Then the \(\mathbf{x}_{t-1}\) can be sampled from the normal distribution defined as:
\[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mu_{\theta}(\mathbf{ x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t)). \tag{3}\]
Instead of inferring \(\mu_{\theta}(\mathbf{x}_{t},t)\) directly, [24] propose to predict the noise \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) which was added to \(\mathbf{x}_{0}\) to get \(\mathbf{x}_{t}\) with Eq. (2). Then \(\mu_{\theta}(\mathbf{x}_{t},t)\) is:
\[\mu_{\theta}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{1-\beta_{t}}}\left(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t)\right). \tag{4}\]
[24] keep the value of \(\Sigma_{\theta}(\mathbf{x}_{t},t)\) to be constant. As a result, given a sample \(\mathbf{x}_{t}\) at time \(t\), with a trained model that can predict the noise \(\epsilon_{\theta}(\mathbf{x}_{t},t)\), we can get \(\mu_{\theta}(\mathbf{x}_{t},t)\) according to Eq. (4) to reach the \(\mathbf{x}_{t-1}\) with Equation (3) and eventually we can get to \(\mathbf{x}_{0}\).
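One reverse step can therefore be sketched as follows (our illustration; `eps_model` is the trained noise predictor, `sigma` the fixed per-step standard deviations, and `t` an integer timestep):

```python
import torch

@torch.no_grad()
def p_sample(eps_model, x_t, t, betas, alpha_bar, sigma):
    """One reverse step x_{t-1} ~ p_theta(x_{t-1} | x_t), Eqs. (3)-(4)."""
    eps = eps_model(x_t, t)
    mu = (x_t - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / (1 - betas[t]).sqrt()
    return mu if t == 0 else mu + sigma[t] * torch.randn_like(x_t)
```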
Previous studies have shown that diffusion models can achieve superior image generation quality compared to current state-of-the-art generative models [1]. Besides, there has been plenty of work on utilizing DDPMs to generate samples with desired properties, such as semantic image translation [36], high-fidelity data generation from low-density regions [44], _etc_. In this paper, we also choose the DDPM adopted in [1] as our generator.
## 4 Attribute Editing with Diffusion Models and ImageNet-E
Most previous robustness-related work has focused on the important challenges of robustness on adversarial examples [6], common corruptions [21]. They have found that higher clean accuracy usually leads to better robustness.
Figure 2: Attribute editing with DDPMs. Give an input image and its corresponding object mask, the object is firstly removed with inpainting operation to get the pure background image. Then, we leverage the diffusion process to edit the background image \(\mathbf{x}_{0}\) and object image coherently. \(\odot\) denotes the element-wise blending of these two images using the object mask. For background editing, the background complexity objective function is added during the diffusion process (Alg. 1, line \(5\)). For other object attributes editing, the object image needs to be transformed first (Alg. 2, line \(1\)).
Therefore, instead of exploring a new corruption that models may encounter in reality, we focus on model debugging in terms of object attributes, hoping to provide new insights into clean accuracy improvement. In the following, we describe our object attribute editing tool and the generated ImageNet-E dataset in detail. The whole pipeline can be found in Fig. 2.
### Object Attribute Editing with Diffusion Models
**Background editing.** Most existing corruptions conduct manipulations on the whole image, as shown in Fig. 1. Compared to adding global corruptions that may hinder the visual quality, a more likely scenario in reality is to manipulate the backgrounds to fool the model. Besides, it is shown that there exists a spurious correlation between labels and image backgrounds [12]. From this point, a background corruption benchmark is needed to evaluate the model's robustness. However, the existing background challenge dataset achieves background editing with a copy-paste operation, resulting in obvious artifacts in generated images [50]. This may leave some doubts about whether the evaluation is precise since the dataset's distribution may have changed. To alleviate this concern, we adopt the DDPM approach to incorporate background editing by adding a guiding loss that can lead to backgrounds with desired properties to make the generated images stay in/close to the original distribution. Specifically, we choose to manipulate the background in terms of texture complexity due to the hypothesis that an object should be observed more easily from simple backgrounds than from complicated ones. In general, the texture complexity can be evaluated with the gray-level co-occurrence matrix (GLCM) [16], which calculates the gray-level histogram to show the texture characteristic. However, the calculation of GLCM is non-differentiable, thus it cannot serve as the conditional guidance of image generation. We hypothesize that a complex image should contain more frequency components in its spectrum and higher amplitude indicates greater complexity. Thus, we define the objective of complexity as:
\[\mathcal{L}_{c}=\sum\left|\mathcal{A}(\mathcal{F}(\mathbf{x}))\right|, \tag{5}\]
where \(\mathcal{F}\) is the Fourier transform [45], \(\mathcal{A}\) extracts the amplitude of the input spectrum, and \(\mathbf{x}\) is the evaluated image. Since minimizing this loss helps us generate an image with the desired properties and should be conducted on \(\mathbf{x}_{0}\), we need a way of estimating a clean image \(\mathbf{x}_{0}\) from each noisy latent \(\mathbf{x}_{t}\) during the denoising diffusion process. Recall that the process estimates at each step the noise \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) added to \(\mathbf{x}_{0}\) to obtain \(\mathbf{x}_{t}\). Thus, \(\hat{\mathbf{x}}_{0}\) can be estimated via Equation (6) [1]. The whole optimization procedure is shown in Algorithm 1.
\[\hat{\mathbf{x}}_{0}=\frac{\mathbf{x}_{t}}{\sqrt{\bar{\alpha}_{t}}}-\frac{\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{\theta}(\mathbf{x}_{t},t)}{\sqrt{\bar{\alpha}_{t}}}. \tag{6}\]
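For concreteness, the complexity objective in Eq. (5) and the clean-image estimate in Eq. (6) can be written compactly in PyTorch. This is a minimal sketch, not the authors' released code; the tensor shapes and the scalar `alpha_bar_t` schedule are assumptions.

```python
import torch

def complexity_loss(x):
    # Eq. (5): sum of spectral amplitudes |A(F(x))|;
    # more high-frequency mass corresponds to a more complex texture
    return torch.abs(torch.fft.fft2(x)).sum()

def estimate_x0(x_t, eps_pred, alpha_bar_t):
    # Eq. (6): x0_hat = x_t / sqrt(abar_t) - sqrt(1 - abar_t) * eps / sqrt(abar_t)
    return (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps_pred) / alpha_bar_t ** 0.5
```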
As shown in Fig. 3(a), when we guide the generation procedure with the proposed objective towards the complex direction, it returns images with visually complex backgrounds. We also provide the GLCM dissimilarity and contrast of each image for a quantitative analysis of the generated images. A higher dissimilarity/contrast score indicates a more complex image background [16]. It can be observed that the perceived complexity is consistent with that calculated via the GLCM, indicating the effectiveness of the proposed method.
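The GLCM statistics used for this quantitative check can be computed, for instance, with scikit-image (function names follow skimage ≥ 0.19; the distance/angle choices here are illustrative assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_complexity(gray_uint8: np.ndarray):
    # co-occurrence matrix for horizontally adjacent pixel pairs
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    dissimilarity = graycoprops(glcm, "dissimilarity")[0, 0]
    contrast = graycoprops(glcm, "contrast")[0, 0]
    return dissimilarity, contrast  # higher => more complex background
```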
**Controlling object size, position and direction.** In general, the human vision system is robust to changes in position, direction, and small changes in size. Whether deep models are also robust to these object attribute changes is still unknown. Therefore, we conduct image editing with control over object size, position, and direction to find the answer. For a valid evaluation of different attributes, all other variables should remain unchanged, especially the background. Therefore, we first disentangle the object and background with the in-painting strategy provided by [54]. Specifically, we mask the object area in the input image \(\mathbf{x}\). Then we conduct in-painting to remove the object and get the pure background image \(\mathbf{x}^{b}\), as shown in Fig. 3(b), column \(3\). To realize the aforementioned object attribute control, we adopt orthogonal transformations. Denote by \(P\in\mathbb{R}^{3\times N_{o}}\) the pixel locations of the object in image \(\mathbf{x}\), where \(N_{o}\) is the number of pixels belonging to the object and \(p_{i}=[x_{i},y_{i},1]^{T}\) is the position of the object's \(i\)-th pixel. Let \([x,y,w,h]\) denote the enclosing rectangle of the object with mask \(M\), and let \(h^{\prime}\in[0,H-h]\), \(w^{\prime}\in[0,W-w]\). Then the edited image satisfies \(\mathbf{x}[T_{\text{attribute}}\cdot P]=\mathbf{x}[P]\) and \(M[T_{\text{attribute}}\cdot P]=M[P]\), where
\[T_{\text{size}}=\left[\begin{smallmatrix}s&0&\Delta x\\ 0&s&\Delta y\\ 0&0&1\end{smallmatrix}\right],\quad T_{\text{pos}}=\left[\begin{smallmatrix}1&0&w^{\prime}\\ 0&1&h^{\prime}\\ 0&0&1\end{smallmatrix}\right],\quad T_{\text{direction}}=\left[\begin{smallmatrix}\cos\theta&\sin\theta&0\\ -\sin\theta&\cos\theta&0\\ 0&0&1\end{smallmatrix}\right] \tag{7}\]
where \(s\) is the resize scale, \(\theta\) is the rotation angle, \(\Delta x=(1-s)\cdot(x+w/2)\), and \(\Delta y=(1-s)\cdot(y+h/2)\).
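As a sketch, the three transforms in Eq. (7) can be built and applied to homogeneous pixel coordinates as follows (a NumPy-based illustration with interfaces of our own, not the paper's code):

```python
import numpy as np

def T_size(s, x, y, w, h):
    # scale about the center of the enclosing rectangle, per Eq. (7)
    dx, dy = (1 - s) * (x + w / 2), (1 - s) * (y + h / 2)
    return np.array([[s, 0, dx], [0, s, dy], [0, 0, 1.0]])

def T_pos(w_shift, h_shift):
    return np.array([[1, 0, w_shift], [0, 1, h_shift], [0, 0, 1.0]])

def T_direction(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def transform_points(T, P):
    # P holds homogeneous pixel coordinates, shape (3, N_o); the transformed
    # locations receive the original pixel (and mask) values
    return (T @ P).round().astype(int)
```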
With the background image \(\mathbf{x}^{b}\) and the edited object \(\mathbf{x}^{o}\), a naive approach is to paste the object onto the corresponding area of the background image \(\mathbf{x}^{b}\) as \(M\odot\mathbf{x}^{o}+(1-M)\odot\mathbf{x}^{b}\). However, the result generated in this manner may look disharmonious, lacking a delicate adjustment to blend the two together. Besides, as shown in Fig. 3(b), column \(3\), the object-removing operation may leave some artifacts behind, failing to produce a coherent and seamless result. To deal with this problem, we leverage DDPM models to blend them at different noise levels along the diffusion process. Denote the image with the desired object attribute as \(\mathbf{x}^{o}\). Starting from the pure background image \(\mathbf{x}^{b}\) at time \(t_{0}\), at each stage we perform a guided diffusion step with a latent \(\mathbf{x}_{t}\) to obtain \(\mathbf{x}_{t-1}\) and, at the same time, obtain a noised version of the object image \(\mathbf{x}_{t-1}^{o}\). The two latents are then blended with the mask \(M\) as \(\mathbf{x}_{t-1}=M\odot\mathbf{x}_{t-1}^{o}+(1-M)\odot\mathbf{x}_{t-1}\). Since the DDPM denoising procedure may change the background, a proper initial timing is required to maintain a high resemblance to the original background. We set the number of iteration steps \(t_{0}\) to 50 and 25 in Algorithms 1 and 2, respectively.
```
input : source image \(\mathbf{x}\), mask \(M\), diffusion model \((\mu_{\theta}(\mathbf{x}_{t}),\Sigma_{\theta}(\mathbf{x}_{t}))\), \(\bar{\alpha}_{t}\), \(\lambda\), iteration steps \(t_{0}\)
output : edited image \(\mathbf{x}_{0}\)
1 \(\mathbf{x}_{t_{0}}\sim\mathcal{N}(\sqrt{\bar{\alpha}_{t_{0}}}\mathbf{x},(1-\bar{\alpha}_{t_{0}})\mathbf{I})\);
2 for \(t\gets t_{0}\) to \(1\) do
3   \(\hat{\mathbf{x}}_{0}\leftarrow\frac{\mathbf{x}_{t}}{\sqrt{\bar{\alpha}_{t}}}-\frac{\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{\theta}(\mathbf{x}_{t},t)}{\sqrt{\bar{\alpha}_{t}}}\);
4   \(\nabla_{bg}\leftarrow\nabla_{\hat{\mathbf{x}}_{0}}\mathcal{L}_{c}(\hat{\mathbf{x}}_{0})\);
5   \(\mathbf{x}_{t-1}^{b}\sim\mathcal{N}(\mu_{\theta}(\mathbf{x}_{t})+\lambda\Sigma_{\theta}(\mathbf{x}_{t})\nabla_{bg},\Sigma_{\theta}(\mathbf{x}_{t}))\);
6   \(\mathbf{x}^{o}\sim\mathcal{N}(\sqrt{\bar{\alpha}_{t}}\mathbf{x},(1-\bar{\alpha}_{t})\mathbf{I})\);
7   \(\mathbf{x}_{t-1}\gets M\odot\mathbf{x}^{o}+(1-M)\odot\mathbf{x}_{t-1}^{b}\);
8 end for
```
**Algorithm 1** Background editing
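A condensed PyTorch rendition of Algorithm 1 might look as follows, reusing `complexity_loss` and `estimate_x0` from above. The `model` interface returning \((\mu,\Sigma,\epsilon)\) per step is a hypothetical simplification, and the guidance gradient is taken through the \(\hat{\mathbf{x}}_{0}\) estimate with respect to \(\mathbf{x}_{t}\).

```python
import torch

def background_edit(x, M, model, alpha_bar, lam, t0=50):
    # line 1: jump directly to the noise level at t0
    x_t = alpha_bar[t0] ** 0.5 * x + (1 - alpha_bar[t0]) ** 0.5 * torch.randn_like(x)
    for t in range(t0, 0, -1):
        x_t = x_t.detach().requires_grad_(True)
        mu, sigma, eps = model(x_t, t)                      # hypothetical interface
        x0_hat = estimate_x0(x_t, eps, alpha_bar[t])        # line 3, Eq. (6)
        grad = torch.autograd.grad(complexity_loss(x0_hat), x_t)[0]  # line 4
        # line 5: guided sample of the background latent
        x_b = mu + lam * sigma * grad + sigma ** 0.5 * torch.randn_like(x_t)
        # line 6: matching noisy version of the source image
        x_o = alpha_bar[t] ** 0.5 * x + (1 - alpha_bar[t]) ** 0.5 * torch.randn_like(x)
        x_t = M * x_o + (1 - M) * x_b                       # line 7: blend
    return x_t.detach()
```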
### ImageNet-E dataset
With the tool above, we conduct object attribute editing, including background, size, direction, and position changes, based on the large-scale ImageNet dataset [41] and ImageNet-S [11], which provides mask annotations. To guarantee dataset quality, we choose the animal classes among the ImageNet classes, such as dogs, fishes, and birds, since they appear more often in nature without messy backgrounds. Classes such as stove and mortarboard are removed. Finally, our dataset consists of \(47872\) images spanning \(373\) classes, based on 4352 initial images to each of which 11 transforms are applied. Detailed information can be found in Appendix A. For background editing, we choose three levels: \(\lambda=-20\), \(\lambda=20\), and \(\lambda=20\)-adv, the last of which uses adversarial guidance (see Appendix B for details) in place of the complexity objective. A larger \(\lambda\) indicates stronger guidance towards high complexity. For object size, we design four levels in terms of the object pixel rate (\(=\text{sum}(M>0.5)/\text{sum}(M\geq 0)\)): \([\text{Full},0.1,0.08,0.05]\), where 'Full' indicates making the object as large as possible while keeping its whole body inside the image. Smaller rates indicate smaller objects. For object position, we find that some objects occupy a high pixel rate in the whole image, resulting in a small \(H-h\). Take the first picture in Fig. 3 for example: the dog is large, so changing its position would make little visual difference. Thus, we adopt the data whose pixel rate is 0.05 as the initial images for the position-changing operation.
In contrast to benchmarks like ImageNet-C [21], which provide images from different domains so that model robustness in those situations can be assessed, our effort aims to provide an editable image tool for model debugging with in-distribution (ID) data, in order to identify specific shortcomings of different models and provide insights for improving clean accuracy. Thus, the data distribution should not differ much from the original ImageNet. We choose the out-of-distribution (OOD) detection methods Energy [33] and GradNorm [26] to evaluate whether our editing tool moves the edited images out of their original distribution. These OOD detection methods aim to distinguish OOD examples from ID examples.
Figure 3: (a) Images generated with the proposed background complexity editing method. (b) Edited images with size changes. The Fréchet inception distance (FID) for pasting is 50.64, while it is 32.59 for ours, indicating the effectiveness of leveraging DDPMs.
The results are shown in Fig. 4. The \(x\)-axis is the ID score in terms of the quantities used in Energy and GradNorm, and the \(y\)-axis is the frequency of each ID score. A high ID score indicates that the detection method regards the input sample as ID data. Compared to other datasets, our method barely changes the data distribution under both the Energy (first row) and GradNorm (second row) evaluation methods. Besides, the Fréchet inception distance (FID) [23] for our ImageNet-E is 15.57 under the random background setting, while it is 34.99 for ImageNet-9 (background challenge). This all implies that our editing tool preserves proximity to the original ImageNet and can thus give a controlled evaluation of object attribute changes. To find out whether the DDPM itself induces some degradation in our evaluation, we conducted the experiment in Tab. 1 with the setting \(\lambda=0\) during background editing; this operation first adds noise to the original images and then denoises them. As the "Inver" column shows, this degradation is negligible compared to the degradation induced by attribute changes.
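As a reference for how the ID score on the \(x\)-axis can be obtained, the Energy score of [33] is simply a temperature-scaled log-sum-exp over the classifier logits (a sketch; \(T=1\) is the common default):

```python
import torch

def energy_id_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # negative free energy; larger values indicate more in-distribution inputs
    return T * torch.logsumexp(logits / T, dim=-1)
```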
## 5 Experiments
We conduct evaluation experiments on various architectures, including both CNNs (ResNet (RN) [19], DenseNet [25], EfficientNet (EF) [47], ResNeSt [53], ConvNeXt [35]) and transformer-based models (Vision Transformer (ViT) [9], Swin Transformer (Swin) [34]). Other state-of-the-art models trained with extra data, such as CLIP [39] and EfficientNet-L2-Noisy-Student [51], are also evaluated in the Appendix. Apart from different sizes of these models, we also evaluate their adversarially trained versions for a comprehensive study. We report the drop in top-1 accuracy as the metric, based on the idea that attribute changes should have little influence on a robustly trained model. More experimental details and top-1 accuracy results can be found in the Appendix.
### Robustness evaluation
**Normally trained models.** To find out whether the widely used models in computer vision have gained robustness against changes in different object attributes, we conduct extensive experiments on different models. As shown in Tab. 1, when only the background is edited towards high complexity, the average drop in top-1 accuracy is 9.23% (\(\lambda=20\)). This indicates that most models are sensitive to changes in object background. Other attribute changes, such as size and position, can also degrade model performance. For example, when changing the object pixel rate to \(0.05\), as shown in Fig. 1, row \(4\) of the 'size' column, we can still recognize the image correctly, yet the performance drop is 18.34% on average. We also find that robustness under different object attributes improves along with improvements in clean accuracy (Original) across models. Accordingly, a switch from RN50 (92.69% top-1 accuracy) to Swin-S (96.21%) decreases the average accuracy drop from 15.72% to 10.20%. By this measure, models have become more and more capable of generalizing to different backgrounds, which implies that they indeed learn some robust features. This suggests that object attribute robustness can be a good way to measure future progress in representation learning. We also observe that larger networks possess better robustness to attribute editing. For example, swapping Swin-S (96.21% top-1 accuracy) for the larger Swin-B (95.96% top-1 accuracy) decreases the dropped accuracy from 10.20% to 8.99% when \(\lambda=20\). In a similar fashion, ConvNeXt-T (9.32% drop) is less robust than the larger ConvNeXt-B (7.26%). Consequently, models with even more depth, width, and feature aggregation may attain further attribute robustness. Previous studies [30] have shown that zero-shot CLIP exhibits better out-of-distribution robustness than fine-tuned CLIP, which is the opposite of what we observe on ImageNet-E, as shown in Tab. 1. This may serve as evidence that ImageNet-E stays close to ImageNet. We also find that, compared with a fully
Figure 4: Distributions of ID scores for different datasets in terms of the quantities used in Energy (first row) and GradNorm (second row), for in-distribution (ImageNet) and other datasets. Higher overlap indicates greater proximity to ImageNet.
supervised model trained with the same backbone (ViT-B), CLIP fails to show better attribute robustness. We conjecture this is because CLIP has spared some capacity for OOD robustness.
**Adversarially trained models.** Adversarial training [42] is one of the state-of-the-art methods for improving the adversarial robustness of deep models and has been widely studied [2]. To find out whether it can also boost attribute robustness, we conduct extensive experiments across different architectures and perturbation budgets (constraints on the \(l_{2}\) norm bound). As shown in Fig. 5, the adversarially trained models are not robust to attribute changes, in both the background and size-changing settings, and their accuracy drops are much greater than those of normally trained models. As the perturbation budget grows, the situation gets worse. This indicates that adversarial training can harm robustness to attribute changes.
### Robustness enhancements
Based on the above evaluations, we take a step further and explore ways to enhance attribute robustness in terms of preprocessing, network design, and training strategies. More details, including training settings and numerical results, can be found in Appendix C.5.
**Preprocessing.** Given that an object can be inconspicuous due to its small size or subtle position, viewing an object at several different locations may lead to a more stable prediction. With this intuition in mind, we apply the classical Ten-Crop strategy to find out whether this operation helps boost robustness. The Ten-Crop operation crops all four corners and the center of the input image; we average the predictions of these crops, together with their horizontal mirrors, as the final result. We find this operation contributes a 0.69% and 1.24% average boost in top-1 accuracy in the background and size change scenarios, respectively.
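A minimal torchvision sketch of this Ten-Crop averaging (the crop size and the absence of input normalization are simplifying assumptions):

```python
import torch
from torchvision import transforms
from torchvision.transforms import functional as TF

ten_crop = transforms.TenCrop(224)  # 4 corners + center, plus their mirrors

@torch.no_grad()
def tencrop_predict(model, pil_image):
    crops = torch.stack([TF.to_tensor(c) for c in ten_crop(pil_image)])
    probs = model(crops).softmax(dim=-1)   # (10, num_classes)
    return probs.mean(dim=0)               # average the ten predictions
```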
**Network designs.** Intuitively, a robust model should focus more on the object of interest than on the background. Therefore, recent models enhance their designs with attention modules; ResNeSt [53] is a representative example. ResNeSt is a modularized architecture that applies channel-wise attention across different network branches to capture cross-feature interactions and learn diverse representations. As it has achieved a great boost on the ImageNet dataset, it also shows superiority on ImageNet-E compared to ResNet. For example, a switch from RN50 to ResNeSt50 decreases the average dropped accuracy from 15.72% to 12.57%. This indicates that channel-wise attention modules can be a good choice for improving attribute robustness. Another representative model is the vision transformer, which consists of multiple self-attention modules. To study whether incorporating a transformer-style self-attention architecture into the model design helps attribute robustness generalization, we establish a hybrid architecture by directly feeding the output of the res_3 block of RN50 into ViT-S as the input feature, as in [3]. The dropped accuracy decreases by 1.04% compared to the original RN50, indicating the effectiveness of self-attention-like architectures.
**Training strategy.** a) _Robust training._ There have been plenty of studies on robust training strategies for improving model robustness. To find out whether these works boost robustness on our dataset, we evaluate several state-of-the-art models, including SIN [13], Debiased-CNN [31], Augmix [22], ANT [40], DeepAugment [20], and a model trained with many standard augmentations (RN50-T) [48]. As shown in Tab. 2, apart from RN50-T, the Augmix model shows the best performance in the background change scenario, while the Debiased model performs best in the object size change scenario. What we find unexpected is the performance of SIN. The SIN method features a novel data augmentation scheme in which ImageNet images are stylized with style transfer as training data, to force the model to rely less on textural cues for classification. Though SIN improves robustness on ImageNet-C (mCE 69.32%) compared to its vanilla model (mCE 76.7%), it fails to improve robustness in either the background or the size-changing scenario. The drops in top-1 accuracy for vanilla RN50 and RN50-SIN are 21.26% and 24.23%, respectively, when the object size rate is 0.05, even though they share similar accuracy on the original ImageNet. This indicates that existing benchmarks cannot reflect real robustness to object attribute changes; a dataset like ImageNet-E is therefore necessary for comprehensive evaluation of deep models. b) _Masked image modeling._ Considering that masked image modeling has demonstrated impressive results in self-supervised representation learning by recovering corrupted image patches [4], it may be robust to attribute changes. Therefore, we choose the Masked AutoEncoder (MAE) [17] as the training strategy, since its objective is to recover images from only 25% of the patches. Specifically, we adopt the MAE training strategy with a ViT-B backbone and then fine-tune it with the ImageNet
Figure 5: Comparisons between vanilla models and adversarially trained models across different architectures in terms of size changes (left). Evaluation of adversarial models (RN50) trained with different perturbation budgets is provided in the right figure.
training data. We find that robustness is indeed improved: for example, the dropped accuracy decreases from 10.62% to 9.05% on average compared to the vanilla ViT-B.
### Failure case analysis
To explore why some robustly trained models fail, we leverage LayerCAM [28] to generate heat maps for different models, including vanilla RN50, RN50+SIN, and RN50+Debiased. As shown in Fig. 6, the heat map of the Debiased model aligns better with the objects in the image than that of the original model. Interestingly, the SIN model sometimes makes wrong predictions even when its attention is on the main object. We suspect that the SIN model relies too much on shape: for example, the 'sea urchin' looks like an 'acorn' given the shadow, but its texture clearly indicates that it is a sea urchin. In contrast, the Debiased model, which is trained to focus on both shape and texture, recognizes it correctly. More studies can be found in Appendix C.4.
### Model repairing
To validate that evaluation on ImageNet (IN)-E can provide insights into a model's applicability and enhancement, we conduct a toy model repairing experiment. The previous evaluation shows that ResNet50 is vulnerable to background changes. Based on this observation, we randomly replace the backgrounds of objects with others during training and obtain a validation accuracy boost from 77.48% to 79.00%. Note that this improvement is not small, as only 8781 training images with mask annotations are available in ImageNet. We also take a step further to find out whether the improved model gains a boost in OOD robustness, as shown in Tab. 3. It can be observed that, with the insights provided by the evaluation on ImageNet-E, one can explore a model's attribute vulnerabilities and manage to
\begin{table}
\begin{tabular}{c|c|c c c c c|c c c c|c|c|c}
\hline
\multirow{2}{*}{Models} & \multirow{2}{*}{Original} & \multicolumn{5}{c|}{Background changes} & \multicolumn{4}{c|}{Size changes} & Position & Direction & \multirow{2}{*}{Avg.} \\
\cline{3-7} \cline{8-11}
 & & Inver & \(\lambda=-20\) & \(\lambda=20\) & \(\lambda=20\)-adv & Random & Full & 0.1 & 0.08 & 0.05 & rp & rd & \\
\hline \hline
RN50 & 92.69\% & 1.97\% & 7.30\% & 13.35\% & 29.92\% & 13.34\% & 2.71\% & 7.25\% & 10.51\% & 21.26\% & 26.46\% & 25.12\% & 15.72\% \\
DenseNet121 & 92.10\% & 1.49\% & 6.29\% & 9.00\% & 29.20\% & 12.43\% & 3.50\% & 7.00\% & 10.68\% & 21.55\% & 26.53\% & 23.64\% & 14.98\% \\
EF-B0 & 92.85\% & 1.07\% & 7.10\% & 10.71\% & 34.88\% & 15.64\% & 3.03\% & 8.00\% & 11.57\% & 23.28\% & 27.91\% & 19.15\% & 16.12\% \\
ResNeSt50 & 93.83\% & 1.44\% & 6.33\% & 8.98\% & 26.62\% & 11.28\% & 2.53\% & 5.27\% & 8.01\% & 18.03\% & 21.37\% & 17.32\% & 12.57\% \\
ViT-S & 94.14\% & **0.82\%** & 6.42\% & 8.98\% & 31.12\% & 13.06\% & **0.80\%** & 5.37\% & 8.95\% & 17.37\% & 22.86\% & 17.13\% & 13.17\% \\
Swin-S & **96.21\%** & 1.13\% & 5.18\% & 7.33\% & 23.50\% & 9.31\% & 1.27\% & 4.21\% & 6.29\% & 14.16\% & 17.35\% & **13.42\%** & 10.20\% \\
ConvNeXt-T & 96.07\% & 1.43\% & **4.69\%** & **6.26\%** & **19.83\%** & **7.93\%** & 1.75\% & **3.28\%** & **5.18\%** & **12.76\%** & **15.71\%** & 15.78\% & **9.32\%** \\
\hline \hline
RN101 & 94.00\% & 2.11\% & 7.05\% & 11.62\% & 29.47\% & 13.57\% & 2.57\% & 6.81\% & 10.12\% & 26.05\% & 25.85\% & 24.42\% & 15.21\% \\
DenseNet169 & 92.37\% & 1.12\% & 5.81\% & 8.43\% & 27.51\% & 11.61\% & 2.25\% & 6.90\% & 10.41\% & 20.59\% & 24.93\% & 20.68\% & 13.91\% \\
EF-B3 & 94.97\% & 1.87\% & 7.77\% & 8.40\% & 29.90\% & 12.92\% & 1.36\% & 6.80\% & 10.16\% & 21.36\% & 24.98\% & 17.24\% & 14.09\% \\
ResNeSt101 & 95.54\% & 1.10\% & 5.58\% & 6.65\% & 23.03\% & 10.40\% & 1.35\% & 3.97\% & 6.53\% & 15.44\% & 19.11\% & 14.31\% & 10.64\% \\
ViT-B & 95.38\% & 0.83\% & 5.32\% & 8.43\% & 26.60\% & 10.98\% & **0.62\%** & 4.00\% & 6.30\% & 14.51\% & 18.82\% & 14.95\% & 11.05\% \\
Swin-B & 95.96\% & 0.79\% & 4.46\% & 6.23\% & 21.44\% & 8.25\% & 0.99\% & 3.16\% & 5.04\% & 12.34\% & 13.35\% & **12.60\%** & **8.99\%** \\
ConvNeXt-B & **96.42\%** & **0.69\%** & **3.75\%** & **4.86\%** & **16.49\%** & **6.04\%** & 9.09\% & **2.25\%** & **3.36\%** & **9.47\%** & **12.40\%** & 13.01\% & **7.26\%** \\
\hline
CLIP-zeroshot & 80.01\% & 4.88\% & 11.56\% & 15.28\% & 36.14\% & 20.09\% & 3.33\% & 12.67\% & 15.77\% & 25.31\% & 28.87\% & 21.57\% & 19.06\% \\
CLIP-finetuned & 93.68\% & 2.17\% & 9.82\% & 11.83\% & 38.33\% & 18.19\% & 9.06\% & 9.25\% & 12.67\% & 23.32\% & 28.56\% & 22.00\% & 18.30\% \\
\hline
\end{tabular}
\end{table}
Table 1: Evaluations of different state-of-the-art models in terms of top-1 accuracy and the corresponding dropped accuracy under background changes, size changes, random position (rp), and random direction (rd).
\begin{table}
\begin{tabular}{c|c|c c c c|c c c c|c} \hline \multirow{2}{*}{Architectures} & \multirow{2}{*}{Ori} & \multicolumn{6}{c|}{Background changes} & \multicolumn{6}{c|}{Size changes} & Position & Direction & \multirow{2}{*}{Avg.} \\ \cline{3-3} \cline{5-16} & & Inver & \(\lambda=-20\) & \(\lambda=20\) & \(\lambda=20\)-adv & Random & Full & 0.1 & 0.08 & 0.05 & rp & rd \\ \hline RN50 & 92.69\% & 1.97\% & 7.30\% & 13.35\% & 29.92\% & 13.34\% & 2.71\% & 7.25\% & 10.51\% & 21.26\% & 26.46\% & 25.12\% & 15.72\% \\ RN50-Adversarial & 81.96\% & **0.66\%** & **4.75\%** & 13.62\% & 37.87\% & 15.25\% & 4.87\% & 9.62\% & 13.94\% & 25.51\% & 31.96\% & 18.99\% \\ RN50-SIN & 91.57\% & 2.23\% & 7.61\% & 12.19\% & 33.16\% & 13.58\% & 1.68\% & 8.30\% & 12.60\% & 24.23\% & 29.16\% & 27.24\% & 16.98\% \\ RN50-Debiased & 93.34\% & 1.43\% & 6.09\% & 11.45\% & 27.99\% & 12.12\% & 1.98\% & 5.53\%
repair the model and get a performance boost accordingly.
## 6 Conclusion and Future work
In this paper, we put forward an image editing toolkit that can smoothly control object attributes. With this tool, we create a new dataset, ImageNet-E, that can serve as a general benchmark for robustness against changes in different object attributes. Extensive evaluations of different state-of-the-art models show that most models are vulnerable to attribute changes, especially the adversarially trained ones. Meanwhile, some robustly trained models can show worse results than vanilla models even when they achieve a great robustness boost on other robustness benchmarks. We further discover ways to enhance robustness through preprocessing, network design, and training strategies.
**Limitations and future work.** This paper proposes to edit object attributes in terms of backgrounds, sizes, positions, and directions. The annotated mask of the object of interest is therefore required, which is a limitation of our method. Besides, since our editing toolkit is built on diffusion models, its generalization ability is determined by DDPMs; for example, we find that synthesizing high-quality person images is difficult for DDPMs. Considering both mask annotation availability and data quality, our ImageNet-E is a compact test set. In future work, we would like to explore how to leverage the edited data to enhance model performance, including both validation accuracy and robustness.
\begin{table}
\begin{tabular}{c|c|c c c c c c}
\hline \hline
Models & IN & IN-v2 & IN-A & IN-C\(\downarrow\) & IN-R & IN-Sketch & IN-E \\
\hline
RN50 & 77.5 & 65.7 & 6.5 & 68.6 & 39.6 & 27.5 & 83.7 \\
RN50-repaired & **79.0** & **67.2** & **9.4** & **65.8** & **40.7** & **29.4** & **85.0** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Model repairing results. Top-1 accuracy (%) is reported except for IN-C, which is mCE (mean Corruption Error). Higher top-1 accuracy and lower mCE indicate better performance. IN-E reports the average accuracy on ImageNet-E. |
2310.04878 | Hybrid Recommendation System using Graph Neural Network and BERT
Embeddings | Recommender systems have emerged as a crucial component of the modern web
ecosystem. The effectiveness and accuracy of such systems are critical for
providing users with personalized recommendations that meet their specific
interests and needs. In this paper, we introduce a novel model that utilizes a
Graph Neural Network (GNN) in conjunction with sentence transformer embeddings
to predict anime recommendations for different users. Our model employs the
task of link prediction to create a recommendation system that considers both
the features of anime and user interactions with different anime. The
hybridization of the GNN and transformer embeddings enables us to capture both
inter-level and intra-level features of anime data.Our model not only
recommends anime to users but also predicts the rating a specific user would
give to an anime. We utilize the GraphSAGE network for model building and
weighted root mean square error (RMSE) to evaluate the performance of the
model. Our approach has the potential to significantly enhance the accuracy and
effectiveness of anime recommendation systems and can be extended to other
domains that require personalized recommendations. | Shashidhar Reddy Javaji, Krutika Sarode | 2023-10-07T17:24:41Z | http://arxiv.org/abs/2310.04878v1 | # Hybrid Recommendation System using Graph Neural Network and BERT Embeddings
###### Abstract
Recommender systems have emerged as a crucial component of the modern web ecosystem. The effectiveness and accuracy of such systems are critical for providing users with personalized recommendations that meet their specific interests and needs. In this paper, we introduce a novel model that utilizes a Graph Neural Network (GNN) in conjunction with sentence transformer embeddings to predict anime recommendations for different users. Our model employs the task of link prediction to create a recommendation system that considers both the features of anime and user interactions with different anime. The hybridization of the GNN and transformer embeddings enables us to capture both inter-level and intra-level features of anime data. Our model not only recommends anime to users but also predicts the rating a specific user would give to an anime. We utilize the GraphSAGE network for model building and weighted root mean square error (RMSE) to evaluate the performance of the model. Our approach has the potential to significantly enhance the accuracy and effectiveness of anime recommendation systems and can be extended to other domains that require personalized recommendations.
## 1 Introduction
Recommendation systems are algorithms that suggest items to users based on their past behavior. They are used in a variety of applications, such as online shopping, music streaming, and social media. There are two main types of recommendation systems: collaborative filtering and content-based filtering. Collaborative filtering systems recommend items to users based on the ratings or preferences of other users. For example, if you have rated a number of movies on Netflix, the collaborative filtering system will recommend other movies that other users with similar ratings have also enjoyed.
Content-based filtering systems recommend items to users based on the content of the items themselves. For example, if you have listened to a number of songs by a particular artist, the content-based filtering system will recommend other songs by the same artist. In recent years, there has been a trend towards using hybrid recommendation systems that combine the strengths of collaborative filtering and content-based filtering. These systems can provide more accurate recommendations than either type of system on its own.
There are a number of different ways to build recommendation systems. One common approach is to use machine learning algorithms. Machine learning algorithms can be trained on large datasets of user ratings or preferences to learn how to predict which items a user will like. Another approach to building recommendation systems is to use artificial intelligence (AI) techniques. AI techniques, such as deep learning, can be used to create more complex and powerful recommendation systems. Recommendation systems have become an integral part of our daily lives, aiding us in making informed decisions about the products and services we use. The success of these systems can be attributed to their ability to filter and personalize vast amounts of information, making it easier for
users to find relevant and useful items. However, the increasing complexity and heterogeneity of data have made it challenging to develop accurate and efficient recommendation systems.
In recent years, graph neural networks (GNNs) have emerged as a promising solution to this problem, allowing us to incorporate relational data into our recommendation models. GNNs can effectively capture the inherent structure and dependencies in the data, enabling us to make more accurate and personalized recommendations. Graph Neural Networks (GNNs) have emerged as a powerful approach to solving problems in the domain of recommendation systems. Recommendation systems aim to recommend items to users that are relevant and useful to them, based on their past behavior and preferences. GNNs can help in creating better recommendations by modeling the complex relationships between users and items in a graph-based representation. One of the key challenges in recommendation systems is the sparsity of the data. In many cases, users may have only interacted with a small subset of items, and the available data may not be sufficient to learn accurate models. GNNs can help address this challenge by leveraging the graph structure of the data to propagate information from observed to unobserved nodes.
GNNs can be used in both content-based and collaborative filtering approaches to the recommendation. In a content-based approach, GNNs can be used to model the features of the items and users and create recommendations based on the similarity between their embeddings. In a collaborative filtering approach, GNNs can be used to model the interactions between users and items in a graph, and create recommendations based on the relationships between the nodes.
One of the popular approaches for GNN-based recommendation is GraphSAGE. GraphSAGE is a variant of GNN that aggregates information from neighboring nodes to generate node embeddings. In GraphSAGE, each node is assigned an initial feature vector, and these features are updated iteratively by aggregating information from the node's neighbors. The aggregated features are then passed through a neural network layer to generate a new embedding for the node. In the context of recommendation, GraphSAGE can be used to generate embeddings for both users and items. The model can be trained to predict the likelihood of a user interacting with an item, based on the embeddings of the user and item. The learned embeddings can then be used to generate recommendations for users.
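For reference, a single GraphSAGE layer with the common mean aggregator updates the embedding of node \(v\) as

\[\mathbf{h}_{v}^{(k)}=\sigma\!\left(\mathbf{W}^{(k)}\cdot\big[\mathbf{h}_{v}^{(k-1)}\,\|\,\mathrm{mean}_{u\in\mathcal{N}(v)}\mathbf{h}_{u}^{(k-1)}\big]\right),\]

where \(\mathcal{N}(v)\) is the (sampled) neighborhood of \(v\), \(\|\) denotes concatenation, and \(\sigma\) is a non-linearity; stacking \(k\) such layers propagates information from \(k\)-hop neighborhoods.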
To improve the performance of the recommendation system, additional features can be incorporated into the model. For example, in the case of movie recommendations, features such as the genre and the synopsis of the movie can be used to augment the embeddings of the movies. Similarly, features such as the age and gender of the user can be used to augment the embeddings of the users. Overall, GNNs have shown great promise in the domain of recommendation systems and can help in creating more accurate and personalized recommendations for users. With the availability of large amounts of data and the increasing interest in personalized recommendations
The rest of the paper is organized as follows: Section 2 provides a brief overview of related work. Section 3 describes the dataset and the pre-processing steps used to prepare the data. Section 4 presents the proposed model in detail. Section 5 presents the experimental setup and results. Finally, Section 6 concludes the paper with a summary of the contributions and directions for future work.
## 2 Related Work
Recommender systems have been widely used to provide personalized recommendations to users. Collaborative filtering (CF) is a popular technique that utilizes users' past behavior to make recommendations. Matrix factorization, a type of CF algorithm, decomposes the user-item interaction matrix into two lower-dimensional matrices to represent users and items. The regularization weights of the latent factors can be assigned based on items' popularity and users' activeness, which can improve the prediction results of the matrix factorization technique. [4]
The paper on graph neural networks in recommender systems provides a survey of various graph-based techniques for recommender systems, including GCNs, GATs, and GAEs. The paper discusses how these techniques can be used to handle cold-start problems, incorporate side information, and enhance recommendation accuracy. [5] Graph-based models have become increasingly popular in recent years for their ability to handle complex interactions between users and items. The linear residual graph convolutional network approach for CF-based recommender systems revisits GCNs in CF models and shows that removing non-linearities can enhance recommendation performance. The
proposed model uses a residual network structure that is specifically designed for CF with user-item interaction modeling, which alleviates the over-smoothing problem in graph convolution aggregation operation with sparse data. [3]
The graph-based hybrid recommendation system (GHRS) combines content-based and collaborative filtering approaches to extract new features based on users' ratings, demographic, and location information. These features are then used for clustering users, which improves recommendation accuracy and dominates other methods' performance in the cold-start problem. The experimental results on the MovieLens dataset show that the proposed algorithm outperforms many existing recommendation algorithms on recommendation accuracy. [1]
Inductive matrix completion is another popular approach to building recommender systems that can handle the cold-start problem. The paper on learning to transfer graph embeddings for inductive graph-based recommendation proposes a transfer learning framework for personalized video highlight recommendation. The proposed framework is composed of two parts: a graph neural network that exploits the higher-order proximity between users and segments to alleviate the user cold-start problem and an item embedding transfer network that approximates the learned item embeddings from graph neural networks. [2]
Matrix factorization, specifically, is a widely used technique in recommender systems that utilizes users' past behavior, such as ratings or purchases, to make recommendations. One of the most popular CF algorithms is matrix factorization, which decomposes the user-item interaction matrix into the product of two lower dimensionality rectangular matrices, user and item embeddings, that represent users and items in a lower-dimensional space. The regularization weights of the latent factors can be assigned based on items' popularity and users' activeness, which can improve the prediction results of the matrix factorization technique. The paper on matrix factorization techniques for recommender systems provides a foundational understanding of collaborative filtering and matrix factorization for building recommender systems. [4]
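Concretely, the matrix factorization model described in [4] predicts a rating as the inner product of user and item embeddings and minimizes a regularized squared error over the set \(\mathcal{K}\) of observed interactions, where the regularization weights \(\lambda_{u},\lambda_{i}\) can be assigned based on user activeness and item popularity:

\[\hat{r}_{ui}=\mathbf{p}_{u}^{\top}\mathbf{q}_{i},\qquad\min_{\mathbf{p},\mathbf{q}}\;\sum_{(u,i)\in\mathcal{K}}\big(r_{ui}-\mathbf{p}_{u}^{\top}\mathbf{q}_{i}\big)^{2}+\lambda_{u}\|\mathbf{p}_{u}\|^{2}+\lambda_{i}\|\mathbf{q}_{i}\|^{2}.\]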
In summary, the related papers cover various techniques for building recommender systems, including matrix factorization, graph-based models, inductive matrix completion, and transfer learning. These papers provide further insights into the use of these techniques in recommender systems and how they can be used to handle cold-start problems, incorporate side information, and enhance recommendation accuracy.
## 3 Dataset
The Anime Recommendation Database 2020 is a dataset available on Kaggle, containing information about anime and user interactions from the website MyAnimeList. The dataset was created by scraping the website and contains recommendation data from 320,000 users and 16,000 animes.
The dataset is comprised of two main tables: the anime table and the rating table. The anime table contains information about each anime, including its ID, name, genre, type, episodes, and synopsis. The genre field is a list of genres associated with the anime, such as "Action", "Comedy", "Drama", and "Fantasy". The type field indicates whether the anime is a TV series, movie, OVA, or other format. The episodes field indicates the number of episodes in the series. The synopsis field provides a brief description of the anime's plot.
The rating table contains information about user interactions with the animes, including the user ID, the anime ID, and the user's rating for the anime on a scale of 1 to 10. The dataset also includes a timestamp field indicating the time when the user rated the anime.
The dataset contains a total of 78,460,895 user-anime interactions, with an average of 4.9 ratings per user. The most popular anime in the dataset is "Death Note", with over 150,000 ratings. The dataset is useful for building recommendation systems for anime, as it contains information about both the animes and user preferences.
### Preprocessing
The dataset used in this research consists of two primary data sources: the "anime with synopsis" and "rating complete" files, which were merged to obtain relevant columns for the model. Specifically, the dataset includes anime id, user id, synopsis, genres, and rating. Prior to analysis, the dataset
underwent a preprocessing step which involved data cleaning to remove rows with null values in any column. One-hot encoding was also applied to the genres column in order to transform the categorical variable into a numerical format suitable for analysis.
Furthermore, two dictionaries were created to map the user IDs and anime IDs in the dataset. These dictionaries were used to facilitate the analysis and interpretation of the data. Overall, the resulting dataset is suitable for research on anime recommendation systems and provides a robust foundation for the development and evaluation of machine learning algorithms for this purpose.
We created three classes, SequenceEncoder, IdentityEncoder, and GenresEncoder, which encode different types of data into PyTorch tensors. These classes are used to load and process node and edge data for a graph-based recommendation system. The SequenceEncoder class encodes text data using a SentenceTransformer model: the input is a Pandas dataframe, and the output is a PyTorch tensor representing the sentence embeddings. The IdentityEncoder class converts raw column values to PyTorch tensors, and the GenresEncoder class encodes genre information from the raw data. The load_node_csv function uses these encoders to process the node data, concatenating the resulting tensors into a single tensor.
The load_edge_csv function loads edge data and generates labels for each edge. It takes two arguments, ratings_user_id and ratings_movie_id, the user and movie IDs for each rating. It then generates edge labels by looking up the corresponding ratings in a dictionary user_anime_rating and returns a PyTorch tensor containing the edge labels. Overall, this shows how the dataset is preprocessed before being fed into the graph-based recommendation system: the SequenceEncoder, IdentityEncoder, and GenresEncoder classes encode different types of data into PyTorch tensors, which are concatenated into a single tensor by load_node_csv, while load_edge_csv loads edge data and generates the label for each edge, completing the dataset preprocessing pipeline.
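A minimal sketch of how two of these encoders might look (the sentence-transformer model name and the genre separator are assumptions, not details given above):

```python
import torch
from sentence_transformers import SentenceTransformer

class SequenceEncoder:
    """Encodes a text column (e.g., the synopsis) into sentence embeddings."""
    def __init__(self, model_name="all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name)

    @torch.no_grad()
    def __call__(self, df):
        return torch.from_numpy(self.model.encode(df.values))

class GenresEncoder:
    """One-hot encodes a separator-delimited genres column."""
    def __init__(self, sep=", "):
        self.sep = sep

    def __call__(self, df):
        genres = sorted({g for row in df.values for g in row.split(self.sep)})
        index = {g: i for i, g in enumerate(genres)}
        x = torch.zeros(len(df), len(genres))
        for i, row in enumerate(df.values):
            for g in row.split(self.sep):
                x[i, index[g]] = 1.0
        return x
```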
## 4 Proposed Methodology
In an anime recommendation system, the features used for node creation can have a significant impact on the performance of the system. One common approach is to use genres as the features for each anime. Genres are categorical variables that can be one-hot encoded and used to represent the anime's content. This approach is straightforward and easy to implement, but it has some limitations.
Figure 1: Bar graph of ratings given by each user
One limitation is that genres alone may not capture the complexity and nuances of the anime. For example, two anime could have the same genres, but one could be a comedy with a light-hearted tone while the other could be a dark psychological thriller. In this case, relying solely on genres may not differentiate between the two anime and could lead to poor recommendations.
To overcome this limitation, we can combine the genres with the sentence embeddings of the synopsis. The synopsis is a brief summary of the anime's plot, and it can provide additional information about the anime's content and style. By using sentence embeddings, we can capture the meaning and context of the synopsis, which can help to differentiate between anime with similar genres.
To do this, we first preprocess the synopsis by removing stop words, punctuation, and other irrelevant information. We then use a pre-trained sentence embedding model such as BERT or GloVe to generate embeddings for each sentence in the synopsis. We can then average these embeddings to obtain a single embedding for the entire synopsis. We can then concatenate the one-hot encoded genres with the synopsis embedding to create a feature vector for each anime. This feature vector captures both the categorical information about the anime's genres and the semantic information about the anime's content and style. Once we have the feature vectors for each anime, we can use them to create nodes in the graph. We can then use graph neural networks (GNNs) to learn the representations of these nodes and generate recommendations based on the learned representations. Compared to using genres alone, combining genres with the synopsis embeddings can lead to more accurate and personalized recommendations. This approach can capture the complex and nuanced content of the anime and provide better differentiation between anime with similar genres. Additionally, this approach can be extended to incorporate other textual features such as reviews or user feedback, which can further improve the recommendations.
The Model class inherits from the PyTorch Module class, which provides a convenient way to define a neural network model. The __init__ method defines the components of the model and initializes their parameters. The forward method defines the computation that will be performed by the model when it is run on input data. The GNNEncoder class is a custom implementation of a GNN encoder that takes as input a set of node features and edge connections and outputs a set of node embeddings. The hidden_channels argument specifies the dimensionality of the node embeddings. The GNNEncoder class is defined in a separate file and is not shown in the code snippet provided.
```
HeteroData(
  user={ x=[100, 100] },
  anime={ x=[3534, 427] },
  (user, rates, anime)={ edge_index=[2, 16143], edge_label=[16143] },
  (anime, rev_rates, user)={ edge_index=[2, 16143] }
)
```
HeteroData structure
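The structure above can be assembled with PyTorch Geometric roughly as follows (a sketch; `user_x`, `anime_x`, `edge_index`, and `edge_label` stand for the tensors produced by the loading functions described earlier):

```python
import torch_geometric.transforms as T
from torch_geometric.data import HeteroData

data = HeteroData()
data["user"].x = user_x                                   # [num_users, d_user]
data["anime"].x = anime_x                                 # genres + synopsis embedding
data["user", "rates", "anime"].edge_index = edge_index    # [2, num_ratings]
data["user", "rates", "anime"].edge_label = edge_label    # the ratings themselves
data = T.ToUndirected()(data)  # adds the (anime, rev_rates, user) reverse edges
```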
Figure 2: Architecture of the model
The encoder attribute of the Model class is an instance of the GNNEncoder class. It takes the hidden_channels argument as input and is initialized with the same dimensionality for both the input and output features.
The to_hetero function is a utility function that converts the GNNEncoder object to a heterogeneous GNN. The data.metadata() argument specifies the schema of the heterogeneous graph, which includes information about the node types, edge types, and features of the graph. The aggr argument specifies the type of aggregation to be used when combining information from different node types. The EdgeDecoder class is a custom implementation of an edge decoder that takes as input a set of node embeddings and a set of edge connections and outputs a set of edge predictions. The hidden_channels argument specifies the dimensionality of the node embeddings. In the GNNEncoder class, the GraphSAGE implementation is achieved by using the SAGEConv module from PyTorch Geometric library. The SAGEConv module implements the GraphSAGE convolutional operator, which aggregates the feature vectors of a node and its neighbors using a graph convolutional operation.
The decoder attribute of the Model class is an instance of the EdgeDecoder class. It takes the hidden_channels argument as input and is initialized with the same dimensionality for both the input and output features. The forward method takes as input a dictionary of node features, a dictionary of edge connections, and a set of edge labels. The x_dict argument is a dictionary of PyTorch tensors representing the node features for each node type. The edge_index_dict argument is a dictionary of PyTorch tensors representing the edge connections for each edge type. The edge_label_index argument is a PyTorch tensor indexing the labeled edges.
The forward method first passes the input data through the encoder to obtain a set of node embeddings, represented as a dictionary of PyTorch tensors. It then passes these node embeddings and the edge label indices through the decoder to obtain a set of predicted edge labels. In summary, the model architecture consists of a GNN encoder that takes as input node features and edge connections, a to_hetero conversion that turns the GNN encoder into a heterogeneous GNN, and an edge decoder that takes as input node embeddings and edge connections and outputs a set of predicted edge labels. The model is designed for semi-supervised learning on heterogeneous graphs and can handle multiple node and edge types with different feature representations.
In the context of graph neural networks (GNNs), the to_hetero conversion is a mechanism for combining information from nodes of different types in a heterogeneous graph. In a heterogeneous graph, nodes can have different types, which correspond to different features or attributes. For example, in a citation network, nodes can represent papers, authors, or conferences, and each node type can have different attributes such as publication year, paper topic, or author affiliation. To capture such heterogeneity, GNNs use different weight matrices for each node type, allowing the model to learn different representations for nodes of different types.
The GNNEncoder class takes two arguments: the number of input feature dimensions and the number of output feature dimensions. Its forward method applies two GraphSAGE layers to the input node features. The first SAGEConv layer aggregates the feature vectors of each node and its neighbors with a graph convolutional operation; the resulting features are normalized and passed through a ReLU activation, then fed through the second SAGEConv layer in a similar fashion, and the final output features are returned. Overall, the GNNEncoder class implements a GraphSAGE-based neural network architecture for learning node representations by aggregating neighborhood information for each node in the graph, as sketched below.
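Putting the pieces together, a compact version of this encoder/decoder model in PyTorch Geometric could read as follows (a sketch consistent with the description above; exact layer sizes are assumptions):

```python
import torch
from torch_geometric.nn import SAGEConv, to_hetero

class GNNEncoder(torch.nn.Module):
    def __init__(self, hidden_channels, out_channels):
        super().__init__()
        # (-1, -1) lets PyG infer per-node-type input dimensions lazily
        self.conv1 = SAGEConv((-1, -1), hidden_channels)
        self.conv2 = SAGEConv((-1, -1), out_channels)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

class EdgeDecoder(torch.nn.Module):
    def __init__(self, hidden_channels):
        super().__init__()
        self.lin1 = torch.nn.Linear(2 * hidden_channels, hidden_channels)
        self.lin2 = torch.nn.Linear(hidden_channels, 1)

    def forward(self, z_dict, edge_label_index):
        row, col = edge_label_index
        z = torch.cat([z_dict["user"][row], z_dict["anime"][col]], dim=-1)
        return self.lin2(self.lin1(z).relu()).view(-1)   # predicted rating per edge

class Model(torch.nn.Module):
    def __init__(self, hidden_channels, metadata):
        super().__init__()
        encoder = GNNEncoder(hidden_channels, hidden_channels)
        self.encoder = to_hetero(encoder, metadata, aggr="sum")
        self.decoder = EdgeDecoder(hidden_channels)

    def forward(self, x_dict, edge_index_dict, edge_label_index):
        z_dict = self.encoder(x_dict, edge_index_dict)
        return self.decoder(z_dict, edge_label_index)
```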
## 5 Evaluation and Results
The evaluation process is as follows. The model is evaluated using root mean square error (RMSE). Given a user, the model predicts a weighted rating for each link between the user and every anime the user has not watched. The predicted ratings for that user are then sorted from highest to lowest, yielding the list of anime ranked by the rating the user would give them if watched, as predicted by the model. The top 10 anime from this list are recommended to that user. Evaluation uses the test set, which contains the ratings given by users to different anime and is not shown at training time; the trained model predicts these ratings, and RMSE measures how close the predicted values are to the ground-truth labels for the given graph.
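In code, the RMSE metric and the top-10 selection described above might be sketched as follows (illustrative; `data` is the HeteroData object and `unseen` the anime indices a user has not rated):

```python
import torch

def rmse(pred, target):
    return torch.sqrt(torch.mean((pred - target) ** 2))

@torch.no_grad()
def recommend_top10(model, data, user_idx, unseen):
    # one candidate edge per unwatched anime for this user
    edge_label_index = torch.stack([
        torch.full((len(unseen),), user_idx, dtype=torch.long),
        torch.as_tensor(unseen, dtype=torch.long),
    ])
    scores = model(data.x_dict, data.edge_index_dict, edge_label_index)
    top = scores.topk(min(10, len(unseen))).indices
    return [unseen[i] for i in top]
```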
**Recommendation for user: 415 ['Pokemon Movie 14 White: Victim to Kuroki Eiyuu Zekrom', 'Tsuki no Sango', 'Charlotte', 'Tanaka-kun wa Kyyou mo Kedaruge', 'Iblard Jikan', 'Teekyuu', 'Tenshi Nanka ja Nai', 'No Game No Life: Zero', 'Puchitto Gargantia', 'Pokemon: Sernitsu no Mirage Pokemon']**
**Recommendation for user: 30 ['Jormungand', 'Hanayamata', 'BlackRock Shooter (OVA)', 'Mahou Shoujo Ore', 'Selector Infected WIXOSS', 'Kara no Kyoukai 6: Boukyaku Rokuon', 'Claymore', 'Kamigami no Asobi', 'Zettai Bouei Leviathan', 'Kakeguru']**
**Recommendation for user: 189 ['Zero no Tsukaima F', 'Mahouka Koukou no Rettousei Movie: Hoshi wo Yobu Shoujo', 'Kami-tachi ni Hirowerata Otoko', 'Sunohara-sou no Kanrinin-san', 'Dragon Ball GT', 'Seishun Buta Yarou wa Bunny Girl Senpai no Yume wo Minai', 'Tamako Market', 'School Days', 'Kono Bijutsubu ni wa Mondai ga Aru!', 'Re:Zero kara Hajimeru Isekai Seikatsu 2nd Season']**
**Recommendation for user: 298 ['Mobile Suit Gundam 00', 'Bannou Bunka Neko-Musume DASH', 'Fullmetal Alchemist: Premium Collection', 'Naruto Movie 2: Dai Gekitotsu!' Madoroshi no Chiteiseki Dattebayo!', 'Ischo ni Training: Training with Hinako', 'Doraemon', 'School Rumble', 'Golden Boy', 'Rurouni Kenshin: Meiji Kenkaku Romantan - Tsuioku-hen', 'Death Note: Rewrite']**
The results of our experiment are presented in this section. The model was trained and tested on a dataset consisting of 800 users. The following are the results of our experiment:
**Train loss: 0.659**
**Test Loss: 0.667**
**Train Accuracy: 0.52**
**Test Accuracy: 0.37**
Figure 3: Results
## 6 Conclusion and Future Work
The results show that the model achieved a higher accuracy on the training data (52%) than on the testing data (37%). The loss values for both the training and testing data are relatively high, indicating that the model may not be performing optimally. Though the accuracy is not high, the model works and gives good results with a very small amount of data; the compute resources required to run on large amounts of data are substantial. Future work includes turning more attributes into feature nodes and adding edges between users and their features, as well as between anime and their features. With more compute resources, we would also train on more data. Finally, GNN architectures other than the GraphSAGE network could be tried for the training process.
|
2308.10110 | Robust Mixture-of-Expert Training for Convolutional Neural Networks | Sparsely-gated Mixture of Expert (MoE), an emerging deep model architecture,
has demonstrated a great promise to enable high-accuracy and ultra-efficient
model inference. Despite the growing popularity of MoE, little work
investigated its potential to advance convolutional neural networks (CNNs),
especially in the plane of adversarial robustness. Since the lack of robustness
has become one of the main hurdles for CNNs, in this paper we ask: How to
adversarially robustify a CNN-based MoE model? Can we robustly train it like an
ordinary CNN model? Our pilot study shows that the conventional adversarial
training (AT) mechanism (developed for vanilla CNNs) no longer remains
effective to robustify an MoE-CNN. To better understand this phenomenon, we
dissect the robustness of an MoE-CNN into two dimensions: Robustness of routers
(i.e., gating functions to select data-specific experts) and robustness of
experts (i.e., the router-guided pathways defined by the subnetworks of the
backbone CNN). Our analyses show that routers and experts are hard to adapt to
each other in the vanilla AT. Thus, we propose a new router-expert alternating
Adversarial training framework for MoE, termed AdvMoE. The effectiveness of our
proposal is justified across 4 commonly-used CNN model architectures over 4
benchmark datasets. We find that AdvMoE achieves 1% ~ 4% adversarial robustness
improvement over the original dense CNN, and enjoys the efficiency merit of
sparsity-gated MoE, leading to more than 50% inference cost reduction. Codes
are available at https://github.com/OPTML-Group/Robust-MoE-CNN. | Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu | 2023-08-19T20:58:21Z | http://arxiv.org/abs/2308.10110v1 | # Robust Mixture-of-Expert Training for Convolutional Neural Networks
###### Abstract
Sparsely-gated Mixture of Expert (MoE), an emerging deep model architecture, has demonstrated a great promise to enable high-accuracy and ultra-efficient model inference. Despite the growing popularity of MoE, little work investigated its potential to advance convolutional neural networks (CNNs), especially in the plane of adversarial robustness. Since the lack of robustness has become one of the main hurdles for CNNs, in this paper we ask: How to adversarially robustify a CNN-based MoE model? Can we robustly train it like an ordinary CNN model? Our pilot study shows that the conventional adversarial training (AT) mechanism (developed for vanilla CNNs) no longer remains effective to robustify an MoE-CNN. To better understand this phenomenon, we dissect the robustness of an MoE-CNN into two dimensions: Robustness of routers (i.e., gating functions to select data-specific experts) and robustness of experts (i.e., the router-guided pathways defined by the subnetworks of the backbone CNN). Our analyses show that routers and experts are hard to adapt to each other in the vanilla AT. Thus, we propose a new router-expert alternating Adversarial training framework for MoE, termed AdvMoE. The effectiveness of our proposal is justified across 4 commonly-used CNN model architectures over 4 benchmark datasets. We find that AdvMoE achieves \(1\%\sim 4\%\) adversarial robustness improvement over the original dense CNN, and enjoys the efficiency merit of sparsity-gated MoE, leading to more than \(50\%\) inference cost reduction. Codes are available at [https://github.com/OPTML-Group/Robust-MoE-CNN](https://github.com/OPTML-Group/Robust-MoE-CNN).
†Correspondence to: Yihua Zhang <[email protected]>
## 1 Introduction
Despite the state-of-the-art performance achieved by today's outrageously large networks [1, 2, 3, 4, 5] in various deep learning (DL) tasks, it remains challenging to train and deploy such models cheaply. A major bottleneck is the lack of parameter efficiency [6]: a single data prediction only requires activating a small portion of the parameters of the full model. Towards efficient DL, sparse Mixture of Experts (MoE) [7, 8, 9, 10, 11, 12, 13, 14, 15] aims to divide and conquer the model parameters based on their optimal responses to specific inputs so that inference costs can be reduced. A typical MoE structure is composed of a set of 'experts' (_i.e._, sub-models extracted from the original backbone network) and 'routers' (_i.e._, additional small-scale gating networks that determine expert selection schemes across layers). During inference, sparse MoE only activates the most relevant experts and forms the expert-guided pathway for a given input. By doing so, sparse MoE can boost inference efficiency (see the 'GFLOPS' measurement in **Fig. 1**). Architecture-wise, sparse MoE has been used for both CNNs [8, 16] and vision transformers (ViTs) [7, 9, 10, 11, 12, 13, 14, 15, 17]. Yet, we will focus on the former, since sparse MoE for CNNs is under-explored compared to non-sparse MoE for CNNs [18, 19, 20], and adversarial robustness (another key performance metric of our work) has been extensively studied in the context of CNNs.
It is known that a main weakness of DL is the lack of adversarial robustness [21, 22, 23]. For example, CNNs can be easily fooled by adversarial attacks [21, 22, 23], _i.e._, tiny input perturbations crafted to induce erroneous predictions. Thus, adversarial training (**AT**) of CNNs has become a main research thrust [24, 25, 26, 27, 28, 29]. However, when CNN meets sparse MoE, it remains elusive whether the improved inference efficiency brought by sparse MoE comes at the cost of more complex adversarial training recipes. Thus, we ask:
**(Q)** _What will be the new insights into adversarial robustness of sparse MoE-integrated CNNs? And what will be the suited AT mechanism?_
To the best of our knowledge, problem (**Q**) remains open in the literature. The most relevant work to ours is [30], which investigated the adversarial robustness of MoE and leveraged the ordinary AT recipe [24] to defend against adversarial attacks. However, it only focused on the ViT architecture, leaving a gap in research on robustification of the sparse MoE-based CNN (termed **MoE-CNN** in this work). Most importantly, we find that the vanilla AT [24, 25] (widely used to robustify CNNs) is _no longer_ effective for MoE-CNN. Thus, new solutions are in demand.
To address **(Q)**, we need to (1) make careful sanity checks for AT in MoE-CNN, (2) make an in-depth analysis of its failure cases, and (3) advance new AT principles that can effectively improve robustness without losing the generalization and efficiency benefits of sparse MoE. Specifically, our **contributions** are summarized below.
* We dissect the MoE robustness into two new dimensions (different from CNNs): routers' robustness and experts' robustness. Such a robustness dissection brings novel insights into the (in)effectiveness of AT.
* Taking inspiration from the above robustness dissection, we propose a new Adversarial training framework for MoE, termed AdvMoE, which enforces routers and experts to make a concerted effort to improve the overall robustness of MoE-CNN.
* We conduct extensive experiments to demonstrate the effectiveness of AdvMoE across \(4\) CNN architectures and 4 datasets. For example, AdvMoE outperforms AT on the original dense CNN model (termed Dense) by a substantial margin: \(1\%\sim 4\%\) adversarial robustness improvement and over \(50\%\) reduction of inference overhead; see **Fig. 1** for illustrations on different CNN types and highlighted performance achieved.
## 2 Related Work
**Sparsely-activated Mixture of Experts (Sparse MoE).** As a special instance of compositional neural architectures [31, 32, 33], MoE aims at solving ML tasks in a divide-and-conquer fashion: it creates a series of sub-models (known as the _experts_) and conducts input-dependent predictions by combining the outputs of these sub-models. As an important branch of MoE, sparsely gated MoE [4, 7, 8, 9, 10, 11, 16, 18, 19, 20] activates only the most relevant experts for each input, thereby reducing inference cost.

**Adversarial training (AT).** The min-max optimization-based vanilla AT [24] and its TRADES variant [25] are predominant robustification recipes, with the latter striking a balance between adversarial robustness and standard generalization ability. Other work [26, 28, 47, 48, 49, 50, 51, 52, 53, 54] aims at trimming down the computational costs of robust training while maintaining robustness. The work [30] studies the robustness of MoE-based architectures for the first time. Yet, its focus stays on MoE for ViTs and on the relationship between model capacity and robustness.
## 3 Problem Statement
In this section, we start by presenting the setup of MoE-CNN considered in this work and then introduce the robust learning paradigm. The lack of adversarial robustness of deep models inspires us to investigate whether the adversarial training (AT) approach designed for vanilla CNNs remains effective for MoE-CNN. Through a motivating example, we show that the conventional AT recipe is _incapable_ of equipping MoE-CNN with the desired robustness. The resulting performance is even worse than that of AT-produced S-Dense, which has a much smaller model capacity than MoE-CNN. Thus, the question of how to robustify MoE-CNN arises.
**Model setup.** We consider a CNN-backboned MoE that consists of multiple MoE layers. Each MoE layer involves a router and a vanilla convolutional layer from the backbone CNN model. Within one MoE layer, we define \(N\) experts, each of which picks a subset of the channels of the convolutional layer. Specifically, suppose the \(l\)-th layer contains \(C_{l}\) channels; then one expert contains \(r\times C_{l}\) channels, where we call the ratio \(r\in[0,1]\) the _model scale_ and keep it the same across different layers (see **Fig. 1a**). It is worth noting that as \(r\) increases, the per-expert model capacity increases (_i.e._, with more parameters) at the cost of reduced efficiency. In a forward path, the router first makes an _input-specific_ expert selection. The selected layer-wise experts then form an end-to-end pathway to process this input. We use "_pathway_" to describe one expert-guided forward path (see **Fig. 1a**). We summarize the model setup in **Fig. A1**.
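To make this layer structure concrete, below is a minimal PyTorch sketch of one MoE layer under this setup; the class name `MoEConvLayer`, the router architecture, and the fixed channel-slice assignment per expert are illustrative assumptions, not the paper's released implementation.

```
import torch
import torch.nn as nn

class MoEConvLayer(nn.Module):
    """One MoE layer: a shared convolution whose output channels are divided
    among N experts, plus a small router that picks one expert per input."""

    def __init__(self, c_in, c_out, n_experts=2, model_scale=0.5):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.c_expert = int(model_scale * c_out)   # r * C_l channels per expert
        # Router: global average pooling followed by a linear gate over experts.
        self.router = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c_in, n_experts))
        # Fixed channel subset per expert (illustrative: shifted slices).
        idx = torch.arange(c_out)
        self.register_buffer("expert_idx", torch.stack(
            [idx.roll(e * self.c_expert)[: self.c_expert] for e in range(n_experts)]))

    def forward(self, x):
        gate = self.router(x)          # [B, N] expert logits
        choice = gate.argmax(dim=1)    # input-specific expert selection
        # (argmax is used for clarity; training needs a differentiable gate.)
        y = self.conv(x)               # [B, C_out, H, W]
        sel = self.expert_idx[choice]  # [B, c_expert] chosen channel subset
        out = torch.gather(
            y, 1, sel[:, :, None, None].expand(-1, -1, y.size(2), y.size(3)))
        return out, gate

layer = MoEConvLayer(c_in=3, c_out=64, n_experts=2, model_scale=0.5)
out, gate = layer(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 32, 32, 32]): a pathway with r * C channels
```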
Further, we introduce the different model types considered in this work, shown in **Fig. 1a**. First, we term the original dense CNN model '**Dense**', which serves as the _model basis_ from which the other model types derive. Second, we directly shrink the channel number of each layer in Dense (based on the model scale parameter \(r\)) to obtain the 'small dense' model (termed '**S-Dense**'). Notably, S-Dense has a size _equivalent to a single pathway_ in MoE-CNN. Third, we use the structured pruning method [50] to create a sparse subnetwork from Dense, with a weight-remaining ratio equal to the model scale parameter \(r\) in MoE-CNN, which we call '**Sparse-CNN**'. In summary, S-Dense has the smallest model capacity (comparable to a single pathway of MoE-CNN) and should provide the _performance lower bound_ for MoE-CNN. By contrast, Sparse-CNN has a larger model capacity but is smaller than MoE-CNN, as it encodes a data-agnostic pathway of Dense, while MoE-CNN yields data-specific pathways at the same scale. Dense has the largest model capacity but the least inference efficiency.
**Adversarial robustness: From CNN to MoE-CNN.** It has been known that current machine learning models (_e.g._, CNNs) are vulnerable to adversarial attacks [21, 22, 23]. Towards a robust design, a variety of AT (adversarial training) methods have been developed. The predominant ones include the min-max optimization-based vanilla AT [24] and its TRADES variant [25], which strikes a balance between generalization and adversarial robustness. Throughout the paper, we adopt TRADES as the default conventional AT recipe, which solves the following problem:
\[\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x},y)\in\mathcal{D}}\left[\ell(\mathbf{ \theta};\mathbf{x},y)+\frac{1}{\lambda}\max_{\|\mathbf{\delta}\|_{\infty}\leq\epsilon }\ell_{\mathrm{KL}}(\mathbf{f_{\theta}}(\mathbf{x}),\mathbf{f_{\theta}}(\mathbf{x}+ \mathbf{\delta}))\right]\] (AT)
where \(\mathbf{\theta}\) denotes the model parameters to be robustified, \((\mathbf{x},y)\in\mathcal{D}\) is a training sample drawn from the training set \(\mathcal{D}\) with input feature \(\mathbf{x}\) and label \(y\), \(\ell(\mathbf{\theta};\mathbf{x},y)\) denotes the cross-entropy loss using model \(\mathbf{\theta}\) at data point \((\mathbf{x},y)\), \(\mathbf{\delta}\) signifies the input perturbation variable subject to the \(\ell_{\infty}\)-norm ball of radius \(\epsilon\), \(\mathbf{f_{\theta}}(\cdot)\) denotes the model's predictions, \(\ell_{\mathrm{KL}}\) is the KL divergence loss that characterizes the worst-case prediction stability in the presence of \(\mathbf{\delta}\), and \(\lambda>0\) is a regularization parameter that strikes the tradeoff between empirical risk minimization and the robustness of model predictions.
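As a reference point, the sketch below computes one TRADES loss in PyTorch with a \(K\)-step PGD inner maximization of the KL term; it is an illustrative rendering of (AT), not the authors' code, and the default hyper-parameter values are assumptions.

```
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, eps=8/255, step_size=2/255, steps=2, lam=1/6.):
    """TRADES: cross-entropy on clean data plus (1/lam) times the worst-case
    KL divergence between clean and perturbed predictions."""
    p_clean = F.softmax(model(x), dim=1).detach()
    delta = 0.001 * torch.randn_like(x)            # small random start
    for _ in range(steps):                         # PGD inner maximization
        delta.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + delta), dim=1),
                      p_clean, reduction="batchmean")
        grad, = torch.autograd.grad(kl, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0, 1) - x        # keep x + delta a valid image
    logits = model(x)                              # outer minimization
    loss_nat = F.cross_entropy(logits, y)
    loss_rob = F.kl_div(F.log_softmax(model(x + delta), dim=1),
                        F.softmax(logits, dim=1), reduction="batchmean")
    return loss_nat + (1.0 / lam) * loss_rob       # 1/lam weights the KL term
```

In a training loop, one would call `trades_loss(model, x, y).backward()` followed by an optimizer step.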
Although AT has been well studied for the adversarial robustness of CNNs, there exist few attempts to robustify MoE-CNN. This raises the problem of our interest:
**(Problem statement)** Can MoE-CNN be robustified as effectively as an ordinary CNN using AT? If not, how to robustly train MoE-CNN to achieve robustness not worse than AT-oriented S-Dense, Sparse-CNN, and Dense while preserving MoE's efficiency?
**Warm-up study: AT for MoE-CNN is _not_ trivial.** Our goal in robustifying MoE-CNN includes (1) achieving high robustness, (2) maintaining high prediction accuracy, and (3) making full use of MoE routing to keep the model's high efficiency and expressiveness. Nonetheless, the routing system in MoE brings extra robustification challenges that do not exist in ordinary CNNs. Specifically, the input-specific expert selection in MoE could make the attacker's job easier, since input perturbations can _either_ mislead routers to select incorrect experts _or_ fool the pathway-based predictor. Such a '_two-way attack mode_' makes AT for MoE-CNN highly non-trivial.
**Fig. 2** empirically justifies that the direct application of (AT) to MoE-CNN is problematic. In Fig. 2, we consider ResNet-18 as the model backbone (Dense) and CIFAR-10 for image classification. We apply (AT) to train MoE-CNN
and S-Dense, and report the robust accuracy (RA), _i.e._, testing accuracy over adversarial examples generated by 50-step PGD attacks [24], against different attack strengths \(\epsilon\).
As we can see, although MoE-CNN has a much larger model capacity than S-Dense, it leads to a significant RA drop when the conventional AT approach is applied. This implies that the design of AT for MoE-CNN is far from trivial. A new robust learning protocol is thus needed to improve the robustness of MoE-CNN without losing its merits in efficiency and generalization.
## 4 Methods
In this section, we start by peering into the failure case of (AT) in MoE-CNN by understanding the roles of the routers and pathways in (AT). We empirically show that these individual components are hard to adapt to each other and cannot make a concerted effort in AT. Based on that, we develop a new AT framework for MoE-CNN, AdvMoE, which also takes inspiration from bi-level optimization.
**Dissecting robustness of MoE-CNN: Routers' robustness vs. pathways' robustness.** The main puzzle in robustifying MoE-CNN comes from the coupling between the robustness of routers (which are responsible for expert selection across layers) and the robustness of the input-specific MoE pathways (which are in charge of the final prediction for an input). Given the failure case of AT for MoE-CNN in Fig. 2, we need to understand the roles of routers and pathways in AT, _i.e._, how the adversarial robustness of MoE-CNN is gained in the presence of the 'two-way attack mode'. To this end, we begin by assessing the influence of the routers' robustness on the overall robustness. This is also inspired by the recent pruning literature [50] showing that model robustness can be gained solely from a network's sparse topology (regardless of model weights). We thus ask:
**(Q1)** Is improving routers' robustness sufficient to achieve a robust MoE-CNN?
To tackle **(Q1)**, we first split the parameters of MoE-CNN (_i.e._, \(\mathbf{\theta}\)) into two parts: the parameters of the routers, \(\mathbf{\phi}\), and the parameters of the backbone network, \(\mathbf{\psi}\). This yields \(\mathbf{\theta}=[\mathbf{\phi}^{\top},\mathbf{\psi}^{\top}]^{\top}\), where \(\top\) is the transpose operation. We then call (AT) to robustly train the routers (\(\mathbf{\phi}\)) but _fix_ the backbone network (\(\mathbf{\psi}\)) at its standard pre-trained weights. We denote this partially-robustified model by \(\bar{\mathbf{\theta}}=[\bar{\mathbf{\phi}}^{\top},\mathbf{\psi}^{\top}]^{\top}\), where \(\bar{\cdot}\) indicates the updated parameters. To answer **(Q1)**, we assess the robustness gain of \(\bar{\mathbf{\theta}}\) vs. 3 baselines (M1-M3): (**M1**) the standard MoE-CNN \(\mathbf{\theta}\), (**M2**) AT-robustified S-Dense, and (**M3**) Sparse-CNN achieved by the robust sparse mask learning method [50] over the original Dense model.
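Operationally, this parameter split and router-only robustification can be sketched as follows; the stand-in model and the convention that router modules carry 'router' in their parameter names are illustrative assumptions.

```
import torch
import torch.nn as nn

# Stand-in MoE-CNN with a named router sub-module (illustrative only).
model = nn.ModuleDict({
    "router": nn.Linear(3, 2),         # phi: gating parameters
    "backbone": nn.Conv2d(3, 64, 3),   # psi: backbone/expert weights
})

def split_params(model):
    """Split parameters into routers (phi) and backbone (psi)."""
    phi, psi = [], []
    for name, p in model.named_parameters():
        (phi if "router" in name else psi).append(p)
    return phi, psi

phi, psi = split_params(model)
for p in psi:                # freeze psi at its standard pre-trained weights
    p.requires_grad_(False)
optimizer = torch.optim.SGD(phi, lr=0.1, momentum=0.9)  # (AT) updates phi only
```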
Based on Insight 1, we further peer into the resilience of expert selection decisions to adversarial examples. If the expert selections in _all_ MoE layers remain intact in the presence of an adversarial perturbation, we say that the routing system of MoE-CNN is robust against this adversarial example. We then divide adversarial examples into **four categories** according to whether they successfully attack the routers and the router-guided pathways: ① _unsuccessful_ attack on _both_ routers and MoE pathways, ② _successful_ attack on routers but _not_ MoE pathways, ③ _successful_ attack on MoE pathways but _not_ routers, and ④ _successful_ attack on _both_ routers and MoE pathways. Here ①+③ characterizes the robustness of routers, while ①+② represents that of the MoE predictor. Thus, if ② or ③ takes a large portion of the generated adversarial examples, it implies that the routers' robustness does _not_ directly translate into the MoE pathway-based predictor's robustness. **Fig. 4** shows the above categories ①-④ when attacking the router-robustified MoE-CNN (_i.e._, \(\bar{\mathbf{\theta}}\)). As we can see, routers' robustness indeed improves prediction robustness (as shown by the \(31.74\%\) unsuccessful attacks against the MoE predictor in ①). However, among the total number of unsuccessful attacks against routers (_i.e._, ①+③ \(=76.27\%\)), more than half of them successfully fool
Figure 3: Robustness comparison of the router-robustified MoE-CNN (_i.e._, \(\bar{\mathbf{\theta}}\)) and baseline models (M1 - M3) for different model scales on CIFAR-10 with the backbone network ResNet-18.
Figure 2: Performance of MoE-CNN and S-Dense robustly trained using (AT) on CIFAR-10 with ResNet-18 as the backbone.
the MoE predictor (_i.e._, ③). The above results provide us with an additional insight:
**Insight 2:** Improving routers' robustness is _not_ sufficient for the MoE predictor to gain satisfactory robustness although the former makes a positive impact.
Both **Insight 1** and **Insight 2** point out that only improving routers' robustness is _not_ adequate to obtain the desired robustness for the overall MoE-CNN. Thus, we next ask:
**(Q2)** Given the router-robustified model \(\bar{\mathbf{\theta}}\), can we equip \(\bar{\mathbf{\theta}}\) with additional robustness by robustly training expert weights (\(\mathbf{\psi}\))? And how does it further impact routers?
To answer **(Q2)**, we call (AT) to further robustly train the backbone network \(\mathbf{\psi}\) on top of the router-robustified model \(\bar{\mathbf{\theta}}\). We denote the resulting model by \(\hat{\mathbf{\theta}}=[\bar{\mathbf{\phi}}^{\top},\bar{\mathbf{\psi}}^{\top}]^{\top}\).
**Fig. 5** shows the dissection of the robustness of \(\hat{\mathbf{\theta}}\) in the same setup as Fig. 4. Obviously, the overall prediction robustness (①+②) is further enhanced after updating \(\bar{\mathbf{\theta}}\) to \(\hat{\mathbf{\theta}}\). Thus, the gains in the robustness of the experts' weights indeed further improve the overall robustness. However, this leads to a surprising drop in the routers' robustness (①+③) when comparing \(\hat{\mathbf{\theta}}\) with \(\bar{\mathbf{\theta}}\). This shows that routers' robustness is _not_ automatically preserved if the experts are updated. We obtain the following insight into **(Q2)**:
**Insight 3:** Robustifying the experts' weights further improves the overall robustness of MoE-CNN, but the routers' robustness is _not_ automatically preserved; routers and experts need to be adapted to each other.
Figure 4: Adversarial attack success analysis on the dissected MoE-CNN model \(\bar{\mathbf{\theta}}=[\bar{\mathbf{\phi}}^{\top},\mathbf{\psi}^{\top}]^{\top}\) (model scale \(r=0.5\)), where only \(\bar{\mathbf{\phi}}\) is (AT)-robustified. The adversarial evaluation is based on the 50-step PGD attack [24] to fool \(\bar{\mathbf{\theta}}\); other experiment setups align with Fig. 3. The evaluation is carried out on the test set with a total of \(10000\) samples.
Figure 5: Adversarial attack success analysis on the further-robustified MoE-CNN \(\hat{\mathbf{\theta}}=[\bar{\mathbf{\phi}}^{\top},\bar{\mathbf{\psi}}^{\top}]^{\top}\); the setup follows Fig. 4.

Guided by the above insights, we robustly train the routers and the experts in an alternating fashion, formulated as the bi-level optimization (BLO) problem

\[\min_{\mathbf{\psi}}\ \ell_{\mathrm{TRADES}}(\mathbf{\psi};\mathbf{\phi}^{\star}(\mathbf{\psi}))\quad\text{subject to}\quad\mathbf{\phi}^{\star}(\mathbf{\psi})=\operatorname*{arg\,min}_{\mathbf{\phi}}\ \ell_{\mathrm{TRADES}}(\mathbf{\phi};\mathbf{\psi}),\tag{1}\]

where the lower-level problem robustly trains the routers \(\mathbf{\phi}\) while the experts \(\mathbf{\psi}\) are fixed, and the upper-level problem robustly trains \(\mathbf{\psi}\) while the routers are fixed. We term the resulting algorithmic framework Adversarially robust learning for MoE-CNN (AdvMoE); see Algorithm 1 for a summary.
```
1: Initialize: backbone network \(\mathbf{\psi}\), routers \(\mathbf{\phi}\), batch size \(b\), attack generation steps \(K\)
2: for iteration \(t=0,1,\dots\) do
3:   Pick different random data batches \(\mathcal{B}_{\mathbf{\psi}}\) and \(\mathcal{B}_{\mathbf{\phi}}\) for backbone and router training
4:   Lower-level \(\mathbf{\phi}\)-update (with fixed \(\mathbf{\psi}\)): given \(\mathbf{\psi}\), update \(\mathbf{\phi}\) by minimizing \(\ell_{\mathrm{TRADES}}\) using a \(K\)-step PGD attack [24] generator and SGD (with \(\mathcal{B}_{\mathbf{\phi}}\))
5:   Upper-level \(\mathbf{\psi}\)-update (with fixed \(\mathbf{\phi}\)): given \(\mathbf{\phi}\), update \(\mathbf{\psi}\) by minimizing \(\ell_{\mathrm{TRADES}}\) using a \(K\)-step PGD attack generator and SGD (with \(\mathcal{B}_{\mathbf{\psi}}\))
6: end for
```
**Algorithm 1** The AdvMoE algorithm
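A condensed PyTorch rendering of Algorithm 1 might look as follows; `trades_loss` stands for one TRADES objective evaluation (such as the sketch in Sec. 3), and the 'router' parameter-naming convention is an assumption for illustration.

```
import torch

def set_trainable(model, train_routers):
    """Enable gradients for the routers (phi) or for the backbone (psi)."""
    for name, p in model.named_parameters():
        p.requires_grad_(("router" in name) == train_routers)

def advmoe_epoch(model, loader_phi, loader_psi, opt_phi, opt_psi, trades_loss):
    """One AdvMoE epoch: alternate router and backbone updates, each on its
    own random data batch, following Algorithm 1. opt_phi and opt_psi must
    be built over the router and backbone parameter groups, respectively."""
    for (x_phi, y_phi), (x_psi, y_psi) in zip(loader_phi, loader_psi):
        # Lower level: update routers phi with the backbone psi fixed.
        set_trainable(model, train_routers=True)
        opt_phi.zero_grad()
        trades_loss(model, x_phi, y_phi).backward()
        opt_phi.step()
        # Upper level: update backbone psi with the routers phi fixed.
        set_trainable(model, train_routers=False)
        opt_psi.zero_grad()
        trades_loss(model, x_psi, y_psi).backward()
        opt_psi.step()
```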
We highlight that AdvMoE will train robust routers and robust MoE pathways to 'accommodate' each other. In contrast to the conventional AT framework, AdvMoE delivers the coupled \(\mathbf{\phi}^{\star}(\mathbf{\psi})\) and \(\mathbf{\psi}\), where both parts make a concerted effort to improve the overall robustness. We also remark that AdvMoE does not introduce additional hyper-parameters, since in practice we found that routers and experts can share the same learning rate and schedule. More implementation details are provided in Appendix B. In the meantime, we remark that since our proposal is a BLO with non-convex lower- and upper-level objectives (1), it is difficult to prove the convergence of AdvMoE. Existing theoretical analyses of BLO typically rely on strong-convexity assumptions on the lower-level problem [58, 59]. Although a proper theoretical analysis framework is lacking, our method converges well in practice (see Appendix C).
## 5 Experiments
In this section, we will demonstrate the effectiveness of our proposed AdvMoE approach on diverse datasets and models. We will also make an in-depth analysis of the router utility and the expert selection distribution for AdvMoE-trained MoE-CNN.
### Experiment Setup
**Model and dataset setups.** To implement MoE-CNN and other baselines, we conduct experiments on ResNet-18 [60], Wide-ResNet-28-10 [61], VGG-16 [62], and DenseNet [63]. Towards a fair assessment, our performance comparison between different model types is restricted to the same model scale parameter \(r\) (see Fig. 1 for an example). By doing so, an input example leverages the same number of model parameters for decision-making. For MoE-CNN, we consider \(N=2\) experts with \(r=0.5\) by default; see Appendix B for more details. Dataset-wise, we focus on the datasets commonly used to evaluate the adversarial robustness of image classification [24, 25, 64], including CIFAR-10 [65], CIFAR-100 [65], TinyImageNet [66], and ImageNet [66].
**Baselines.** To make our performance comparison informative and comprehensive, we consider three kinds of baselines that are fairly comparable to AdvMoE: ① AT (S-Dense): we apply AT to S-Dense; ② AT (Sparse): we apply the robustness-aware (structured) sparse mask learning method [50] to obtain Sparse-CNN; ③ AT (MoE): we directly apply AT to MoE-CNN, which co-trains the routers and the backbone network. Note that this method is also adopted in the latest robust training algorithm [30] for ViT-based MoE architectures. The above baselines use the same number of model parameters as a pathway of MoE-CNN during model prediction. In addition, we cover ④ AT (Dense) (applying AT to Dense) to acquire a robustness performance reference. Yet, we remark that it is _not_ quite fair to directly compare Dense with the aforementioned smaller counterparts, since the former uses a larger model scale (\(r=1.0\)) at test-time inference.
**Training and evaluation.** We use TRADES [25] as the default robust training objective for all baselines. We also follow the literature [24, 25, 27, 64] to set the attack strength to \(\epsilon=8/255\) for CIFAR-10 and CIFAR-100, and \(\epsilon=2/255\) for TinyImageNet and ImageNet. To implement AdvMoE (Algorithm 1), we mimic the TRADES training pipeline but conduct the proposed BLO routine to robustify router and backbone parameters in an interactive mode. We adopt a \(2\)-step PGD attack [24] at training time for _all_ methods, supported by the recent work [67] showing its compelling performance in AT. We refer readers to Appendix B for more training details. During evaluation, we report standard accuracy (**SA**) on the clean test dataset and robust accuracy (**RA**) against test-time 50-step PGD attacks [24] with the same attack strength as used in training. We also report **GFLOPS** (FLOPS \(\times 10^{9}\)) as an indicator of test-time inference efficiency.
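For completeness, the evaluation protocol can be sketched as below: a 50-step \(\ell_{\infty}\) PGD attack for RA and a plain accuracy loop for SA. The attack hyper-parameters follow the values above; the code is illustrative rather than the authors' evaluation harness.

```
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step_size=2/255, steps=50):
    """Untargeted L-inf PGD used for robust accuracy (RA) evaluation."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0, 1) - x
    return (x + delta).detach()

def accuracy(model, loader, attack=None):
    """SA when attack is None; RA when attack=pgd_attack."""
    correct = total = 0
    for x, y in loader:
        x_eval = attack(model, x, y) if attack else x
        with torch.no_grad():
            correct += (model(x_eval).argmax(1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```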
### Experiment Results
**Overall performance.** Tab. 1 presents the overall performance of our proposed AdvMoE algorithm vs. the baselines. We make several key observations below.
**First**, AdvMoE yields a significant robustness enhancement over all the baselines in every data-model setup. Specifically, AdvMoE consistently yields an improvement of around \(1\%\sim 5\%\) in robustness measured by RA against PGD attacks. Notably, AdvMoE can also outperform ④ AT (Dense) in most cases, with around \(1\%\sim 4\%\) robustness improvement (see Tab. 1). This is remarkable since Dense (\(r=1.0\)) is twice as large as an MoE pathway (\(r=0.5\)). **Second**, we observe that AdvMoE has a preference for wider models. For instance, when WRN-28-10 (the widest model architecture in our experiments) is used, AdvMoE yields better robustness than the Dense counterpart across all dataset setups. **Third**, we also observe that the direct AT application to MoE-CNN, _i.e._, ③ AT (MoE), is worse than ① AT (S-Dense) and AdvMoE in all setups. This is consistent with our findings in Sec. 4. We remark that although the usefulness of AT (MoE) was exploited in [30] for MoE-type ViTs, it is _not_ effective for training MoE-type CNNs. **Fourth**, AdvMoE retains the high inference efficiency of MoE-CNN, as evidenced by the GFLOPS measurements in Tab. 1. Compared to S-Dense, MoE-CNN introduces minor computational overhead due to the routing system. However, it saves more than \(50\%\) of the inference cost vs. Dense. This implies that our proposal AdvMoE can preserve the efficiency merit of the MoE structure while effectively improving its adversarial robustness.
**Robust evaluation on AutoAttack [68].** In Tab. 2, we provide additional experiments evaluated by AutoAttack [68] (termed **RA-AA**), a popular robustness evaluation benchmark [69]. The experiment setting in Tab. 2 follows **Tab. 1**. We report RA-AA on CIFAR-10 and CIFAR-100 with ResNet-18 and WRN-28-10. As we can see, although AutoAttack leads to a lower RA-AA compared to the RA evaluated using PGD attacks (termed RA-PGD), AdvMoE still outperforms AT (S-Dense), AT (Sparse), and AT (MoE) consistently, as evidenced by the **bold** numbers in the RA-AA columns of Tab. 2.
**CIFAR-10**

| Method | Backbone | RA (%) | SA (%) | GFLOPS |
| --- | --- | --- | --- | --- |
| AT (Dense) | ResNet-18 | 50.13±0.13 | 82.99±0.11 | 0.54 |
| AT (S-Dense) | ResNet-18 | 48.12±0.09 | 80.18±0.11 | 0.14 (74%↓) |
| AT (Sparse) | ResNet-18 | 47.93±0.17 | 80.45±0.13 | 0.14 (74%↓) |
| AT (MoE) | ResNet-18 | 45.57±0.51 | 78.84±0.75 | 0.15 (72%↓) |
| AdvMoE | ResNet-18 | 51.83±0.12 | 80.15±0.11 | 0.15 (72%↓) |
| AT (Dense) | WRN-28-10 | 51.75±0.12 | 83.54±0.15 | 5.25 |
| AT (S-Dense) | WRN-28-10 | 50.66±0.13 | 82.24±0.10 | 1.31 (75%↓) |
| AT (Sparse) | WRN-28-10 | 48.95±0.14 | 82.44±0.17 | 1.31 (75%↓) |
| AT (MoE) | WRN-28-10 | 46.73±0.46 | 77.42±0.73 | 1.75 (67%↓) |
| AdvMoE | WRN-28-10 | 55.73±0.13 | 84.32±0.18 | 1.75 (67%↓) |
| AT (Dense) | VGG-16 | 46.19±0.21 | 82.18±0.23 | 0.31 |
| AT (S-Dense) | VGG-16 | 45.72±0.18 | 80.10±0.16 | 0.07 (77%↓) |
| AT (Sparse) | VGG-16 | 46.13±0.15 | 79.32±0.18 | 0.07 (77%↓) |
| AT (MoE) | VGG-16 | 43.37±0.46 | 76.49±0.65 | 0.12 (61%↓) |
| AdvMoE | VGG-16 | 49.82±0.11 | 80.03±0.10 | 0.12 (61%↓) |
| AT (Dense) | DenseNet | 44.52±0.14 | 74.97±0.19 | 0.07 |
| AT (S-Dense) | DenseNet | 38.07±0.13 | 69.63±0.11 | 0.02 (71%↓) |
| AT (Sparse) | DenseNet | 37.73±0.13 | 67.35±0.12 | 0.02 (71%↓) |
| AT (MoE) | DenseNet | 35.21±0.74 | 64.41±0.81 | 0.03 (57%↓) |
| AdvMoE | DenseNet | 39.97±0.11 | 70.13±0.15 | 0.03 (57%↓) |

**CIFAR-100**

| Method | Backbone | RA (%) | SA (%) | GFLOPS |
| --- | --- | --- | --- | --- |
| AT (Dense) | ResNet-18 | 27.23±0.08 | 58.21±0.12 | 0.54 |
| AT (S-Dense) | ResNet-18 | 26.41±0.16 | 57.02±0.14 | 0.14 (74%↓) |
| AT (Sparse) | ResNet-18 | 26.13±0.14 | 57.24±0.12 | 0.14 (74%↓) |
| AT (MoE) | ResNet-18 | 22.72±0.42 | 53.34±0.61 | 0.15 (72%↓) |
| AdvMoE | ResNet-18 | 28.05±0.13 | 57.73±0.11 | 0.15 (72%↓) |
| AT (Dense) | WRN-28-10 | 27.90±0.13 | 57.60±0.09 | 5.25 |
| AT (S-Dense) | WRN-28-10 | 26.30±0.10 | 56.80±0.08 | 1.31 (75%↓) |
| AT (Sparse) | WRN-28-10 | 25.83±0.16 | 57.39±0.14 | 1.31 (75%↓) |
| AT (MoE) | WRN-28-10 | 22.92±0.55 | 53.39±0.49 | 1.75 (67%↓) |
| AdvMoE | WRN-28-10 | 28.82±0.14 | 57.56±0.17 | 1.75 (67%↓) |
| AT (Dense) | VGG-16 | 22.37±0.15 | 52.36±0.17 | 0.31 |
| AT (S-Dense) | VGG-16 | 20.58±0.13 | 48.89±0.14 | 0.07 (77%↓) |
| AT (Sparse) | VGG-16 | 21.12±0.22 | 48.03±0.17 | 0.07 (77%↓) |
| AT (MoE) | VGG-16 | 19.34±0.43 | 45.51±0.75 | 0.12 (61%↓) |
| AdvMoE | VGG-16 | 21.21±0.21 | 48.33±0.17 | 0.12 (61%↓) |
| AT (Dense) | DenseNet | 21.72±0.13 | 48.64±0.14 | 0.07 |
| AT (S-Dense) | DenseNet | 16.86±0.21 | 39.97±0.11 | 0.02 (71%↓) |
| AT (Sparse) | DenseNet | 17.72±0.14 | 41.03±0.16 | 0.02 (71%↓) |
| AT (MoE) | DenseNet | 14.45±0.45 | 36.72±0.71 | 0.03 (57%↓) |
| AdvMoE | DenseNet | – | – | 0.03 (57%↓) |

Table 1: Overall performance of AdvMoE vs. baselines on CIFAR-10 and CIFAR-100 across four backbone architectures. RA: robust accuracy against 50-step PGD attacks; SA: standard accuracy; the percentage after GFLOPS is the inference-cost reduction relative to AT (Dense).
**MoE-CNN trained by AdvMoE enjoys better router utility.** Based on the results above and the preliminary studies in Sec. 4, we next peer into the performance differences among AT (Sparse), AT (MoE), and AdvMoE from the perspective of pathway diversity. We ask:

① What is the relationship between the dynamic pathways generated by the routers trained by AdvMoE and the static mask optimized by AT (Sparse)? ② What is the difference between the routing decisions made by AdvMoE and AT (MoE), and how does it impact performance?
Regarding ①, we investigate the similarity between the pathways generated by the training methods, either AT (MoE) or AdvMoE, and the static mask found by AT (Sparse). Since the latter can be regarded as a single pathway used for all the data, we term it the _'mask pathway'_, in contrast to the _'MoE pathway'_. We calculate the intersection over union (**IoU**) score between the MoE pathway and the mask pathway on each test dataset (the clean or adversarial version). **Fig. 6** presents the IoU distributions based on the clean and adversarial test datasets (**Fig. 6a** for AdvMoE and **Fig. 6b** for AT (MoE)). We remark that a smaller IoU score indicates a larger discrepancy between the MoE pathway and the mask pathway. As we can see, the IoU distribution of AdvMoE vs. AT (Sparse) in Fig. 6a shifts closer to \(0\) compared with Fig. 6b. This observation applies to both standard and adversarial evaluation and suggests that AdvMoE (our proposal) has a better capability than AT (MoE) to build input-specific MoE pathways, which differ more significantly from the input-agnostic mask pathway identified by the pruning-based method AT (Sparse).
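The IoU score used here can be computed directly on binary channel masks; the minimal sketch below (with toy masks as stand-ins) illustrates the metric.

```
import torch

def pathway_iou(moe_mask, static_mask):
    """Intersection over union between two binary channel masks, e.g. the
    channels used by an input-specific MoE pathway vs. the static mask
    found by robust pruning (AT (Sparse))."""
    moe_mask, static_mask = moe_mask.bool(), static_mask.bool()
    inter = (moe_mask & static_mask).sum().item()
    union = (moe_mask | static_mask).sum().item()
    return inter / union if union else 0.0

# Toy example: 8-channel layer; the pathway uses channels 0-3, the mask 2-5.
moe = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0])
static = torch.tensor([0, 0, 1, 1, 1, 1, 0, 0])
print(pathway_iou(moe, static))  # 2 / 6 = 0.33...
```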
Regarding ②, we observe from Fig. 6 that the routers learned by AT (MoE) are more fragile to adversarial attacks than those learned by AdvMoE, as evidenced by the smaller intersection area between the adversarial and clean distributions. This is also aligned with **Insight 3** in Sec. 4. Moreover, the routing policy learned by AdvMoE is more diverse than that of AT (MoE), as indicated by the latter's density-concentrated IoU scores; in contrast, the distribution of AdvMoE is dispersed with a smaller peak value. Therefore, regarding expert utility, AdvMoE is able to assign inputs to a larger group of pathways than AT (MoE), making better use of the experts.
**A coupling effect of expert number \(N\) and per-expert model scale \(r\) on AdvMoE.** Recall that there exist two key parameters in MoE-CNN (**Fig. A1**): _(a)_ the number of experts \(N\), and _(b)_ the model scale \(r\) that defines the per-expert (or per-pathway) model capacity. Given the backbone model (_e.g._, ResNet-18 in this experiment), a larger \(N\) paired with a small \(r\) implies that each expert may have only limited model capacity, _i.e._, a smaller number of channels. Regardless of \(N\), if \(r=1\), the full backbone network is used to form the identical decision pathway.
**Fig. 7** shows the RA of MoE-CNN trained by AdvMoE vs. the model scale parameter \(r\) at different values of \(N\). Two insightful observations can be drawn. **First**, there exists an MoE regime (_e.g._, \(N<8\) and \(r\in[0.5,0.9]\)) in which AdvMoE can outperform AT (Dense) (_i.e._, \(r=1\)) by a substantial margin. This shows the benefit of MoE for adversarial robustness. However, if the number of experts becomes larger (_e.g._, \(N=10\)), the increasing diversity of MoE pathways can raise the difficulty of robustifying the routers and thus hamper the performance of AdvMoE (see \(N=10\) and \(r=0.8\) in **Fig. 7**). **Second**, there exists an
**CIFAR-10**

| Method | Backbone | RA-PGD (%) | RA-AA (%) | SA (%) | GFLOPS |
| --- | --- | --- | --- | --- | --- |
| AT (Dense) | ResNet-18 | 50.13±0.13 | 44.72±0.15 | 82.99±0.11 | 0.54 |
| AT (S-Dense) | ResNet-18 | 48.12±0.09 | 42.24±0.13 | 80.18±0.11 | 0.14 (74%↓) |
| AT (Sparse) | ResNet-18 | 47.93±0.17 | 42.11±0.11 | 80.45±0.13 | 0.14 (74%↓) |
| AT (MoE) | ResNet-18 | 45.57±0.51 | 40.20±0.18 | 78.84±0.75 | 0.15 (72%↓) |
| AdvMoE | ResNet-18 | 51.83±0.12 | – | 80.15±0.11 | 0.15 (72%↓) |
| AT (Dense) | WRN-28-10 | 51.75±0.12 | 45.13±0.12 | 83.54±0.15 | 5.25 |
| AT (S-Dense) | WRN-28-10 | 50.66±0.13 | 44.14±0.10 | 82.24±0.10 | 1.31 (75%↓) |
| AT (Sparse) | WRN-28-10 | 48.95±0.14 | 43.97±0.11 | 82.44±0.17 | 1.31 (75%↓) |
| AT (MoE) | WRN-28-10 | 46.73±0.46 | 44.11±0.23 | 77.42±0.73 | 1.75 (67%↓) |
| AdvMoE | WRN-28-10 | 55.73±0.13 | – | 84.32±0.18 | 1.75 (67%↓) |

**CIFAR-100**

| Method | Backbone | RA-PGD (%) | RA-AA (%) | SA (%) | GFLOPS |
| --- | --- | --- | --- | --- | --- |
| AT (Dense) | ResNet-18 | 27.23±0.08 | 23.11±0.06 | 58.21±0.12 | 0.54 |
| AT (S-Dense) | ResNet-18 | 26.41±0.16 | 22.11±0.13 | 57.02±0.14 | 0.14 (74%↓) |
| AT (Sparse) | ResNet-18 | 26.13±0.14 | 21.89±0.11 | 57.24±0.12 | 0.14 (74%↓) |
| AT (MoE) | ResNet-18 | 22.72±0.42 | 16.33±0.25 | 53.34±0.61 | 0.15 (72%↓) |
| AdvMoE | ResNet-18 | 28.05±0.13 | **23.33±0.06** | 57.73±0.11 | 0.15 (72%↓) |
| AT (Dense) | WRN-28-10 | 27.90±0.13 | 23.45±0.11 | 57.60±0.09 | 5.25 |
| AT (S-Dense) | WRN-28-10 | 26.30±0.10 | 22.23±0.13 | 56.80±0.08 | 1.31 (75%↓) |
| AT (Sparse) | WRN-28-10 | 25.83±0.16 | 21.97±0.09 | 57.39±0.14 | 1.31 (75%↓) |
| AT (MoE) | WRN-28-10 | 22.92±0.55 | 17.87±0.24 | 53.39±0.49 | 1.75 (67%↓) |
| AdvMoE | WRN-28-10 | 28.82±0.14 | **23.75±0.12** | 57.56±0.17 | 1.75 (67%↓) |
Table 2: Robustness overview evaluated with AutoAttack [68] (**RA-AA**) on various datasets and model backbone architectures. Other settings strictly follow Tab. 1. The values of RA-PGD, SA, and GFLOPS are repeated from Tab. 1 for better comparison.
Figure 6: The distribution of the intersection over union (IoU) scores of the input-specific pathways generated by AdvMoE (a) and AT (MoE) (b) vs. the static mask found by AT (Sparse). The distributions over the clean test set and the adversarial test set are plotted for AT (MoE) and AdvMoE in the (ResNet-18, CIFAR-100) setting. Other settings are aligned with Tab. 1.
_ineffective_ MoE regime (_e.g._, \(N\geq 8\) and \(r<0.5\)), in which the performance of AdvMoE largely deviates from that of AT (Dense). In this regime, each expert consists of only a small number of channels, which restricts its robust training ability. Accordingly, both the increasing diversity of MoE pathways (large \(N\)) and the limited capacity per pathway (small \(r\)) can impose difficulties on AT for MoE-CNN. In our experiments, we choose \(r=0.5\) and \(N=2\), which preserves the diversity of MoE pathways (_i.e._, inference efficiency) and retains the effectiveness of robust training.
**Performance with different model scales.** To make sure the observations and conclusions from Tab. 1 are consistent across different values of the model scale parameter \(r\), we repeat the experiments on (CIFAR-10, ResNet-18) and (CIFAR-10, WRN-28-10) using \(r\in\{0.2,0.5,0.8\}\) to cover the {sparse, medium, dense} regimes with respect to Dense (\(r=1.0\)). **Fig. 8** summarizes the obtained experiment results. As we can see, AdvMoE yields consistent robustness improvements over all the baselines, including Dense, and the improvement rises as the model scale \(r\) increases. This is not surprising, as more parameters are used when processing one input. Yet, a clear drawback of a larger model scale \(r\) is the increase in inference cost, as evidenced by the GFLOPS numbers. When \(r\) becomes large (_e.g._, \(r=0.8\)), the efficiency benefit brought by the pathway sparsification of MoE gradually vanishes. Thus, a medium sparsity (\(r=0.5\)) is a better choice to balance the trade-off between performance and efficiency, and is adopted as our default setting.
**Extended study: AdvMoE for ViT.** To explore the capability of our proposal AdvMoE on ViT-based MoE models (MoE-ViT), **Tab. 3** presents additional results following the recently published SOTA baseline [30] for MoE-ViT. As we can see, AdvMoE is also applicable to MoE-ViT and can boost robustness over the SOTA baseline by over \(1\%\) RA improvement, while achieving a similar level of SA. Thus, although our work focuses on robust training for MoE-CNN, it holds the promise of algorithmic generality to other MoE-based architectures. We leave a more comprehensive study for future work.
**Additional experiments.** We conduct ablation studies on (1) robustness evaluation using AutoAttack [68] (from which findings consistent with the PGD attacks can be drawn), (2) the number of attack steps used in AT, and (3) additional explorations of the coupling effect between the number of experts and the model scale. We refer readers to Appendix C for detailed results.
## 6 Conclusion
In this work, we design an effective robust training scheme for MoE-CNN. We first present several key insights into the defense mechanism of MoE-CNN by dissecting adversarial robustness through the lens of routers and pathways. We then propose AdvMoE, the first robust training framework for MoE-CNN via bi-level optimization, robustifying routers and pathways in a cooperative and adaptive mode. Finally, extensive experiments demonstrate the effectiveness of AdvMoE in a variety of data-model setups. Meanwhile, we admit that AdvMoE requires roughly twice the computational cost of the vanilla AT baseline, due to the alternating optimization that calls two back-propagation rounds per step. Addressing this efficiency concern presents a meaningful avenue for future work.
## Acknowledgement
The work of Y. Zhang, S. Chang and S. Liu was partially supported by National Science Foundation (NSF) Grant IIS-2207052 and Cisco Research Award. The work of Z. Wang is in part supported by the US Army Research Office Young Investigator Award (W911NF2010240).
Figure 8: Robustness comparison of models trained with different methods under various model scale settings. Results higher than those of AT (Dense) are marked with \(\star\). Other setups are aligned with Tab. 1. Please refer to Appendix C for exact numbers and GFLOPS comparisons.
| Method | RA (%) | SA (%) | GFLOPS |
| --- | --- | --- | --- |
| SOTA [30] | 44.63 | 61.72 | 0.27 |
| AdvMoE | **45.93** | 61.67 | 0.27 |

Table 3: Performance of robust training for MoE-ViT with the setup (ImageNet, DeiT-Tiny). Other settings follow Tab. 1.
Figure 7: Performance of AdvMoE on CIFAR-10 using ResNet-18 as the backbone network for different values of the expert number \(N\) and model scale \(r\). The black dashed line denotes the performance of Dense (_i.e._, \(r=1\)).
2303.10276 | Unleashing the Potential of Spiking Neural Networks by Dynamic
Confidence | This paper presents a new methodology to alleviate the fundamental trade-off
between accuracy and latency in spiking neural networks (SNNs). The approach
involves decoding confidence information over time from the SNN outputs and
using it to develop a decision-making agent that can dynamically determine when
to terminate each inference.
The proposed method, Dynamic Confidence, provides several significant
benefits to SNNs. 1. It can effectively optimize latency dynamically at
runtime, setting it apart from many existing low-latency SNN algorithms. Our
experiments on CIFAR-10 and ImageNet datasets have demonstrated an average 40%
speedup across eight different settings after applying Dynamic Confidence. 2.
The decision-making agent in Dynamic Confidence is straightforward to construct
and highly robust in parameter space, making it extremely easy to implement. 3.
The proposed method enables visualizing the potential of any given SNN, which
sets a target for current SNNs to approach. For instance, if an SNN can
terminate at the most appropriate time point for each input sample, a ResNet-50
SNN can achieve an accuracy as high as 82.47% on ImageNet within just 4.71 time
steps on average. Unlocking the potential of SNNs needs a highly-reliable
decision-making agent to be constructed and fed with a high-quality estimation
of ground truth. In this regard, Dynamic Confidence represents a meaningful
step toward realizing the potential of SNNs. | Chen Li, Edward Jones, Steve Furber | 2023-03-17T23:18:20Z | http://arxiv.org/abs/2303.10276v3 | # Unleashing the Potential of Spiking Neural Networks by Dynamic Confidence
###### Abstract
This paper presents a new methodology to alleviate the fundamental trade-off between accuracy and latency in spiking neural networks (SNNs). The approach involves decoding confidence information over time from the SNN outputs and using it to develop a decision-making agent that can dynamically determine when to terminate each inference.
The proposed method, Dynamic Confidence, provides several significant benefits to SNNs. 1. It can effectively optimize latency dynamically at runtime, setting it apart from many existing low-latency SNN algorithms. Our experiments on CIFAR-10 and ImageNet datasets have demonstrated an average 40% speedup across eight different settings after applying Dynamic Confidence. 2. The decision-making agent in Dynamic Confidence is straightforward to construct and highly robust in parameter space, making it extremely easy to implement. 3. The proposed method enables visualizing the potential of any given SNN, which sets a target for current SNNs to approach. For instance, if an SNN can terminate at the most appropriate time point for each input sample, a ResNet-50 SNN can achieve an accuracy as high as 82.47% on ImageNet within just 4.71 time steps on average. Unlocking the potential of SNNs needs a highly-reliable decision-making agent to be constructed and fed with a high-quality estimation of ground truth. In this regard, Dynamic Confidence represents a meaningful step toward realizing the potential of SNNs. Code.
## 1 Introduction
Deep artificial neural networks (ANNs) have achieved remarkable success in computer vision tasks [32, 46, 39]. These improvements have been accompanied by ever-growing model complexity and neural network depth. Though the classification accuracy has improved dramatically, the latency and energy cost have also increased, which poses a challenge for real-world AI applications on edge devices, such as mobile phones, smartwatches, and IoT hardware.
Deep spiking neural networks (SNNs) are promising to offer power and latency advantages over counterpart deep artificial neural networks during inference. However, when their power (e.g. averaged spike counts per inference) or latency (e.g. averaged inference time steps per inference) are strictly constrained, SNNs suffer huge accuracy degradation. In recent years, much scholarly attention has been paid to the conflict between accuracy and latency in SNNs, growing the
Figure 1: The upper-bound performance of SNNs when fully utilizing dynamic strategies at runtime, as shown by the red curves in **a.** ResNet18 on CIFAR-10 and **b.** ResNet-50 on ImageNet. The black curves represent baseline SNN performance without dynamic strategies. Additional figures with other settings can be found in the Supplementary Material. **c.** The diagram of the proposed Dynamic Confidence, which can be implemented on-the-fly.
field of low-latency SNNs (also known as fast SNNs). The primary goal of fast SNN research is achieving accuracy equivalent or slightly inferior to the baseline ANN accuracy with as few inference time steps as possible. With the reduction in the number of inference time steps, the spike counts required for each inference can potentially be reduced as well, provided the firing rates are kept the same as latency decreases. On hardware whose power scales with spike-processing operations, research on low-latency SNNs can thus contribute to reducing the power requirements of SNNs.
In this paper, we detail a runtime optimization method called Dynamic Confidence and show that it can effectively reduce the inference latency in low-latency SNNs on CIFAR-10 and ImageNet, without the need to sacrifice accuracy or to increase firing rates. Our paper has several new contributions:
* We propose a method (Dynamic Confidence) to decode a temporal series of confidence values from the inference results of an SNN, and utilize this confidence information to dynamically guide inference. It is the first study to formulate confidence in SNNs and introduce the formal concept of dynamic strategies to SNNs.
* Dynamic Confidence can further reduce the inference latency by 40% on average on the state-of-the-art low-latency SNNs (QFFS[33] and QCFS[3]) on CIFAR-10 and ImageNet. By reducing the inference latency and maintaining the same firing rates, the power consumption of SNN inference is reduced consequently.
* We provide a method to calculate the tight upper-bound performance that an SNN can achieve when applying dynamic strategies at runtime. Our findings shed light on the vast untapped potential within current SNNs, providing valuable insights to answer the fundamental question of why to use SNNs instead of ANNs.
* Dynamic Confidence is a lightweight, parameter-insensitive, on-the-fly method. These features make Dynamic Confidence extremely easy to apply to a variety of low-latency SNN algorithms whenever further latency and power reductions are desirable.
## 2 Motivation
* A main research method adopted in current SNN studies is improving SNNs by leveraging knowledge from ANNs. Some successful examples include, but are not limited to, straight-through estimators [41], quantization [33], backpropagation through time (BPTT) [2], network architectures [35], neural architecture search [30], and neural network simulation tools [49]. The primary concern in this process is how to make ANN knowledge applicable to SNNs, which usually requires additional optimization of spike dynamics in SNNs to ensure smooth knowledge transfer. The study presented in this paper also follows this research method. Specifically, it explores whether confidence, a key concept in Bayesian neural networks and interpretable AI, can be applied to SNNs to help reduce inference latency as well as inference power. Furthermore, we investigate how confidence in SNNs differs from confidence in ANNs and how to accurately capture this difference.
* Dynamic strategies have received considerable critical attention in ANNs due to their effectiveness in bringing considerable speedups and computational savings without degrading prediction accuracy [50]. In contrast, no prior research has studied the application of dynamic strategies to SNNs or evaluated the possible speed and power gains on nontrivial datasets such as CIFAR-10 and ImageNet. This paper aims to fill this gap and reports the initial results of applying a dynamic strategy to SNNs. Furthermore, we emphasize the huge potential of this direction.
* Information occupies a central position in SNN research. For example, a primary concern in directly-trained SNNs is whether better spatial-temporal information representation and computing can be achieved by surrogate gradients. This paper also emphasizes the significance of information, but from a different perspective. Instead of seeking more spatial-temporal information by replacing ANN-to-SNN conversion with spatial-temporal training via surrogate gradients, we stick to ANN-to-SNN conversion but investigate whether any information in the SNN output has been missed by other research and not fully exploited. Our results show that confidence in SNN outputs contains valuable information and can be leveraged to improve SNN performance in latency and power.
## 3 Related Work
**Spiking Neural Networks (SNNs)**. Spiking neural networks (SNNs) are biologically-inspired models that have time-evolving states, unlike artificial neural networks (ANNs). SNNs use spikes, binary pulses that model action potentials in biology, to transmit information between neurons. Because data processing in SNNs need only take place when spikes are received, SNNs can be implemented in an event-based manner and can benefit from improved power efficiency over ANNs, especially when sparsity can be exploited in hardware [29, 6, 28].
The potential for increased efficiency and biological plausibility that SNNs promise has led to ANN-to-SNN conversion being an active area of research. Converting
ANN activations to spike rates in SNNs has proven possible [10] with marginal loss in accuracy when data-based normalisation techniques are applied [47]. The significant downsides to these kinds of rate-coding approaches are that they require long integration times, which can impact the potential latency benefits of SNNs.
**Low-latency SNNs**. SNN latency optimization has received increasing scholarly attention in recent years [45, 9, 3, 33], and existing approaches can be coarsely divided into two categories: methods based on ANN-to-SNN conversion, and methods based on surrogate gradients. This research falls into the first category. Deng and Gu combine threshold balancing and soft reset to correct conversion errors and show a significant reduction in the number of SNN simulation time steps needed [9]. The QCFS method [3] uses a quantized activation function to achieve ultra-low latency in SNNs. The QFFS method [33] proposes to reduce accuracy loss in low-latency SNNs by compressing information and suppressing noise.
**Dynamic Strategies**. Dynamic strategies [50, 24, 4, 14, 1, 23, 36] are network optimization algorithms that are input-dependent and can dynamically decide which part of the network to execute at runtime on a per-input basis. The basic assumption in a dynamic strategy is that in real-world applications, examples are not equal for a given network model. Novel examples require the full model capacity to generate a reliable inference result, while simple examples can be solved confidently with fewer computing resources. Dynamic strategies optimize the inference of an ANN by taking both the model and input examples into considerations, and are promising for achieving fast and energy-efficient edge computing.
Compared to these studies on dynamic strategies, our method does not require any modifications to the internal structure of standard models or the addition of heavy auxiliary sub-networks as a dynamic controller to generate instructions. Instead, the proposed Dynamic Confidence can simply be calculated from the neural network output during inference, bringing lower overhead and higher portability than other dynamic strategies. Also, most existing dynamic strategy research optimizes the number of channels and layers, while our method implicitly optimizes the activation precision, thanks to the event-based nature of SNNs. A visualization illustrating the dimensions of dynamic strategies can be found in the Supplementary Material.
## 4 SNN preliminaries
The detailed equations describing spiking neuronal dynamics and ANN-to-SNN conversion are provided in the Supplementary Material. The following sections focus on introducing the main ideas of the proposed method in a comprehensive manner.
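For orientation, the sketch below simulates the basic rate-coded integrate-and-fire dynamics with soft reset (reset by subtraction), a common choice in ANN-to-SNN conversion; the exact formulation used in this work is the one given in the Supplementary Material.

```
import torch

def if_neuron(inputs, v_th=1.0):
    """Minimal integrate-and-fire neuron with soft reset.
    inputs: [T, n] tensor of input currents per time step."""
    v = torch.zeros(inputs.shape[1])       # membrane potential
    spikes = torch.zeros_like(inputs)
    for t in range(inputs.shape[0]):
        v = v + inputs[t]                  # integrate the input current
        spikes[t] = (v >= v_th).float()    # fire where the threshold is crossed
        v = v - v_th * spikes[t]           # soft reset: subtract the threshold
    return spikes

out = if_neuron(torch.full((10, 1), 0.3))
print(out.sum(0) / 10)  # tensor([0.3000]): the firing rate tracks the input
```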
## 5 Approach
The goal of our method is to allocate computing resources (specifically, inference latency) dynamically for each inference by using a confidence metric determined from the model output. In this way, the time steps required for each inference are heterogeneous, and the average number of time steps per inference can potentially be reduced compared to using homogeneous time steps for all inferences. The dynamic resource allocation mentioned above can be modeled as a decision-making agent with two main points to consider:
1. The decisions made regarding the allocation of computing resources to the model need to facilitate fast and efficient inference while maintaining a model accuracy that is as high as possible. In other words, there should be a minimal trade-off of accuracy for decreased latency.
2. The decision-making mechanism itself needs to be lightweight and intelligible so as not to add unnecessary overhead and complexity to the model and to enable it to allow users to balance accuracy against latency.
In our approach, we calculate the confidence associated with network output at runtime and use this confidence information to decide when to end inference in advance to get lower average latency per example without accuracy degradation. Section 5.1 describes the preliminary confidence concept, the quality of confidence, and confidence calibration. Section 5.2 explains the difference between confidence in SNNs and ANNs, highlights the challenges that must be addressed before utilizing confidence in SNNs, and presents our proposed solution. Section 5.3 presents how to construct a decision-making agent and use a Pareto Front to compute its threshold, which can be done with minimal cost.
### Confidence
In multi-class classification problems, for a given input example \(X\), a label \(Y\), and an artificial neural network \(f(\cdot)\), the outputs are \(f(X)=(\hat{Y},\hat{P})\). \(\hat{Y}\) is the predicted output label and \(\hat{P}\in[0,1]\) is the confidence of this prediction. Confidence \(\hat{P}\) is an estimate of the ground truth inference correctness likelihood \(P(\hat{Y}=Y)\), and it is usually calculated by Softmax in the multi-class classification problems discussed above. In some application scenarios where the safety and trustworthiness of the neural network models are central, such as medical diagnosis and autonomous driving, the confidence of the prediction is as crucial as its accuracy. The confidence can also be used to understand the interpretability and explainability of the neural network and can interact with other probabilistic models such as Bayesian neural networks.
Recognising the significance of \(\hat{P}\), there is a growing body of literature that delivers calibration algorithms to
improve the reliability of \(\hat{P}\) [42]. Note that calibration can only improve the quality of confidence; a perfectly calibrated model is impossible to achieve. Our approach applies temperature scaling [16], a post-training calibration algorithm, to ensure that \(\hat{P}\) is of high quality and to keep the overhead of calibration low.
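As an illustration, temperature scaling fits a single scalar temperature on held-out validation logits by minimizing the negative log-likelihood; the sketch below is a minimal version, and the optimizer choice and iteration count are assumptions.

```
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, iters=200, lr=0.01):
    """Post-hoc temperature scaling: fit T > 0 on validation logits by
    minimizing NLL; calibrated confidence is then softmax(z / T)."""
    log_t = torch.zeros(1, requires_grad=True)   # optimize log T, so T > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        F.cross_entropy(val_logits / log_t.exp(), val_labels).backward()
        opt.step()
    return log_t.exp().item()

# Toy usage with random stand-in validation logits.
logits, labels = torch.randn(128, 10), torch.randint(0, 10, (128,))
T = fit_temperature(logits, labels)
confidence = F.softmax(logits / T, dim=1).max(dim=1).values
```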
### Dynamic Confidence in Spiking Neural Networks
After conducting ANN-to-SNN conversion, an ANN model \(f(\cdot)\) is converted into an SNN model \(f_{s}(\cdot)\). Unlike a non-spiking neural network that only generates a single pair \((\hat{Y},\hat{P})\), a spiking neural network \(f_{s}(\cdot)\) with inputs \(X_{t}\) generates a series of outputs \((\hat{Y}_{t},\hat{P}_{t})\) during inference, where \(t\in\mathcal{T}=\{1,\ldots,T\}\) represents a time step of SNN inference. One feature of SNNs is that their prediction accuracy on the test set increases with more time steps [45]. This feature is associated with the event-based computing nature of SNNs: in each time step, only a fraction of the information is represented by spikes and processed by spiking neurons in an event-based manner; with more time steps, more information is accumulated and a more reliable decision can be made. This well-known SNN feature means that \(\hat{Y}_{t_{2}}=Y\) is more likely to happen than \(\hat{Y}_{t_{1}}=Y\) when \(t_{2}>t_{1}\). Here, we go one step further and state that if \(\hat{P}_{t}\) is a good estimate of \(P(\hat{Y}_{t}=Y)\), then \(\hat{P}_{t_{2}}\) should also be likely to be higher than \(\hat{P}_{t_{1}}\) when \(t_{2}>t_{1}\).
To illustrate these statements, we plot the trend of \(\hat{Y}_{t}=Y\) and \(\hat{P}_{t}\) over time \(t\) in Figure 2a, averaged over the 10,000 test samples of CIFAR-10. The blue curve (the averaged \(\hat{Y}_{t}=Y\)) corresponds to the traditional accuracy-vs-latency response curve of SNNs [45]. The orange curve (the averaged \(\hat{P}_{t}\)) is an estimate of the correctness likelihood \(P(\hat{Y}_{t}=Y)\). Both curves show a monotonic increase with time \(t\) with minor fluctuations, which is in line with the statements above. The shared trend of the two curves suggests that \(\hat{P}_{t}\) is a good estimator of \(\hat{Y}_{t}=Y\): a low confidence value \(\hat{P}_{t}\) indicates a low likelihood of correctness, and vice versa. Note that during SNN inference, \(\hat{Y}_{t}=Y\) is unknown, since evaluating it requires access to the ground truth \(Y\). However, this ground-truth information can be implicitly fetched by estimating it with the confidence \(\hat{P}_{t}\), which provides a huge opportunity for optimizing SNNs by utilizing confidence.
One obstacle must be overcome before optimizing SNNs with confidence. As shown in Figure 2a, the orange curve saturates to 100% quickly with time \(t\). In other words, an overconfident averaged \(\hat{P}_{t}\) is produced after only a few time steps, after which there is little room for further increase. We call this problem confidence saturation. Confidence saturation is detrimental to our proposed method, since we want \(\hat{P}_{t}\) to act as a high-quality estimate of \(P(\hat{Y}_{t}=Y)\). Saturated \(\hat{P}_{t}\) values can no longer provide a good estimate of \(P(\hat{Y}_{t}=Y)\): they are too similar to distinguish from each other and may be more vulnerable to fluctuations. Saturated confidence also makes the decision-making agent we construct later more sensitive to its parameter, as discussed in the following sections.
The confidence saturation problem is caused by the format of the SNN outputs. The outputs of a rate-coded SNN are accumulated spikes [33] or, in cases that require higher output precision, the membrane potential of integrate-and-fire neurons [34, 3]. In both situations, the output logits of the SNN, \(f_{s}(X_{t})\), grow continuously with \(t\), which yields saturated confidence \(\hat{P}_{t}\) after applying Softmax to the SNN outputs. To prevent saturated confidence, we introduce a scalar regulation parameter \(\alpha\) that restricts the scale of the output logits in the SNN, and calculate confidence according to
\[\hat{P}_{t}=\max_{i\in\{1,\ldots,K\}}\frac{e^{\mathbf{z}_{t}^{i}/\alpha}}{\sum_{j=1}^{K}e^{\mathbf{z}_{t}^{j}/\alpha}}. \tag{1}\]
\(\mathbf{z}_{t}\) is the output logits of the SNN, that is
\[\mathbf{z}_{t}=f_{s}(X_{t}) \tag{2}\]
There is no strict rule for the choice of \(\alpha\); it should simply prevent confidence saturation (\(\hat{P}_{t}\approx 1\)) before the end of the SNN simulation at time \(T\). Empirically, \(\alpha\) is set to the number of simulation time steps \(T\) when SNNs are built by QCFS [3], and to \(2^{b}-1\) when SNNs are built by QFFS [33], where \(b\) is the bit precision of all hidden layers. We plot the averaged \(\hat{P}_{t}\) after applying the scaling factor \(\alpha\) in Figure 2b; after scaling, the averaged \(\hat{P}_{t}\) is softened.
### Optimizing Latency by Pareto Front
#### 5.3.1 Optimization Targets
Before conducting any optimization, we first clarify the targets of SNN latency optimization research. Certainly, the primary target of this type of research is achieving low latency in an SNN. On the other hand, we do not wish this low-latency SNN to have low accuracy or even be non-functional, which means that high accuracy should also be one of the optimization objectives. Hence, the goals of SNN latency optimization are at least twofold: achieving low latency and maintaining high accuracy. This is a typical multi-objective optimization problem, which allows a Pareto Front to be used to find efficient solutions.

Figure 2: The trend of the averaged \(\hat{Y}_{t}=Y\) and the averaged \(\hat{P}_{t}\) over time \(t\) on the test set of CIFAR-10. **a.** \(\hat{P}_{t}\) saturates to 100% quickly with time. **b.** After introducing a scalar regulation parameter \(\alpha\), \(\hat{P}_{t}\) is softened.
#### 5.3.2 Proposed Method and Upper Bound
In our proposed method, we dynamically optimize SNN latency based on the estimated correctness likelihood \(\hat{P}_{t}\) for each example, while trying to ensure that \(\hat{Y}_{t}=Y\). To accomplish this goal, we develop a simple decision-making agent that determines the optimal time step for terminating inference on a given sample. At runtime, the agent is placed at the output layer and checks whether the estimated correctness likelihood \(\hat{P}_{t}\) exceeds a confidence threshold \(th_{c}\); if it does, SNN inference is terminated. For example, if \(th_{c}=0.6\) and \(\hat{P}_{t}=[0.1,0.3,0.5,0.7,0.9]\) over the first 5 time steps for a given example, the SNN is terminated at the fourth time step, since \(\hat{P}_{4}=0.7\) exceeds \(th_{c}=0.6\). Different examples thus have different termination times.
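The runtime behavior of the agent can be sketched as follows; `snn_step` is a hypothetical interface returning the accumulated output logits \(\mathbf{z}_{t}\) of \(f_{s}(\cdot)\) at time step \(t\).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def infer_with_dynamic_confidence(snn_step, x, T, alpha, th_c):
    """Terminate SNN inference early once the scaled confidence exceeds th_c."""
    for t in range(1, T + 1):
        z_t = snn_step(x, t)              # accumulated output logits at time step t
        p_t = softmax(z_t / alpha).max()  # Eq. (1): regulated confidence
        if p_t > th_c:
            return np.argmax(z_t), t      # early termination at time step t
    return np.argmax(z_t), T              # otherwise run the full simulation
```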
The optimal performance achievable by our proposed dynamic strategy can be calculated as follows. First, assume the ground truth label \(Y\) is accessible during inference; in Dynamic Confidence, this corresponds to assuming that our formulated confidence \(\hat{P}_{t}\) is a perfect estimate of \(P(\hat{Y}_{t}=Y)\). Second, assume the SNN inference is always terminated for a given example at the first \(t\) at which \(\hat{Y}_{t}=Y\); this represents the constructed decision-making agent perfectly capturing the correct termination time. This upper-bound performance fully exploits dynamic strategies at runtime and represents the best possible performance that a given SNN can achieve.
We compared this upper-bound performance to the baseline performance on CIFAR-10 and ImageNet, as displayed in Figure 1. On CIFAR-10, the optimal solution achieved 95.1% accuracy in 1.28 time steps and 96.7% accuracy in 1.43 time steps, substantially better than the baseline, which took 4 time steps to achieve 94.11% accuracy. On ImageNet, the baseline performance was 72.68% accuracy in 14 time steps, and it only surpassed 70% accuracy in 4 time steps. In contrast, the optimal performance achieved 82.47% accuracy in only 4.71 time steps. This significant performance gap suggests that there is potential for further improvements in SNNs by enabling runtime optimization.
#### 5.3.3 Calculating \(th_{c}\) by Pareto Front
The value of the confidence threshold \(th_{c}\) is crucial to the final inference performance. Intuitively, a high \(th_{c}\) is a conservative dynamic strategy: \(\hat{P}_{t}>th_{c}\) happens less frequently, fewer examples terminate early, and the latency improvement is limited. In contrast, a low \(th_{c}\) is an aggressive dynamic strategy: \(\hat{P}_{t}\) easily surpasses \(th_{c}\), and more examples terminate early even when their output confidence \(\hat{P}_{t}\) is low, bringing low latency but a large accuracy drop. In other words, the selection of \(th_{c}\) reflects the fundamental latency-accuracy trade-off in SNNs. To compute a precise confidence threshold \(th_{c}\), we propose a low-cost Pareto Front method.
The optimization problem we are solving is essentially finding the \(th_{c}\) values that achieve the highest accuracy at each latency. Computing a Pareto Front requires a finite search space, but there are infinitely many choices of \(th_{c}\) in \([0,1]\). To address this, we discretize the search space of \(th_{c}\) with a resolution of \(0.1\), which is sufficient to find an appropriate \(th_{c}\) that significantly reduces SNN inference time according to our experimental results; this also suggests that our decision-making agent is highly robust to its parameter \(th_{c}\). After discretization, there are only 11 settings of \(th_{c}\). We then draw the accuracy-vs-latency curves of \(f_{s}(\cdot)\) under these settings in Figure 3 (the setting \(th_{c}=0\) behaves the same as \(th_{c}=0.1\) on CIFAR-10, both terminating at the first time step with a very low accuracy of 78.76%, and the setting \(th_{c}=1\) is equivalent to the baseline without any dynamic strategy; thus only 8 candidate solutions are shown in the figure), from which a series of Pareto-efficient choices of \(th_{c}\) can be obtained. A \(th_{c}\) that gives the lowest latency with no accuracy loss relative to the baseline is then chosen from these 8 candidate solutions and used in Dynamic Confidence. For example, \(th_{c}=0.6\) is selected in Figure 3.

Figure 3: The Pareto Front of ResNet-18 on CIFAR-10, together with the baseline performance (without a dynamic strategy) and the optimal solution (assuming the ground truth is accessible). Note that unlike the general form of a Pareto Front, SNNs have a temporal dimension, so their outputs are curves instead of points. Different settings of \(th_{c}\) have different accuracy-vs-latency response curves.
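A minimal sketch of this search is given below; it assumes per-sample confidence traces `conf` and predictions `preds` (both of shape \(N\times T\)) have been recorded on the validation set, and the variable names are illustrative.

```python
import numpy as np

def choose_threshold(conf, preds, labels, baseline_acc):
    """Pick the lowest-latency th_c in {0.0, 0.1, ..., 1.0} with no accuracy loss."""
    n, t_max = conf.shape
    best_latency, best_th = float(t_max), 1.0  # th_c = 1.0 reproduces the baseline
    for th_c in np.arange(0.0, 1.01, 0.1):
        exceeded = conf > th_c
        # first time step whose confidence exceeds th_c, else run all t_max steps
        t_stop = np.where(exceeded.any(1), exceeded.argmax(1), t_max - 1)
        acc = (preds[np.arange(n), t_stop] == labels).mean()
        latency = (t_stop + 1).mean()
        if acc >= baseline_acc and latency < best_latency:
            best_latency, best_th = latency, th_c
    return best_th
```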
## 6 Experiment
### Experimental Setup
**Datasets**. The proposed Dynamic Confidence is validated on CIFAR-10 and ImageNet. CIFAR-10 [31] contains 60,000 RGB images of 32x32 pixels, divided into 10 classes, with 50,000 training images and 10,000 test images. ImageNet [8] is a large object recognition dataset; the version used in our experiments is ILSVRC-2012, which has over one million labeled RGB examples, 1,000 object classes, and 50,000 images for validation.
**Network Architectures**. For CIFAR-10 we experiment with VGG-16 and ResNet-18 and for ImageNet we experiment with VGG-16 and ResNet-50. Max-pooling is replaced by average-pooling to facilitate ANN-to-SNN conversion.
**QCFS and QFFS**. One essential feature of the proposed Dynamic Confidence is that it can be applied on the fly to a wide range of ANN-to-SNN conversion methods, including standard data-based normalization [10] and 99.9% data-based normalization [47], yielding significant reductions in inference latency. However, this paper focuses on exploiting the fast-response potential of SNNs, so we only highlight results on ANN-to-SNN conversion methods that achieve ultra-low inference latency. Specifically, Dynamic Confidence is demonstrated on SNNs built with QCFS [3] and QFFS [33], both of which have demonstrated ultra-low latency with competitive accuracy on nontrivial datasets; for example, QCFS and QFFS report latencies of 64 and 4 time steps, respectively, at 72% accuracy on ImageNet. Both methods reduce SNN latency by reducing the information in the ANN model (refer to [33] for a comprehensive introduction to why reducing ANN activation precision can reduce SNN latency) and by suppressing noise in the SNN model: QCFS suppresses noise by simulating longer to amortize its negative impact, while QFFS generates negative spikes to correct it. Some recent low-latency SNN algorithms [19, 20] also follow these two approaches to amortize or correct noise.
In our experiments, we replicate QCFS and QFFS by quantizing the activations in the ANNs to 2 bits with LSQ [12] in all hidden layers and keeping the output layer at full precision. All weights are kept at full precision. We adopt the soft-reset IF neuron and its variant in SNNs, as described in [3, 33], and the neurons in the output layer only accumulate input currents [34]. Detailed equations for LSQ, QCFS, and QFFS are provided in the Supplementary Material.
\begin{table}
\begin{tabular}{c c l c c c c} \hline \hline
**Dataset** & **Architecture** & **Method** & **Acc(ANN)(\%)** & **Acc(SNN)(\%)** & **Averaged time steps** & **Latency saving(\%)** \\ \hline
CIFAR-10 & VGG-16 & QCFS & 92.41 & 92.50 & 30 & \\
CIFAR-10 & VGG-16 & **QCFS + Dynamic Confidence** & 92.41 & 92.50 & 12.69 & 58\% \\
CIFAR-10 & VGG-16 & QFFS & 92.41 & 92.41 & 6 & \\
CIFAR-10 & VGG-16 & **QFFS + Dynamic Confidence** & 92.41 & 92.41 & 3.17 & 47\% \\
CIFAR-10 & ResNet-18 & QCFS & 93.79 & 94.27 & 27 & \\
CIFAR-10 & ResNet-18 & **QCFS + Dynamic Confidence** & 93.79 & **94.27** & 11.51 & 57\% \\
CIFAR-10 & ResNet-18 & QFFS & 93.79 & 94.11 & 4 & \\
CIFAR-10 & ResNet-18 & **QFFS + Dynamic Confidence** & 93.79 & 94.11 & **2.52** & 41\% \\
CIFAR-10 & & Best Reported [3] & 95.52 & 93.96 & 4 & \\
CIFAR-10 & & Best Reported [3] & 93.12 & 93.14 & 4 & \\ \hline
ImageNet & VGG-16 & QCFS & 72.40 & 73.30 & 74 & \\
ImageNet & VGG-16 & **QCFS + Dynamic Confidence** & 72.40 & **73.30** & 49.54 & 33\% \\
ImageNet & VGG-16 & QFFS & 72.40 & 72.52 & 4 & \\
ImageNet & VGG-16 & **QFFS + Dynamic Confidence** & 72.40 & 72.52 & **2.86** & 29\% \\
ImageNet & ResNet-50 & QCFS & 72.60 & 70.72 & 128 & \\
ImageNet & ResNet-50 & **QCFS + Dynamic Confidence** & 72.60 & 70.72 & 81.81 & 36\% \\
ImageNet & ResNet-50 & QFFS & 72.60 & 73.17 & 6 & \\
ImageNet & ResNet-50 & **QFFS + Dynamic Confidence** & 72.60 & 73.17 & 4.42 & 26\% \\
ImageNet & & Best Reported [3] & 74.29 & 72.85 & 64 & \\
ImageNet & & Best Reported [3] & 71.88 & 72.10 & 4 & \\ \hline \hline
\end{tabular}
* A concurrent research [20] based on QCFS reported a higher accuracy. However, note that Dynamic Confidence is a runtime latency optimization method and can also apply to this SNN algorithm to further reduce its latency.
\end{table}
Table 1: Latency advantages brought by Dynamic Confidence in 8 different experimental settings.
**Dynamic Confidence**. As described in the approach section, there are three steps to configure Dynamic Confidence:
1. Calibration using an ANN.
2. Scaling the output logits by \(\alpha\) in the SNN converted from the calibrated ANN.
3. Calculating the confidence threshold \(th_{c}\) from the Pareto Front.
Steps 1 and 3 are conducted on the same validation set. The validation set is collected randomly from the training set and its size is 5,000 on CIFAR-10 and 50,000 on ImageNet.
During inference, the confidence is calculated from the SNN outputs and sent to the decision-making agent with threshold \(th_{c}\). A binary decision to terminate inference early is made if the confidence value surpasses \(th_{c}\).
**Training configurations**. The quantized ANNs are fine-tuned from the pre-trained full-precision ANN models for 60 epochs, with a momentum of 0.9, a weight decay of \(2.5\times 10^{-5}\), and a cross-entropy loss. The initial learning rate is 0.01 with exponential learning rate decay, and the batch size is 32. Training is implemented in PyTorch.
### Speed and power advantages by using Dynamic Confidence
While compromising accuracy can lead to even greater latency gains (by adopting a lower \(th_{c}\); related results are shown in the Supplementary Material), our primary focus in the following sections is optimizing latency without sacrificing accuracy. Table 1 reports how much the average latency can be reduced by Dynamic Confidence in 8 different settings. Note that after applying Dynamic Confidence to SNNs, the inference latency differs across examples, so the average latency is not necessarily an integer. The results show that Dynamic Confidence brings a latency reduction of 41% to 58% on CIFAR-10, and of 26% to 36% on the more challenging ImageNet, without sacrificing accuracy. These substantial improvements in latency suggest the clear benefit of allowing heterogeneous termination times. Moreover, both QCFS and QFFS already provide ultra-fast solutions for rate-coded SNNs; even so, applying Dynamic Confidence on top of these two fast SNN solutions still yields significant latency reductions.
The power cost of a given SNN is roughly constant per time step. Hence, reducing the average number of time steps required at runtime also brings nontrivial gains in power efficiency. For example, by reducing the average latency from 30 time steps to 12.69 time steps on CIFAR-10, the power consumption of SNN simulation is roughly reduced by 58%. The overhead of applying Dynamic Confidence may slightly dilute this power gain, which is discussed in Section 6.5.
### Comparing to state-of-the-art
The state-of-the-art performance of fast SNNs on CIFAR-10 and ImageNet is listed in Table 1. On CIFAR-10, the performance achieved by applying Dynamic Confidence to QFFS (94.11% in 2.52 time steps) outperforms the state-of-the-art results in both accuracy and latency. On ImageNet, we report an accuracy of 72.52% in 2.86 time steps, outperforming state-of-the-art results as well.
Not all neuromorphic hardware and SNN algorithms support the negative spikes used in QFFS. Thus, we also benchmark Dynamic Confidence on QCFS, which does not use negative spikes. As shown in the table, even without negative spikes, Dynamic Confidence still achieves an accuracy of 73.30% on ImageNet with an average latency of 49.54 time steps, outperforming the state-of-the-art method (72.85% in 64 time steps) by a significant margin.
### Latency distributions after enabling heterogeneous termination times
We record the distributions of termination times in Figure 4 to visualize how many samples Dynamic Confidence terminates early and to facilitate a better understanding of the method. The purple histogram shows the percentage of samples terminated at time step \(t\), and the green histogram shows the cumulative percentage of samples terminated at or before \(t\). The dataset is CIFAR-10 and the network architecture is ResNet-18. The termination time clearly varies across samples: a significant percentage of samples terminate at time steps 10 and 11 (26% and 29%), and up to 94.97% of samples terminate before time step 22. Even though Dynamic Confidence terminates such a large fraction of samples at very early time steps, its inference accuracy remains as high as 94.27%. This suggests that the termination decisions made by Dynamic Confidence are highly reliable.
### Overhead of Dynamic Confidence
Dynamic Confidence has been demonstrated to provide significant latency improvements, as discussed in previous sections. Because it works on the fly, it is also a promising tool for further optimizing latency and spike counts on other low-latency SNN algorithms. However, to ensure that its benefits are maximized, it is important to be mindful of the overhead introduced by implementing Dynamic Confidence, as excessive costs can offset the benefits it provides. The overhead of Dynamic Confidence at the configuration phase is trivial, as illustrated in the Supplementary Material. This section focuses on analyzing the overhead of Dynamic Confidence at runtime, particularly in the context of real-world applications where it holds greater significance.
At runtime, the main computational overhead of Dynamic Confidence is the confidence calculation by Softmax, whose computational complexity is \(\mathcal{O}(TL)\), where \(T\) is the number of time steps of the SNN simulation and \(L\) is the number of output neurons. Benefiting from the rapid development of low-latency SNN algorithms, \(T\) has been reduced significantly in recent years; for example, when running Dynamic Confidence with QFFS on CIFAR-10, \(T\) can be as small as \(4\). As for \(L\), since the main application scenario of SNNs is small-scale edge-computing tasks, \(L\) is limited in most cases; for instance, \(L\) is \(10\) for CIFAR-10 and \(11\) for DVS-Gesture. Therefore, compared to the 26% to 58% latency gains reported in previous sections, the runtime overhead of Dynamic Confidence is insignificant, especially when applying Dynamic Confidence to low-latency SNNs and edge AI applications. While low overhead is crucial to our discussion, it can be justifiable to tolerate higher overhead if a dynamic strategy achieves performance that approaches the upper bound of an SNN, such as the 82.47% accuracy on ImageNet in 4.71 time steps depicted in Figure 3.
## 7 Towards greater opportunities
**Visualizing and approaching the "potential" of SNNs**. Section 5.3.2 presents a method to calculate the tight upper-bound performance of an SNN, which opens up numerous exciting possibilities. With this approach, we can visualize the potential of an SNN when its correct termination time is accurately captured and compare it with that of another SNN. Figure 1 depicts that SNNs can achieve highly competitive accuracy (in some cases, even surpassing the capacity of the original ANN model) in just a few time steps. This potential can be explained by viewing the SNN as an ensemble network. The input-output response curve of spiking neurons in an SNN depends strongly on the termination time point: by stopping the simulation at different time steps, we effectively select a different subnetwork with a distinct input-output response curve. For example, an SNN simulated for 10 time steps is essentially an ensemble of 10 subnetworks; these subnetworks share the same weights, but their input-output response curves differ. For different input samples, we can select the most appropriate of these 10 subnetworks for inference, simply by terminating the SNN at the corresponding time point. The temporal dimension therefore makes SNNs a highly efficient ensemble method.
Given that this huge potential is already present, the key question is how to unleash it in SNNs. In this paper, we exemplify a method to partly unlock this potential by (1) formulating a high-quality estimate of the ground truth and (2) constructing a highly reliable decision-making agent.
**Implementing on neuromorphic hardware and temporal-coded SNNs.** To improve the implemented performance of Dynamic Confidence, it may be desirable to use an alternative mechanism to the Softmax function. One option is a \(k\)-winners-take-all (k-WTA) spiking neural architecture, in which lateral inhibition between output neurons gates the outputs; this could make Dynamic Confidence more amenable to direct implementation on neuromorphic hardware. Dynamic Confidence can also be applied to SNNs with timing-based encoding methods such as rank order coding [15], latency coding [43], and time-to-first-spike coding [44], and to SNNs trained with surrogate gradient methods, to further leverage the temporal information in SNNs. Timing-based encoding methods may require a metric other than Softmax to represent confidence. It is worth noting that, even when using a superior metric, the upper-bound performance achieved with that metric still aligns with our calculations.
## 8 Conclusion
This study critically examines how dynamic strategies can be applied to spike-based models to optimize inference latency via Dynamic Confidence. In essence, we formulate confidence in SNNs and use it to decide whether to terminate inference or to wait more time steps to accumulate more evidence and obtain a more reliable prediction. The major challenge in introducing the concept of confidence to SNNs is that, unlike confidence in ANNs, confidence in SNNs evolves with time, so some modifications need to be applied to leverage the information it carries. A simple decision-making agent is then constructed to decide when to stop inference. Dynamic Confidence improves SNNs in terms of latency and power, furthering their potential as a strong candidate for low-power, low-latency edge computing.

Figure 4: The distributions of termination time after applying Dynamic Confidence on CIFAR-10. The network architecture is ResNet-18. Instead of a fixed inference latency for every sample, different samples have heterogeneous termination times under Dynamic Confidence.
|
2310.12559 | Application of quantum neural network model to a multivariate regression
problem | Since the introduction of the quantum neural network model, it has been
widely studied due to its strong expressive power and robustness to
overfitting. To date, the model has been evaluated primarily in classification
tasks, but its performance in practical multivariate regression problems has
not been thoroughly examined. In this study, the Auto-MPG data set (392 valid
data points, excluding missing data, on fuel efficiency for various vehicles)
was used to construct QNN models and investigate the effect of the size of the
training data on generalization performance. The results indicate that QNN is
particularly effective when the size of training data is small, suggesting that
it is especially suitable for small-data problems such as those encountered in
Materials Informatics. | Hirotoshi Hirai | 2023-10-19T08:10:12Z | http://arxiv.org/abs/2310.12559v1 | # Application of quantum neural network model to a multivariate regression problem
###### Abstract
Since the introduction of the quantum neural network model, it has been widely studied due to its strong expressive power and robustness to overfitting. To date, the model has been evaluated primarily in classification tasks, but its performance in practical multivariate regression problems has not been thoroughly examined. In this study, the Auto-MPG data set (392 valid data points, excluding missing data, on fuel efficiency for various vehicles) was used to construct QNN models and investigate the effect of the size of the training data on generalization performance. The results indicate that QNN is particularly effective when the size of training data is small, suggesting that it is especially suitable for small-data problems such as those encountered in Materials Informatics.
## 1 Introduction
Recently, there has been a surge of interest in quantum machine learning (QML), a technique to tackle machine learning problems through the use of
quantum computing and quantum information processing [1; 2]. The Harrow-Hassidim-Lloyd (HHL) method [3] is capable of solving the linear equation \(Ax=b\) for an \(N\times N\) matrix \(A\) on a time scale of \(O(poly(\log N))\), which is expected to be exponentially faster than the conjugate gradient method (\(O(N)\)), the most efficient classical algorithm currently available. Quantum linear regression [4; 5] and quantum support vector machines [6] have been suggested as QML techniques that capitalize on this. However, these methods require a large number of quantum gates and cannot be executed on a noisy intermediate-scale quantum (NISQ) device, so we must wait for the advent of a fault-tolerant quantum computer (FTQC). On the other hand, a quantum neural network (QNN) method [7], also known as quantum circuit learning (QCL) [8], has been proposed as an algorithm for NISQ devices. It is a quantum-classical hybrid algorithm based on the variational quantum algorithm. The QNN attempts to reduce the discrepancy between the output of the quantum circuit and the labeled data by adjusting the circuit parameters to their optimal values. One advantage of the QNN is that it is able to utilize high-dimensional quantum states as trial functions, which are difficult to generate on a classical computer [8]. Another advantage is that the unitarity of quantum circuits serves as a regularization that prevents overfitting [8]. In a conventional neural network model, a regularization term is added to the cost function to limit the norm of the learning parameters and reduce the expressibility of the model to prevent overfitting. In a QNN model, by contrast, the norm of the parameters is automatically limited to 1 due to unitarity, so the regularization function is inherently provided. The exploration of QNN models is a relatively new field, and previous studies on the development of regression models have been restricted to basic single-dimensional functions [8; 9; 10]. To clarify the performance of QNN models in practical multivariate regression problems, we constructed QNN models on the Auto-MPG data set (mileage-per-gallon data for various vehicles; 392 valid data points, excluding missing data) [11] and investigated the effect of the training data size on generalization performance.
## 2 Method
### Auto-MPG dataset
In order to evaluate the effectiveness of QNN models in multivariate regression, the Auto-MPG data set [11] from the UCI Machine Learning Repository was used. This data set consists of 392 instances describing city-cycle fuel consumption in miles per gallon (MPG). It includes seven attributes that can be used to predict the MPG: cylinders (number of cylinders), displacement, horsepower, weight (vehicle weight), acceleration (time required to accelerate from 0 to 60 mph), model year (year of manufacture), and origin (country of production). Although some attributes are discrete-valued, they were not one-hot encoded (i.e., encoded as separate binary variables); instead, they were standardized and normalized to (-1, 1) in the same way as the continuous-valued attributes and used as input values to the model. We conducted three experiments with different training data sizes (1/5\(\times\)392=78, 2/5\(\times\)392=156, and 4/5\(\times\)392=312, with the remaining data used as test data) to study the effect of size on generalization performance [12].
### QNN models
The QNN architecture is composed of three components: an encoder which transforms classical data into a quantum state, an ansatz which is a quantum circuit with learning parameters, and a decoder which converts the quantum state into an output value. Here, the Ry rotation gate (the Ry gate acting on each qubit initialized to \(|0\rangle\)) was used as the encoder. The rotation angle was set to \(\theta\)=arctan(\(x\))+\(\pi/2\) for the scaled attribute \(x\). The arctangent allows the scaled attribute to be uniquely converted to a rotation angle even if the value is outside the range (-1,1) when the scaler is used for the test data. In this study, we constructed the 7-qubit model with each attribute encoded in one qubit (circuit width \(w=1\)) and the 14-qubit model with each attribute encoded in two qubits (\(w=2\)), as shown in Fig. 1. For an ansatz, we used the circuit with Ry rotation gates and CNOT gates in linear configuration (see Fig. 1). The learning parameters are the rotation angles of the Ry rotation gates. We built the model by connecting \(d\) of the depth 1-blocks (\(d\) is referred to as the depth of the circuit). The sum of the Z-axis projections of each qubit, \(\sum_{i}\sigma_{z}^{i}\), was used as a decoder.
The QNN models were implemented with Pytket [13], a Python module for quantum computing, and the quantum circuit calculations were performed using state vector calculations with the Qulacs [14] backend, a quantum computing emulator. The mean squared error (MSE) between the teacher data and the predictions was used as a cost function. The Powell method was used to optimize the learning parameters.
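For illustration, a minimal pytket sketch of the \(w=1\) circuit is given below. The parameter layout is our own assumption, and note that pytket expresses rotation angles in half-turns, hence the division by \(\pi\); the decoder \(\sum_{i}\sigma_{z}^{i}\) would then be evaluated on the Qulacs backend.

```python
import numpy as np
from pytket import Circuit

def build_qnn(x_scaled, params, depth):
    """Encoder Ry(arctan(x) + pi/2) followed by `depth` ansatz blocks (w = 1)."""
    n = len(x_scaled)  # 7 qubits, one per attribute
    circ = Circuit(n)
    for q, x in enumerate(x_scaled):
        circ.Ry((np.arctan(x) + np.pi / 2) / np.pi, q)  # encoder rotation
    k = 0
    for _ in range(depth):  # one block: trainable Ry layer + linear CNOT chain
        for q in range(n):
            circ.Ry(params[k] / np.pi, q)
            k += 1
        for q in range(n - 1):
            circ.CX(q, q + 1)
    return circ
```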
### Classical NN models
A conventional neural network (NN) model was also constructed for comparison. We employed an NN model consisting of an input layer with 7 nodes, two hidden layers with 100 nodes each, and an output layer with a single node. ReLU was used as the activation function. PyTorch [15] was used to build and train the NN model. The Adam optimizer [16], an extended version of stochastic gradient descent, was used with a learning rate of 0.02 over 10,000 epochs. L2 regularization was applied to prevent overfitting. The training data were divided into two parts, with 80% used for training and the remaining 20% used for validating the L2 regularization weight parameter.
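A minimal PyTorch sketch of this baseline is shown below; the `weight_decay` value is illustrative, since the actual L2 weight is selected on the held-out 20% split, and an MSE objective is assumed for the regression task.

```python
import torch
import torch.nn as nn

# Baseline regressor: 7 input attributes -> 100 -> 100 -> 1 output, ReLU activations.
model = nn.Sequential(
    nn.Linear(7, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 1),
)
# L2 regularization enters through Adam's weight_decay (value here is illustrative).
optimizer = torch.optim.Adam(model.parameters(), lr=0.02, weight_decay=1e-4)
loss_fn = nn.MSELoss()
```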
Figure 1: Quantum circuits (ansatz) used for QNN models, the left side: 7-qubit model (\(w=1\)), the right side: 14-qubit model (\(w=2\)).
## 3 Results and discussion
The \(R^{2}\) (coefficient of determination) values on the training data (expressibility) for each QNN model are shown in Fig. 2.
Figure 2 also shows the results of the classical NN models for comparison. A QNN model with a deeper (larger \(d\)) or wider (larger \(w\)) circuit has stronger expressibility. The \(R^{2}\) of the QNN models decreases as the size of the training data increases, which is the same behavior as the classical NN model with regularization. On the other hand, the classical NN model without regularization shows a perfect fit (\(R^{2}=1\)) at every data size. These results indicate that the automatic regularization worked in the QNN models.
Figure 2: \(R^{2}\) values for the training data.
The \(R^{2}\) values on the test data (generalization performance) for each QNN model and the classical NN models are shown in Fig. 3.
QNNs show superior generalization performance compared to the classical NN models when the amount of training data is limited. As the size of the training data increases, the gap between the QNN and the classical NN models narrows, and the smallest QNN model (\(d=3\), \(w=1\)) performs worse than the classical NN model at the largest data size. The smallest QNN model has limited expressibility due to its limited number of learning parameters. Conversely, a QNN model with a deeper or wider circuit has sufficient expressibility, and its generalization performance is superior to that of the classical NN models. This distinction is particularly evident when the amount of training
Figure 3: \(R^{2}\) values for the test data.
data is limited, and the QNN model is considered especially effective for small datasets. This advantage has also been confirmed in classification problems [17]. Thus, QNNs are considered particularly promising for Materials Informatics (MI) problems with small amounts of data [18, 19]. The use of QNNs offers a great advantage in that they do not need explicit regularization and can still achieve excellent generalization performance without hyperparameter tuning. Circuit width and depth may be adjustable parameters, but it appears that they should be set to roughly the same level as the number of attributes. Since the problems handled by typical MI do not require a large number of attribute variables and can be adequately computed by emulation on a classical computer without using an actual quantum computer, the method may also be used as a quantum-inspired algorithm. As a challenge, QNNs have been thought to require more training time than classical NN models. However, a recent study [7] has shown that, with the use of a specific ansatz, QNNs can be more trainable than classical NNs, raising high expectations for their future progress.
## 4 Conclusion
We constructed QNN models on the Auto-MPG data set to clarify the performance of QNN models in practical multivariate regression problems. Compared to classical NN models, the QNNs showed better generalization performance when the size of the training data was limited. The results indicate that QNN is particularly effective when the data size is small, making it especially suitable for small-data problems such as those encountered in Materials Informatics.
|
2303.00703 | Nearest Neighbors Meet Deep Neural Networks for Point Cloud Analysis | Performances on standard 3D point cloud benchmarks have plateaued, resulting
in oversized models and complex network design to make a fractional
improvement. We present an alternative to enhance existing deep neural networks
without any redesigning or extra parameters, termed as Spatial-Neighbor Adapter
(SN-Adapter). Building on any trained 3D network, we utilize its learned
encoding capability to extract features of the training dataset and summarize
them as prototypical spatial knowledge. For a test point cloud, the SN-Adapter
retrieves k nearest neighbors (k-NN) from the pre-constructed spatial
prototypes and linearly interpolates the k-NN prediction with that of the
original 3D network. By providing complementary characteristics, the proposed
SN-Adapter serves as a plug-and-play module to economically improve performance
in a non-parametric manner. More importantly, our SN-Adapter can be effectively
generalized to various 3D tasks, including shape classification, part
segmentation, and 3D object detection, demonstrating its superiority and
robustness. We hope our approach could show a new perspective for point cloud
analysis and facilitate future research. | Renrui Zhang, Liuhui Wang, Ziyu Guo, Jianbo Shi | 2023-03-01T17:57:09Z | http://arxiv.org/abs/2303.00703v1 | # Nearest Neighbors Meet Deep Neural Networks for Point Cloud Analysis
###### Abstract
Performances on standard 3D point cloud benchmarks have plateaued, resulting in oversized models and complex network design to make a fractional improvement. We present an alternative to enhance existing deep neural networks without any redesigning or extra parameters, termed as **S**patial-**Neighbor** Adapter (**SN-Adapter**). Building on any trained 3D network, we utilize its learned encoding capability to extract features of the training dataset and summarize them as prototypical spatial knowledge. For a test point cloud, the SN-Adapter retrieves \(k\) nearest neighbors (\(k\)-NN) from the pre-constructed spatial prototypes and linearly interpolates the \(k\)-NN prediction with that of the original 3D network. By providing complementary characteristics, the proposed SN-Adapter serves as a plug-and-play module to economically improve performance in a non-parametric manner. More importantly, our SN-Adapter can be effectively generalized to various 3D tasks, including shape classification, part segmentation, and 3D object detection, demonstrating its superiority and robustness. We hope our approach could show a new perspective for point cloud analysis and facilitate future research.
## 1 Introduction
3D vision has wide usage in robotics and AI. Many methods have been proposed to tackle 3D tasks, including object recognition [26, 27, 13, 2, 43, 51] and scene-level understanding [1, 36, 4, 54, 22, 10]. Existing 3D methods are built upon learnable deep neural networks and benefit from their ability to process irregular point clouds. Starting from the concise PointNet [26], subsequent works upgrade it with hierarchical architectures [27, 2], point-based convolutions [21, 32, 44], attention mechanisms [15], and so on [40, 43].
Recent works have focused on inserting complicated modules or excessively increasing network parameters to boost benchmark scores. This trend has not only harmed the efficiency of training and inference, but also gradually saturated the benchmarks. As examples for shape classification on ModelNet40 [41], CurveNet [43] delicately explores a set of spatial curves for aggregating local geometry, which leads to 10\(\times\) slower training and 20\(\times\) slower inference compared to PointNet++ [27]. PointMLP [2] adds +11.9M parameters for only a +0.5% accuracy boost, a 19\(\times\) larger model than its elite version [2]. Therefore, we ask the question: _could we boost the performance of existing 3D networks at the least cost, even without additional parameters or re-training?_
We develop a non-parametric adapter module that retrieves 3D prototypical knowledge from spatial neighbors, named **SN-Adapter**. It draws on the idea of the \(k\) nearest neighbors algorithm (\(k\)-NN) and can directly enhance existing trained 3D deep neural networks without extra training. As shown in Figure 1, our SN-Adapter is implemented in two steps: pre-construction of 3D prototypical knowledge and inference-time enhancement by interpolation. Specifically, we theoretically split a trained network into two parts. The first is the feature extractor, which encodes an input raw point cloud into high-dimensional representations. The second, usually the last linear layer of the network, is named the 3D classifier; it categorizes the encoded vectors with classification logits. Using a trained extractor, we first obtain all the high-dimensional features of point clouds from the training dataset. For different 3D tasks, we summarize the features as various forms of prototypical spatial knowledge, e.g., sample-wise, part-wise, and object-wise prototypes in the top-right of Figure 1. During inference, the SN-Adapter is appended to the feature extractor and utilizes \(k\)-NN to retrieve 3D knowledge from the pre-constructed prototypes. Finally, we linearly interpolate the classification logits concurrently produced by the SN-Adapter and the trained 3D classifier, by which the original 3D network can be improved at marginal extra cost.
Through experimental analysis, we observe that the enhancement results from the complementary characteristics of the trained 3D classifier and our SN-Adapter: the former is learned to fit the training set, while the latter reveals feature-level similarities among 3D prototypes. By extensive experiments, our SN-Adapter is verified to widely improve the performance of existing methods on different 3D tasks, such as +1.34% classification accuracy on ModelNet40 [41], +0.17% segmentation mIoU on ShapeNetPart [46], and +7.34% detection AR on ScanNetV2 [7].
Our main contributions are summarized as follows:
1. We propose SN-Adapter, a plug-and-play module to assist 3D deep neural networks via \(k\)-NN for better point cloud analysis.
2. By retrieving knowledge from the pre-constructed spatial prototypes, SN-Adapter efficiently improves the already trained models without any parameters or retraining.
3. We conduct complete experiments on various 3D benchmarks to demonstrate the effectiveness and robustness of our approach.
## 2 Related Work
**Deep Learning for 3D Point Clouds**. Point cloud based shape classification on synthetic data [41] and real-world data [34] has been widely studied by PointNet [26], PointNet++ [27], and others [43, 40, 2, 21, 13, 53, 49, 52]. Part segmentation [46] and scene segmentation [7, 30] require per-point classification; the corresponding methods [22, 10, 36, 4, 54] normally extend feature decoders upon the classification networks to densely propagate the extracted features. 3D object detection has wide usage in, _e.g_., autonomous driving [5, 24, 20, 17] and robotics [29, 6, 23]. Our SN-Adapter can be generalized to all these 3D tasks, including shape classification, part segmentation, and 3D object detection, demonstrating its robustness for point cloud analysis.
**Feature Adapters in Computer Vision**. The feature adapter is a light-weight module to efficiently adapt large-scale pre-trained models for downstream tasks. Motivated by the adapter in NLP [16], CLIP-Adapter [12], Tip-Adapter [48], and CoMo [47] introduce visual adapters using CLIP for few-shot image classification: freeze the pre-trained parameters of CLIP and only fine-tune the adapters of two-layer MLP. Follow-up works have successfully applied adapters to tasks such as 3D open-world learning [50, 55], image captioning [31], object detection [11], semantic segmentation [28], and video analysis [38]. Compared to previous works, our SN-Adapter is efficient, non-parametric, and aims at the tasks for 3D point clouds. We leverage the idea of \(k\) nearest neighbors to enhance the trained 3D networks without re-training.
Figure 1: **The Pipeline of SN-Adapter by Two Steps: (a) and (b).** We divide the already trained deep neural network into the feature extractor and the 3D classifier, whose weights are frozen without fine-tuning. In **a)**, we extract the 3D training features and construct the spatial prototypical knowledge. In **b)**, we introduce SN-Adapter to conduct \(k\)-NN retrieval from the task-specific prototypes for non-parametric enhancement.
**Nearest-Neighbor Algorithm**. The Nearest-Neighbor Algorithm memorizes the training data and predicts labels based on the \(k\) nearest training samples (\(k\)-NN). Compared with neural networks, \(k\)-NN is still favored for its simplicity and efficiency. Models based on nearest-neighbor retrieval provide strong baselines for many tasks, such as image captioning [8, 9], image restoration [25], few-shot learning [39], and representation learning [3, 37]. Besides computer vision, the Nearest-Neighbor Algorithm also plays an important role in language tasks, _e.g_., language modeling [14, 19] and machine translation [18, 33]. Different from the above domains, for the first time, we explore how to augment existing deep neural networks with the Nearest-Neighbor Algorithm for 3D point cloud analysis and propose the SN-Adapter with spatial prototypical knowledge retrieval.
## 3 Method
In this section, we respectively illustrate how our proposed Spatial-Neighbor Adapter (SN-Adapter) benefits the three 3D tasks: shape classification, part segmentation, and 3D object detection.
### Shape Classification
**Task Description**. Given a trained 3D network for classification, we theoretically divide it into two parts: the feature extractor \(\Phi(\cdot)\) and the 3D classifier \(\Theta(\cdot)\). The feature extractor takes as input a raw point cloud \(\{p_{i}\}_{i=1}^{N}\) of \(N\) points and outputs its \(C\)-dimensional global feature \(f\in\mathbb{R}^{C}\). The 3D classifier then maps \(f\) into classification logits of \(K\) categories, \(l^{cls}\in\mathbb{R}^{K}\), which denote the predicted probability for each category. We formulate them as
\[l^{cls}=\Theta(f);\ \ f=\Phi(\{p_{i}\}_{i=1}^{N}). \tag{1}\]
Normally, \(\Phi(\cdot)\) is invariant to the permutation of points with a pooling operation to capture the global characters, and \(\Theta(\cdot)\) corresponds to the last linear projection layer of the network.
**Sample-wise Spatial Prototypes**. For shape classification, we construct sample-wise spatial prototypes to retrieve 3D knowledge for each test point cloud. First, we utilize the trained feature extractor \(\Phi(\cdot)\) to obtain the global features of all \(M\) samples in the training set, denoted as \(F^{cls}\in\mathbb{R}^{M\times C}\). As each training sample is represented by only a single global vector, we can afford to store all \(M\) features \(F^{cls}\) as the spatial prototypes, preserving the complete prior 3D knowledge, denoted as \(Proto^{cls}\in\mathbb{R}^{M\times C}\). To further explore the spatial distributions of different point clouds, we also obtain a global positional vector for each training sample by averaging the 3D positional encodings [35] of all input points, which is directly added to \(Proto^{cls}\). During inference, we extract the global feature \(f\) of the test point cloud and linearly interpolate the two classification logits predicted by the 3D classifier and our SN-Adapter, formulated as
\[l^{cls}=\Theta(f)+\gamma\text{SN-Adapter}(f,\ Proto^{cls}), \tag{2}\]
where \(\gamma\) denotes the relative weight between the two logits.
**SN-Adapter**. For all 3D tasks alike, the SN-Adapter conducts the \(k\)-NN algorithm to aggregate the \(k\)-nearest spatial knowledge and adopts the Euclidean distance as the distance metric between \(f\) and \(Proto^{cls}\). We denote the retrieved \(k\)-nearest prototypes as \(\mathcal{N}\) and the category set as \(\mathcal{C}\). Then, the predicted probability of category \(c\in\mathcal{C}\) in the logits is calculated as
\[Prob(c|f)=\frac{\sum_{pt\in\mathcal{N}_{c}}1/d(f,pt)}{\sum_{c^{\prime}\in\mathcal{C}}\sum_{pt\in\mathcal{N}_{c^{\prime}}}1/d(f,pt)}, \tag{3}\]
where \(\mathcal{N}_{c}\) denotes the retrieved prototypes of the \(c\) category, and \(d(f,pt)\) denotes the distance between test point cloud's feature \(f\) and the prototype \(pt\).
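A minimal sketch of Eqs. (2) and (3) is given below; `protos`, `proto_labels`, and `gamma` are hypothetical tensors/values prepared as described above.

```python
import torch

def sn_adapter(f, protos, proto_labels, k, num_classes):
    """Inverse-distance-weighted k-NN probabilities over stored prototypes (Eq. 3)."""
    d = torch.cdist(f.unsqueeze(0), protos).squeeze(0)  # (M,) Euclidean distances
    d_k, idx = d.topk(k, largest=False)                 # k nearest prototypes
    w = 1.0 / (d_k + 1e-8)                              # inverse-distance weights
    probs = torch.zeros(num_classes)
    probs.scatter_add_(0, proto_labels[idx], w)         # sum weights per category
    return probs / probs.sum()

# Interpolated prediction (Eq. 2):
# logits = classifier(f) + gamma * sn_adapter(f, protos, proto_labels, k, K)
```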
### Part Segmentation
**Task Description**. The part segmentation task requires the network to classify each point in the input point cloud. Here, \(\Phi(\cdot)\) is developed as an encoder-decoder architecture and outputs the extracted features \(\{f_{i}\}_{i=1}^{N}\) for all \(N\) points. We formulate this as
\[\{l_{i}^{seg}\}_{i=1}^{N}=\{\Theta(f_{i})\}_{i=1}^{N};\ \ \ \{f_{i}\}_{i=1}^{N}=\Phi(\{p_{i}\}_{i=1}^{N}), \tag{4}\]
where \(l_{i}^{seg}\in\mathbb{R}^{K}\) denotes the classification logits of the \(i\)-th point. Here, \(\Theta(\cdot)\) is shared for every point and maps the point feature into logits of \(K\) part categories.
**Part-wise Spatial Prototypes**. We construct part-wise spatial prototypes to retrieve 3D knowledge for every single point of the test point cloud. Since classification logits are to be produced for each point, we would need to extract and memorize the features of all \(N\) points from the \(M\) training samples as prototypical knowledge. However, storing \(F^{seg}\in\mathbb{R}^{M\times N\times C}\) would be prohibitively expensive, let alone running \(k\)-NN retrieval over it. Therefore, for each training sample, we propose to obtain its part-wise prototypical features by average pooling the points of the same part category, denoted as Part_Pooling\((\cdot)\). For example, a point cloud of a chair is annotated with three parts: leg, seat, and back. Then, we only need to store three prototypical features for this training sample, of dimension \(\mathbb{R}^{3\times C}\). After the pre-construction, we acquire the spatial prototypical knowledge for part segmentation, \(Proto^{seg}\), which is
space-efficient and still in the same order as \(Proto^{cls}\), formulated as
\[Proto^{seg}=\text{Part\_Pooling}(F^{seg})\in\mathbb{R}^{M\times P\times C}, \tag{5}\]
where \(P\) is the maximum part category number of an object in the dataset, which is no more than six in ShapeNetPart [46]. During inference, after extracting the features \(\{f_{i}\}_{i=1}^{N}\), we combine the two classification logits for each \(N\) point of the test point cloud, formulated as
\[\{l_{i}^{seg}\}_{i=1}^{N} =\{\Theta(f_{i}) \tag{6}\] \[+\gamma\text{SN-Adapter}(f_{i},\ Proto^{seg})\}_{i=1}^{N}.\]
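A minimal sketch of the Part_Pooling\((\cdot)\) operation in Eq. (5), applied to one training sample, is shown below; `point_feats` (shape \(N\times C\)) and `part_labels` (shape \(N\)) are assumed to come from the trained encoder-decoder.

```python
import torch

def part_pooling(point_feats, part_labels, max_parts):
    """Average-pool point features within each part category (Eq. 5)."""
    protos = torch.zeros(max_parts, point_feats.shape[1])
    for p in part_labels.unique():
        protos[p] = point_feats[part_labels == p].mean(0)  # one prototype per part
    return protos  # (P, C); rows for parts absent from this object remain zero
```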
### 3D Object Detection
**Task Description**. Taking a scene-level point cloud as input, the 3D object detector learns to localize and classify objects in 3D space. The detector first utilizes \(\Phi(\cdot)\) to extract scene-level 3D features and groups the features for each object proposal, denoted as \(\{f_{i}\}_{i=1}^{O}\), where \(f_{i}\in\mathbb{R}^{C}\) and \(O\) is the number of object proposals in the scene. Then, several parallel MLP-based heads predict the category, 3D position, and other attributes of each object proposal. We formulate the main process as
\[\{l_{i}^{det}\}_{i=1}^{O} =\{\Theta_{cls}(f_{i})\}_{i=1}^{O}; \tag{7}\] \[\{p_{i}^{det}\}_{i=1}^{O} =\{\Theta_{pos}(f_{i})\}_{i=1}^{O};\] \[\{f_{i}\}_{i=1}^{O} =\Phi(\{p_{i}\}_{i=1}^{N}),\]
where \(\Theta_{cls}(\cdot)\) and \(\Theta_{pos}(\cdot)\) are responsible for predicting the classification logits \(l_{i}^{det}\in\mathbb{R}^{K}\) and the 3D position \(p_{i}^{det}\in\mathbb{R}^{3}\), which are shared for all object proposals. After this, the Non-Maximum Suppression (3D NMS) is applied to discard the duplicated predictions in the 3D space, which is significant to the final evaluation metric.
**Object-wise Spatial Prototypes**. We construct object-wise spatial prototypes to retrieve 3D knowledge for each object proposal in the test point cloud. We first leverage the trained 3D detector to obtain the extracted object features \(F^{det}\in\mathbb{R}^{M\times O\times C}\) and the predicted 3D positions \(P^{det}\in\mathbb{R}^{M\times O\times 3}\) for all training samples. On top of that, we adopt positional encodings [35] based on trigonometric functions to embed \(P^{det}\) and add them onto \(F^{det}\). This provides \(F^{det}\) with sufficient 3D positional information about the objects and facilitates the \(k\)-NN retrieval of the SN-Adapter. We then calculate the spatial prototypical knowledge for 3D object detection as
\[Proto^{det}=F^{det}+\text{PE}(P^{det})\in\mathbb{R}^{M\times O\times C}, \tag{8}\]
where PE denotes the positional encodings function.
During inference, we acquire the predicted \(f_{i},l_{i}^{det},p_{i}^{det}\) for each object proposal and aggregate them likewise via positional encodings. The SN-Adapter retrieves spatial knowledge from the nearest neighbors and enhances the classification logits predicted by \(\Theta_{cls}\), formulated as
\[\{l_{i}^{det}\}_{i=1}^{O} =\{\Theta_{cls}(f_{i}) \tag{9}\] \[+\gamma\text{SN-Adapter}(f_{i}+\text{PE}(p_{i}^{det}),\ Proto^{det} )\}_{i=1}^{O}.\]
Our SN-Adapter is inserted before the 3D NMS operation, which can rectify some 'false' classifications made by \(\Theta_{cls}\) and effectively avoid the removal of 'true' bounding boxes.
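As a sketch, one common trigonometric form of PE\((\cdot)\) is shown below; the exact frequency bands of [35] are an assumption here, and \(C\) is assumed divisible by 6.

```python
import torch

def positional_encoding(xyz, c):
    """Trigonometric encoding of predicted 3D positions into C-dim vectors (Eq. 8)."""
    freqs = 2.0 ** torch.arange(c // 6).float()       # per-axis frequency bands
    angles = xyz.unsqueeze(-1) * freqs                # (O, 3, c // 6)
    pe = torch.cat([angles.sin(), angles.cos()], -1)  # (O, 3, c // 3)
    return pe.flatten(1)                              # (O, C), added onto F^det
```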
## 4 Analysis
### Quantitative Analysis
Here, we take PointNet [26] for shape classification on ModelNet40 [41] as an example. First, in Figure 2 we compare the performance of PointNet's 3D classifier and the SN-Adapter individually against the interpolated model: the interpolated model achieves higher accuracy for most categories. Though the SN-Adapter performs much worse than the learned 3D classifier on some categories, it can in turn enhance the 3D classifier through interpolation. Specifically, we present the statistic for interpolated predictions whose two individual predictions, from the 3D classifier and the SN-Adapter, are inconsistent. As shown in Table 1, when
\begin{table}
\begin{tabular}{c c c c} \hline \hline PointNet & SN-Adapter & Interpolation & Number \\ \hline ✓ & ✗ & ✓ & 72 \\ ✓ & ✗ & ✗ & 36 \\ ✗ & ✓ & ✓ & 68 \\ ✗ & ✓ & ✗ & 10 \\ ✗ & ✗ & ✓ & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Statistic of sample numbers where individual models produce different predictions. ✓ and ✗ denote correct and wrong predictions, respectively.**
Figure 2: **Comparison of the individual PointNet, SN-Adapter, and the interpolated model for different categories. We show the overall accuracy (OA) of 40 categories on ModelNet40 [41].**
the original PointNet is wrong but the SN-Adapter is correct, our SN-Adapter helps rectify nearly 90% (68/(68+10)) of the predictions. More surprisingly, we observe that even if both PointNet and the SN-Adapter are wrong, the interpolated model can still obtain the correct result, demonstrating the implicit complementary knowledge between the learned 3D classifier and the spatial prototypes.
To further illustrate the complementarity of the SN-Adapter, we present the predicted classification logits before the softmax function for cases where the SN-Adapter corrects a false prediction of PointNet. As shown in Figure 3 (a), PointNet's predicted values for 'night-stand' and 'table' are close, indicating it is difficult for PointNet to distinguish them. In contrast, the SN-Adapter produces more discriminative values between the two categories and addresses PointNet's ambiguity through an ensemble with a large weight \(\gamma\). As for Figure 3 (b), when PointNet confidently predicts the wrong category, our SN-Adapter can put the final prediction back on track with the confidence score for the correct category. Figure 3 (c) shows that even when both predictions are wrong, interpolation with the SN-Adapter can still lead to the right answer.
### Qualitative Analysis
Why does \(k\)-NN retrieval work for point cloud analysis? For one thing, due to the difficulty of data acquisition, the 3D community lacks large-scale high-quality training datasets, and existing methods can only learn from limited samples. In this situation, representative 3D prototypes become much more significant, since their construction is not overly dependent on the data distribution and they can well represent the typical features of a category. In contrast, the 3D classifiers of deep neural networks suffer greatly from long-tail distributions of the training data. That is, when the 3D samples of some categories are insufficient during training, the learnable classifier does not form a prediction preference for those unusual categories and fails to recognize them at test time. The \(k\)-NN over spatial prototypes inherently overcomes such category imbalance via similarity-based retrieval, which hardly depends on the amount of training data.
### Theoretical Analysis
We start from the perspective of the learned embedding space to illustrate how the SN-Adapter boosts learned deep neural networks. The \(k\)-NN algorithm of the SN-Adapter associates pre-constructed spatial prototypes in close proximity; these adjacent prototypes normally have the same ground-truth labels and share similar semantic knowledge. Spatially, the entire 3D space can be divided into many discrete spherical regions. We define a spherical region in the embedding space as \(N_{\epsilon}(x)=\{x^{\prime}:\|x^{\prime}-x\|_{2}\leq\epsilon\}\), where \(x\) denotes the spherical center and \(\epsilon\) its radius. The goal of the SN-Adapter is, based on the extracted feature of a test point cloud, to retrieve such clusters and obtain representative knowledge from them.
For better retrieval performance, each spherical center prefers a sufficiently pure spherical region. In other words, the information of the representative prototypes should be convincing enough, formulated as \(\forall x^{\prime}\in N_{\epsilon}(x),\;gt(x^{\prime})=gt(x)\), where \(gt(\cdot)\) denotes the ground-truth label. We then define \(C(N_{\epsilon})\) and \(P(N_{\epsilon})\) as the coverage and purity of all spherical regions. Optimal coverage \(C(N_{\epsilon})\) desires \(\epsilon\) to be large enough to cover the entire space, while purity requires a smaller \(\epsilon\) that contains as few deviating prototypes as possible. Therefore, we need to trade off coverage against purity. Formally, we seek the specific \(\epsilon\) that satisfies \(\epsilon^{*}=\max\{\epsilon:P(N_{\epsilon})\geq\alpha\}\), where \(\alpha\) serves as a purity threshold on \(P(N_{\epsilon})\) and the maximization over \(\epsilon\) helps increase \(C(N_{\epsilon})\). In our experiments, we do not explicitly set the value of \(\alpha\), but instead leverage an appropriate number \(k\) of nearest neighbors to implicitly obtain the optimal trade-off for better retrieving prototypical knowledge.
Figure 3: **Classification logits of PointNet, SN-Adapter, and the interpolated model**. We report the numerical logits before the softmax function and denote different categories with different colors. We highlight the category with the highest logit value with a box and the ground-truth category with a check mark.
## 5 Experiments
### Shape Classification
SettingsWe evaluate our SN-Adapter on two widely adopted datasets for shape classification: ModelNet40 [41] and ScanObjectNN [34]. We select several representative methods and append SN-Adapter to them: PointNet [26], PointNet++ [27], SpiderCNN [45], DGCNN [40], PCT [15], CurveNet [43], and PointMLP [2]. We set the last linear layer as \(\Theta(\cdot)\) and all the preceding layers as \(\Phi(\cdot)\). The overall accuracy (OA) and class-average accuracy (mAcc) are adopted as evaluation metrics. Note that, as our SN-Adapter requires no training, we utilize simple loops to search for the best \(k\) within minutes, as sketched below.
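To make the \(k\)-search concrete, the following is a minimal sketch of such a loop. The variable names, feature dimensions, class-vote form of the retrieval, and the candidate \(\gamma\) grid are our illustrative assumptions, not the authors' implementation; in practice the prototypes and logits would come from the trained \(\Phi(\cdot)\) and \(\Theta(\cdot)\).

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
# Random stand-ins for features from Phi(.) and baseline logits from Theta(Phi(.)).
proto_feats  = jax.random.normal(key, (1000, 256))          # spatial prototypes
proto_labels = jax.random.randint(key, (1000,), 0, 40)
test_feats   = jax.random.normal(key, (100, 256))
test_labels  = jax.random.randint(key, (100,), 0, 40)
base_logits  = jax.random.normal(key, (100, 40))

def knn_logits(test_feats, proto_feats, proto_labels, num_classes, k):
    # Euclidean distances between test features and all spatial prototypes.
    dists = jnp.linalg.norm(test_feats[:, None, :] - proto_feats[None, :, :], axis=-1)
    knn_idx = jnp.argsort(dists, axis=1)[:, :k]             # k nearest prototypes
    # Class votes among the retrieved prototypes act as the SN-Adapter "logits".
    votes = jax.nn.one_hot(proto_labels[knn_idx], num_classes)
    return votes.sum(axis=1)

best_acc, best_k, best_gamma = 0.0, None, None
for k in range(1, 129):                                     # simple loop over k
    sn_logits = knn_logits(test_feats, proto_feats, proto_labels, 40, k)
    for gamma in (1.0, 2.0, 4.0, 8.0):                      # interpolation ratio
        pred = jnp.argmax(base_logits + gamma * sn_logits, axis=-1)
        acc = float(jnp.mean(pred == test_labels))
        if acc > best_acc:
            best_acc, best_k, best_gamma = acc, k, gamma
```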
PerformanceIn Table 2 and Table 3, we show the enhancement results of SN-Adapter on the two datasets, respectively. On the synthetic ModelNet40 [41], PointNet++ is boosted by +1.06% OA, surpassing the more complicated DGCNN by +1.30%. On the real-world ScanObjectNN [34], SN-Adapter shows stronger complementary characteristics to the trained networks, boosting PointNet by +1.9% OA and PointNet++ by +1.3% OA. For the state-of-the-art PointMLP, our SN-Adapter improves it by +0.6% OA and +0.6% mAcc.
### Part Segmentation
SettingsFor part segmentation, we test our SN-Adapter on the ShapeNetPart [46] dataset and select four baseline models: DGCNN [40], PointNet++ [27], PointMLP [2] and CurveNet [43]. All other settings follow the shape classification experiments, and we report the mean IoU across all instances in the dataset, denoted as mIoU\({}_{I}\).
PerformanceAs the part segmentation benchmark has long been saturated, even a slight improvement in mIoU\({}_{I}\) is worth mentioning. In Table 4, we observe that the biggest improvement of +0.17% mIoU\({}_{I}\) is on PointMLP, compared to CurveNet's +0.11% and DGCNN's +0.09%. This indicates that a stronger feature encoder \(\Phi(\cdot)\) contributes better part-wise prototypes for the retrieval of SN-Adapter.
### 3D Object Detection
SettingsFor 3D object detection on ScanNetV2 [7], we select VoteNet [10] and 3DETR-m [22] as the baseline models to test our SN-Adapter. We set the MLP-based classification head as \(\Theta(\cdot)\) and the scene-level feature extractor as \(\Phi(\cdot)\). The SN-Adapter is inserted after \(\Theta(\cdot)\) and before the 3D NMS. We report the mean Average Precision (AP\({}_{25}\)) and mean Average Recall (AR\({}_{25}\)) at the 0.25 IoU threshold. For time efficiency, the hyperparameter \(k\) is simply set to 32 for the two detectors.
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & mIoU\({}_{I}\) (\%) & \(k\) \\ \hline DGCNN [40] & 85.17 & - \\ + SN-Adapter & **85.26** & 22 \\ PointNet++ [27] & 85.40 & - \\ + SN-Adapter & **85.47** & 1 \\ PointMLP [2] & 85.69 & - \\ + SN-Adapter & **85.86** & 1 \\ CurveNet [43] & 86.58 & - \\ + SN-Adapter & **86.69** & 64 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Part segmentation on ShapeNetPart [46].**
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & OA (\%) & mAcc (\%) & \(k\) \\ \hline PointNet [26] & 89.34 & 85.79 & - \\ + SN-Adapter & **90.68** & **86.47** & 21 \\ PointNet++ [27] & 92.42 & 89.22 & - \\ + SN-Adapter & **93.48** & **90.00** & 77 \\ DGCNN [40] & 92.18 & 89.10 & - \\ + SN-Adapter & **92.99** & **89.70** & 24 \\ PCT [15] & 93.27 & 89.99 & - \\ + SN-Adapter & **93.56** & **90.17** & 110 \\ CurveNet [43] & 93.84 & 91.14 & - \\ + SN-Adapter & **94.25** & **91.50** & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Shape classification on ModelNet40 [41] dataset.**
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & OA (\%) & mAcc (\%) & \(k\) \\ \hline PointNet [26] & 68.2 & 63.4 & - \\ + SN-Adapter & **70.1** & **64.2** & 128 \\ SpiderCNN [45] & 73.7 & 69.8 & - \\ + SN-Adapter & **74.4** & **70.5** & 68 \\ PointNet++ [27] & 77.9 & 75.4 & - \\ + SN-Adapter & **79.2** & **76.2** & 16 \\ DGCNN [40] & 78.1 & 73.6 & - \\ + SN-Adapter & **78.9** & **74.0** & 140 \\ PointMLP [2] & 85.7 & 84.0 & - \\ + SN-Adapter & **86.3** & **84.6** & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Shape classification on ScanObjectNN [34] dataset.**
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & AP\({}_{25}\) (\%) & AR\({}_{25}\) (\%) \\ \hline VoteNet [10] & 57.84 & 80.92 \\ + SN-Adapter & **58.46** & **83.74** \\
3DETR-m [22] & 64.60 & 77.22 \\ + SN-Adapter & **65.16** & **84.56** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **3D object detection on ScanNetV2 [7].**
PerformanceTable 5 presents the enhanced detection performance with SN-Adapter. For AR\({}_{25}\), we achieve significant improvements of +2.82% on VoteNet and +7.34% on 3DETR-m. This indicates that the spatial prototypical knowledge can effectively prevent 3D NMS from discarding correct bounding boxes as false duplicates. More specifically, some spatially neighboring boxes with incorrectly similar scores, which would otherwise be removed by 3D NMS, can be rectified and retained as outputs.
### Ablation Study
Main hyperparametersWe here conduct an ablation study on two hyperparameters: \(\gamma\) and \(k\). We adopt PointNet [26] with SN-Adapter and experiment with shape classification on ModelNet40 [41]. As \(\gamma\) varies from 0 to 50 in Figure 4, the enhancement of SN-Adapter peaks around 8 but becomes harmful after 10. This indicates that SN-Adapter would adversely affect the baseline model if weighted too heavily, and requires a proper interpolation ratio to best introduce the spatial prototypical knowledge. The results in Figure 5 show that our SN-Adapter is not very sensitive to \(k\) once it is large enough (over 80), since such a \(k\) already covers the prototypes that contribute most to the final classification.
Distance metrics for retrieval.Different distance metrics for SN-Adapter affect the retrieval of the nearest spatial prototypes, which in turn leads to different performance enhancements over the baseline models. We evaluate our SN-Adapter with different distance metrics for shape classification on ModelNet40 [41] and adopt three baseline models: PointNet [26], DGCNN [40], and CurveNet [43]. As reported in Table 6, Euclidean distance performs best for all three baseline models, as it better reveals the point distribution in 3D space; a sketch of the candidate metrics follows.
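For reference, the following is a minimal sketch of several of the pairwise distance computations from Table 6, written as dense broadcasts; the function name and the small numerical epsilon in the Canberra and Bray-Curtis forms are our own illustrative choices.

```python
import jax.numpy as jnp

def pairwise_distance(a, b, metric="euclidean"):
    """Pairwise distances between rows of a (N, D) and b (M, D)."""
    diff = a[:, None, :] - b[None, :, :]                    # (N, M, D)
    if metric == "euclidean":
        return jnp.sqrt((diff ** 2).sum(-1))
    if metric == "manhattan":
        return jnp.abs(diff).sum(-1)
    if metric == "chebyshev":
        return jnp.abs(diff).max(-1)
    if metric == "canberra":
        denom = jnp.abs(a[:, None, :]) + jnp.abs(b[None, :, :]) + 1e-12
        return (jnp.abs(diff) / denom).sum(-1)
    if metric == "braycurtis":
        return jnp.abs(diff).sum(-1) / (jnp.abs(a[:, None, :] + b[None, :, :]).sum(-1) + 1e-12)
    raise ValueError(f"unknown metric: {metric}")
```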
Positional encodings.For shape classification, we equip the sample-wise \(Proto^{cls}\) with global positional vectors to preserve the spatial distributions of points. In Table 7, we explore the best way to obtain such vectors with respect to the encoding functions and pooling operations. We evaluate three baseline models for shape classification on ModelNet40 [41]: PointNet [26], PointNet++ [27] and PCT [15]. As reported, the 'Sin/cos' encoding function has the advantage, bringing a favorable performance boost over the 'SN-Adapter without PE'.
3D Object DetectionWe insert our SN-Adapter into the trained object detectors before 3D NMS, and summarize the object-wise prototypes with positional encodings. We here explore the effectiveness of both insert position and positional encodings. In Table 8, we select 3DETR-m [22] as
\begin{table}
\begin{tabular}{l c c c} \hline \hline Metric & PointNet & DGCNN & CurveNet \\ \hline Manhattan & 90.36 & 92.63 & 94.00 \\ Chebyshev & 88.29 & 92.26 & 93.56 \\ Hamming & 88.37 & 92.22 & 93.72 \\ Canberra & 90.07 & 91.82 & 93.92 \\ Braycurtis & 90.24 & 92.46 & 94.04 \\ Euclidean & **90.68** & **92.99** & **94.25** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Different distance metrics for SN-Adapter on ModelNet40 [41] dataset with overall accuracy (OA) (%).**
\begin{table}
\begin{tabular}{c c c c c} \hline \hline PE & Pooling & PointNet & PointNet++ & PCT \\ \hline - & - & 90.11 & 93.19 & 93.52 \\ Fourier & Avg. & 89.99 & 93.11 & 93.52 \\ Fourier & Max. & 89.95 & 93.23 & 93.48 \\ Sin/cos & Avg. & 90.03 & **93.48** & **93.56** \\ Sin/cos & Max. & **90.68** & 93.15 & 93.48 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Different positional encodings (PE) and pooling operations** of SN-Adapter on the ModelNet40 [41] dataset with overall accuracy (OA) (%). 'Fourier' and 'Sin/cos' denote Fourier and trigonometric encoding functions [22], respectively. The first row denotes SN-Adapter without any positional encodings.
Figure 4: Ablation study of **interpolation ratio \(\gamma\)**.
Figure 5: Ablation study of **the number of nearest neighbors, \(k\)**.
our baseline on the ScanNetV2 [7] dataset. As shown, if inserted after 3D NMS, SN-Adapter brings no noteworthy boost, since the 3D boxes remaining after NMS are already the most confident ones for the detector. Also, blending in the positional encodings improves the performance of SN-Adapter by introducing more positional knowledge into the prototypes.
Extra Costs of SN-Adapter.Besides the improvements in scores, we examine whether our SN-Adapter incurs excessive extra time and memory costs over the baseline models. We utilize a single RTX 3090 GPU with batch size 64 for testing and select two baseline models: DGCNN [40] for shape classification on ModelNet40 [41] and CurveNet [43] for part segmentation on ShapeNetPart [46]. As shown in Table 9, our non-parametric SN-Adapter achieves a superior performance-cost trade-off, enhancing already trained networks without re-training.
### Visualization
In Figure 6, we visualize the results of CurveNet [43] with and without our SN-Adapter for part segmentation on the ShapeNetPart [46] dataset. As shown, our SN-Adapter mainly improves the segmentation of points located in the connection areas between different object parts. Such points normally carry the semantic knowledge of both adjoining parts and would confuse the learned 3D classifier of deep neural networks. In contrast, our SN-Adapter can alleviate this issue by retrieving from the prototypes, obtaining better part-wise discrimination.
## 6 Conclusion
We propose Spatial-Neighbor Adapter (SN-Adapter), a plug-and-play enhancement module for existing 3D networks without extra parameters or re-training. From the pre-constructed prototypes, SN-Adapter leverages \(k\) nearest neighbors to retrieve spatial knowledge and effectively boosts the 3D networks by providing complementary characteristics. **Limitations.** Although our SN-Adapter generalizes to various tasks, _e.g._, shape classification, part segmentation, and 3D object detection, the performance enhancement for part segmentation is relatively lower than for the others. Our future work will focus on designing more advanced part-wise prototypes for better segmentation results.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & AP\({}_{25}\) (\%) & AR\({}_{25}\) (\%) \\ \hline
3DETR-m & 64.60 & 77.22 \\
3DETR-m + SN-Adapter & **65.16** & **84.56** \\ After 3D NMS & 64.62 & 78.19 \\ Without PE & 65.02 & 83.48 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Ablation study of **SN-Adapter for 3D object detection** on ScanNetV2 [7]. For the last two rows, we respectively insert SN-Adapter after 3D NMS and discard the positional encodings.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Score (\%) & Latency & Memory \\ \hline DGCNN & 92.18 & 0.022s & 9.74 GiB \\ + SN-Adapter & **92.99** & 0.046s & 10.06 GiB \\ CurveNet & 86.58 & 0.607s & 10.93 GiB \\ + SN-Adapter & **86.69** & 0.834s & 11.50 GiB \\ \hline \hline \end{tabular}
\end{table}
Table 9: **The extra costs of SN-Adapter in time and memory.** We test on a single RTX 3090 GPU with batch size 64 and report OA for DGCNN and mIoU\({}_{I}\) for CurveNet.
Figure 6: **Visualization of part segmentation without (a) and with (b) our SN-Adapter** on ShapeNetpart [46] dataset. We select CurveNet [43] as the baseline model and highlight the differences by red circles (Zoom in for a better view). |
2308.10892 | Bayesian polynomial neural networks and polynomial neural ordinary
differential equations | Symbolic regression with polynomial neural networks and polynomial neural
ordinary differential equations (ODEs) are two recent and powerful approaches
for equation recovery of many science and engineering problems. However, these
methods provide point estimates for the model parameters and are currently
unable to accommodate noisy data. We address this challenge by developing and
validating the following Bayesian inference methods: the Laplace approximation,
Markov Chain Monte Carlo (MCMC) sampling methods, and variational inference. We
have found the Laplace approximation to be the best method for this class of
problems. Our work can be easily extended to the broader class of symbolic
neural networks to which the polynomial neural network belongs. | Colby Fronk, Jaewoong Yun, Prashant Singh, Linda Petzold | 2023-08-17T05:42:29Z | http://arxiv.org/abs/2308.10892v2 | # Bayesian polynomial neural networks and polynomial neural ordinary differential equations
###### Abstract
Symbolic regression with polynomial neural networks and polynomial neural ordinary differential equations (ODEs) are two recent and powerful approaches for equation recovery of many science and engineering problems. However, these methods provide point estimates for the model parameters and are currently unable to accommodate noisy data. We address this challenge by developing and validating the following Bayesian inference methods: the Laplace approximation, Markov Chain Monte Carlo (MCMC) sampling methods, and variational inference. We have found the Laplace approximation to be the best method for this class of problems. Our work can be easily extended to the broader class of symbolic neural networks to which the polynomial neural network belongs.
## I Introduction
The development of a mathematical model is critical to understanding complex chemical, biological, and mechanical processes. For example, ordinary differential equation (ODE) models are used in the field of epidemiology to describe the spread of diseases such as flu, measles, and COVID-19, and in the medical field to describe the population dynamics of CD4 T-cells in the human body during an HIV infection. Developing a mathematical model with sufficient detail is important because it can be used to identify potential methods of intervention (such as a drug) for an undesired outcome (such as the propagation of a disease). Scientists devote years to the model development cycle, which is the process of finding a model that describes a process, using data to fit parameters to the model, analyzing uncertainties in the fitted parameters, and performing additional experiments to refine and validate the model. Despite this cost, mechanistic models are powerful due to their ability to directly explain the system with known first principles such as the interaction of forces, conservation of energy in the system (thermodynamics and heat transfer), and conservation of mass (transport processes). Based on the underlying assumptions of the model, scientists know where the model can and cannot be applied to make predictions about what will happen under certain scenarios. For these reasons, mechanistic models are preferred by scientists and engineers. However, since these models entail a long development time,
we need to develop new tools to accelerate and aid the model development cycle.
A relatively recent development in the system identification field is Sparse Identification of Nonlinear Dynamics (SINDy) [1; 2; 3], which determines the terms of an ODE model by linearly regressing time derivatives, estimated via numerical differentiation, against a library of candidate terms that the modeler believes could appear in the system. SINDy has been shown to be very successful at recovering ODE equations in various fields, including fluid dynamics [4], plasma physics [5], biological chemical reaction networks [6; 7], and nonlinear optical communication [8]. Like any method, SINDy is not perfect and has its flaws. For example, it has been shown that SINDy requires its training data to be observed at very closely spaced intervals of time [9].
The internet of things [10; 11] has led to an exponential growth in the amount of data being generated and stored. We have more data than can be effectively processed. For example, the emergence of robots that can speed up small-scale lab experiments in chemistry and biology [12; 13] has led to a substantially larger amount of more accurate experimental data. In the earth sciences, the growing number of satellites and in situ earth observation equipment stationed around the world [14] has led to a significant amount of data that must be processed and understood. The emergence of the GPU, along with more powerful CPUs, has allowed data-driven models such as deep learning [15] to emerge as a viable way to process and understand large amounts of data quickly.
Neural ordinary differential equations [16; 17; 18; 19; 20; 21; 22; 23; 24; 25] (ODEs) are a recent deep learning approach to data-driven modeling of time-series data and dynamical systems. In Neural ODEs (NODEs), a neural network learns the right hand side of a system of ODEs. The neural ODE is integrated forward in time from an initial condition to make a prediction. In contrast to SINDy, neural ODEs have less stringent requirements on the sampling rate, number of observed data points, and can handle irregularly spaced data points [9]. A cousin of the neural ODE is the physics-informed neural network [26; 27; 28; 29; 30; 31; 32] (PINN), which attempts to accomplish the same thing but with a different approach to loss functions.
Neural differential equations and physics-informed neural networks are two powerful tools because a large majority of science and engineering models are described in terms of differential equations. However, these tools suffer from the same major problem as the entire family of deep learning tools: they are black-box models that are not interpretable and do not generalize well to regimes outside of the conditions they were trained on. This is an issue for scientists and engineers who need reliable models.
In response to the need for interpretability and mechanistic models, symbolic neural networks have emerged. There has been a recent explosion in the introduction of various symbolic neural network architectures [9; 33; 34; 35; 36; 37; 38; 39; 40], which essentially embed mathematical terms within the architecture. Most of these architectures can be combined with neural differential equation or physics-informed neural network frameworks to recover interpretable symbolic equations [9] that the scientist can immediately use. This is referred to as symbolic regression with neural networks.
Most of these symbolic neural network approaches have been demonstrated on noiseless data only; however, real data is almost always noisy. Additionally, the scientist using the tool often requires uncertainty estimates for the inferred model parameters; however, most of these symbolic neural network approaches recover only point estimates for the model's parameters. Bayesian inference is one approach to handle noisy data for symbolic neural networks and symbolic neural ODEs. There has been a substantial amount of work on Bayesian neural networks [41] and some work on Bayesian neural ODEs [18; 42]. However, there is a lack of approaches attempting to find the optimal Bayesian inference method for symbolic neural networks and symbolic neural ODEs. In our work, we explore various Bayesian inference methods and provide clarity to which Bayesian methods are best suited for this class of problems. We evaluate the Laplace approximation, Markov Chain Monte Carlo (MCMC) sampling methods, and variational inference on our previously developed approach for symbolic regression with polynomial neural networks [9; 33] and polynomial neural ordinary differential equations [9]. Our code can easily be extended to the various other symbolic neural network architectures.
## II Methods
### Neural ODEs
Neural Ordinary Differential Equations [16] are neural networks that learn an approximation to time-series data, \(y(t)\), in the form of an ODE system. In many fields of science, the ODE system for which we would like to learn an approximation has the form
\[\frac{dy(t)}{dt}=f\left(t,y(t),\theta\right), \tag{1}\]
where \(t\) is time, \(y(t)\) is the vector of state variables, \(\theta\) is the vector of parameters, and \(f\) is the ODE model. Finding the exact system of equations for \(f\) is a very difficult and time-consuming task. With the help of the universal approximation theorem [43], a neural network (\(NN\)) is used to approximate the model \(f\),
\[\frac{dy(t)}{dt}=f\approx NN\left(t,y(t),\theta\right). \tag{2}\]
Neural ODEs can be treated like standard ODEs. Predictions for the time series data are obtained by integrating the neural ODE from an initial condition with a discretization scheme [44; 45; 46], in exactly the same way as for a standard ODE, as sketched below.
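As an illustration, the following is a minimal JAX sketch of this forward pass: a small MLP stands in for \(NN(t,y(t),\theta)\) (a polynomial network, as described below, would replace it), and a fixed-step fourth-order Runge-Kutta rollout produces the predicted trajectory. The function names and the tanh activations are our own illustrative choices.

```python
import jax
import jax.numpy as jnp

def nn_rhs(params, t, y):
    # Small MLP standing in for NN(t, y; theta).
    h = jnp.concatenate([jnp.atleast_1d(t), y])
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return W @ h + b

def rk4_step(params, t, y, dt):
    k1 = nn_rhs(params, t, y)
    k2 = nn_rhs(params, t + dt / 2, y + dt * k1 / 2)
    k3 = nn_rhs(params, t + dt / 2, y + dt * k2 / 2)
    k4 = nn_rhs(params, t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(params, y0, ts):
    # Differentiable rollout; gradients flow through every step via lax.scan.
    def step(y, t_pair):
        y_next = rk4_step(params, t_pair[0], y, t_pair[1] - t_pair[0])
        return y_next, y_next
    _, traj = jax.lax.scan(step, y0, jnp.stack([ts[:-1], ts[1:]], axis=1))
    return jnp.concatenate([y0[None], traj], axis=0)
```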
### Learning Missing Terms from an ODE Model with Neural ODEs
When one doesn't know anything about the system's underlying equations, neural ODEs can learn the entire model:
\[\frac{dy(t)}{dt}=NN\left(t,y(t),\theta\right). \tag{3}\]
Often, parts of the model are known, \(f_{known}\), but the modeler doesn't know all of the mechanisms and terms that describe the entire model. In this case, we can have the neural ODE learn the missing terms:
\[\frac{dy(t)}{dt}=f_{known}\left(t,y(t),\theta\right)+NN\left(t,y(t),\theta \right). \tag{4}\]
Learning the missing terms does not require significant special treatment, apart from including the known terms in the training process, as the short sketch below illustrates.
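A minimal sketch of Eq. (4), reusing the `nn_rhs` function from the previous sketch; the specific form of `f_known` here is purely illustrative.

```python
def f_known(t, y):
    # Known part of the model -- an illustrative linear decay term.
    return -0.5 * y

def hybrid_rhs(params, t, y):
    # Eq. (4): known mechanisms plus a neural correction for the missing terms.
    return f_known(t, y) + nn_rhs(params, t, y)
```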
### Polynomial Neural ODEs
Systems in numerous fields are expressed as differential equations whose right-hand-side functions \(f\) are polynomials. Examples include gene regulatory networks [47] and cell signaling networks [48] in systems biology, chemical kinetics [49], and population models in ecology [50] and epidemiology [51]. Polynomial neural ODEs are useful for this class of inverse problems, in which it is known a priori that the system is described by polynomials.
Polynomial neural networks [33; 52] are neural network architectures in which the output is a polynomial transformation of the input layer. Polynomial neural networks belong to the larger class of symbolic neural network architectures.
Polynomial neural Ordinary Differential Equations [9] are polynomial neural networks embedded in the neural ODE framework [16]. Since the output of a polynomial neural ODE is a direct mapping of the input in terms of tensor and Hadamard products without nonlinear activation functions, symbolic math can be used to obtain a symbolic form of the neural network. Due to the presence of nonlinear activation functions in conventional neural networks, a symbolic equation cannot be directly obtained from conventional neural networks and conventional neural ODEs.
### Obtaining Posterior Distributions for Weights and Biases
We will explore and compare three different approaches for estimating the posterior distributions of weights and biases of the polynomial neural network. The approaches include the Laplace approximation, Markov Chain Monte Carlo (MCMC) sampling, and variational inference. The following text outlines each of them.
#### ii.4.1 Approach #1: Laplace Approximation
The Laplace approximation [53] provides Gaussian approximations of the individual posteriors. It is obtained by taking the second-order Taylor expansion around the maximum a posteriori (MAP) estimate found by maximum likelihood estimation (MLE). For the polynomial neural network, approximating the log posterior over the parameters (\(\theta\)), given some data (\(D\)), around a MAP estimate (\(\theta^{*}\)) yields a normal distribution centered around \(\theta^{*}\) with variance equal to the inverse of the Fisher information matrix (\(\mathcal{I}_{\theta}\)):
\[\theta\sim\mathcal{N}(\theta^{*},\mathcal{I}_{\theta}^{-1}). \tag{5}\]
Under certain regularity conditions, the Fisher information matrix can be calculated via either the Hessian
\[\mathcal{I}_{\theta_{i,j}}=-\operatorname{E}\left[\frac{\partial^{2}}{ \partial\theta_{i}\partial\theta_{j}}\log f(D,\theta)\right] \tag{6}\]
or the gradient
\[\mathcal{I}_{\theta_{i,j}}=\operatorname{E}\left[\left(\frac{\partial}{ \partial\theta_{i}}\log f(D,\theta)\right)\left(\frac{\partial}{\partial \theta_{j}}\log f(D,\theta)\right)\right] \tag{7}\]
of the log-joint density function [54]. Both the gradient and Hessian are computed with the JAX [55; 56] automatic differentiation tool. As expected, we were able to obtain the same results for both methods. However, we found the calculation of the Hessian to be computationally expensive and it can only be practical for polynomial neural networks with a small number of parameters. For this reason, we used the gradient to calculate the Fisher information.
The log-joint density function (\(\log f(D,\theta)\)) is defined by the log-likelihood (\(\log f(D|\theta)\)) and log-prior (\(\log p_{r}(\theta)\)):
\[\log f(D,\theta)=\log f(D|\theta)+\log p_{r}(\theta). \tag{8}\]
When the observed noise (\(y_{pred}-y_{known}\)) is normally distributed with variance \(\beta^{2}\), the log-likelihood is given by:
\[\log f(D|\theta)=-\frac{1}{2\beta^{2}}\sum_{i=1}^{n}(y_{pred}-y_{known})^{2}, \tag{9}\]
where \(y_{pred}\) is the predicted value by the polynomial neural network or polynomial neural ODE and \(y_{known}\) is the observed data. In the case of Gaussian priors on the weights and biases with covariance \(\alpha^{2}\), the log-prior is given by:
\[\log p_{r}(\theta)=-\frac{1}{2}\theta^{T}\alpha^{-2}\theta. \tag{10}\]
We assume that we do not know \(\beta^{2}\). We calculate it via the sample variance of \(y_{pred}-y_{known}\) at the MAP point estimate
found by MLE, where the constant factor \(\frac{1}{2\beta^{2}}\) is dropped during optimization. Since the MLE is an unbiased estimator, \(\beta^{2}\) can also be estimated by the mean squared error (MSE) loss [57].
The workflow for training Bayesian polynomial neural ODEs with the Laplace approximation is very similar to that for polynomial neural ODEs. Prior to the training process, the architecture is defined and the parameters in the network are initialized to values that yield initial coefficient values of the simplified polynomial in the range of \(10^{-5}\) to \(10^{-10}\).
The goal of the training process is to fit the neural ODE to the observed data for the state variables, \(y_{known}\), as a function of time. The neural ODE is integrated with a differentiable ODE solver to obtain predictions for \(y_{known}\), which we call \(y_{pred}\). We used gradient descent [58; 59] with Adam [60] to minimize the negative log-likelihood, with the constant factor \(\frac{1}{2\beta^{2}}\) dropped.
For the training process, we batch our observed data into \(N_{t}\) batch trajectories consisting of a certain number of consecutive data points in the time series (\(y_{known}\)). For each iteration (epoch) of gradient descent, we simultaneously solve \(N_{t}\) initial value problems corresponding to each of the batch trajectories, to obtain the predictions (\(y_{pred}\)).
In theory, one can use any differentiable discretization scheme to integrate the neural ODE forwards in time. The simpler the integration scheme, the smaller the memory and compute costs. One can also obtain gradients for the parameters through the popular continuous-time sensitivity adjoint method [16]. Direct backpropagation through complicated integration schemes has high memory costs and numerical stability issues; therefore, the continuous-time sensitivity adjoint method is often used in these cases. However, the adjoint method is very slow: it takes a few hours to train neural ODEs with the adjoint method, whereas it takes only a few minutes with direct backpropagation through an explicit discretization scheme. It is also important to point out that neither of these two approaches is perfect, and more work needs to be done on developing differentiable ODE solvers for neural ODEs. For example, neither direct backpropagation through an explicit scheme nor the continuous-time sensitivity adjoint method can obtain gradients for stiff neural ODEs [61].
Since the examples we present are for non-stiff ODEs, we do not require the adjoint or any advanced integration methods, and are able to use the fourth-order explicit Runge-Kutta-Fehlberg method [62] to solve the neural ODE. The advantage of using this method is efficient direct backpropagation through the explicit ODE scheme [9], which is computationally faster than the continuous-time sensitivity adjoint method. After the training process has converged, we have obtained the MAP estimate (\(\theta^{*}\)) via MLE.
After obtaining \(\theta^{*}\), we can find the variance of the posterior by inverting the Fisher information matrix. For overparameterized neural network models, the Fisher information matrix is often singular and cannot be inverted. In this case, an approximation to the inverse can be calculated either via the Moore-Penrose inverse [63] or by dropping the off-diagonal entries from the matrix [64]. We have had success with both of these methods for approximating the inverse of the Fisher information. For the case in which the matrix is invertible, the approximations give results similar to the direct matrix inverse. All of our results calculate the inverse using the Moore-Penrose inverse [63]. We have prior experience using the Laplace approximation to obtain uncertainties for the output of a neural network, and based on that experience, the Moore-Penrose inverse is only practical for neural networks with fewer than 50,000 parameters, since pseudo-inverting the singular Fisher information matrix becomes too expensive beyond that.
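Putting the pieces together, the following is a minimal JAX sketch of this Laplace step: the empirical Fisher information is assembled from per-observation gradients of the log-joint density (Eq. 7) at \(\theta^{*}\) and then pseudo-inverted. It assumes the parameters have been flattened into a single vector (e.g., with `jax.flatten_util.ravel_pytree`) and that `log_joint(theta, datum)` evaluates the per-observation log-joint; both names are ours.

```python
import jax
import jax.numpy as jnp

def laplace_covariance(log_joint, theta_map, data):
    """Gaussian posterior covariance from the empirical Fisher at the MAP."""
    # Per-observation gradients of the log-joint, shape (N, P).
    grads = jax.vmap(jax.grad(log_joint), in_axes=(None, 0))(theta_map, data)
    # Observed information for the full dataset: sum of outer products.
    fisher = grads.T @ grads
    # Overparameterized networks often make the Fisher matrix singular,
    # so we use the Moore-Penrose pseudo-inverse as the covariance.
    return jnp.linalg.pinv(fisher)
```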
#### ii.2.2 Approach #2: Markov chain Monte Carlo
This approach for obtaining posterior distributions for the weights and biases of the polynomial neural network draws from Markov chain Monte Carlo [65; 66; 67] (MCMC) methods for training Bayesian neural networks [68] (BNNs). The two MCMC sampling methods that we explored were Hamiltonian Monte Carlo (HMC) and The No-U-Turn-Sampler (NUTS).
Hamiltonian Monte Carlo [69; 70] (HMC) is an MCMC method that uses derivatives of the density function to generate efficient transitions. HMC starts with an initial set of parameter values. For a set number of iterations, a momentum vector is sampled and integrated following Hamiltonian dynamics [71] with the leapfrog [44] integrator, using a fixed discretization step size (\(\epsilon\)) and number of steps (L). Since the leapfrog integrator incurs numerical error [44], it is corrected with the Metropolis-Hastings [72; 73; 74; 75] acceptance algorithm, which decides whether to accept or reject the new state proposed by the Hamiltonian dynamics.
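For concreteness, a minimal pure-JAX sketch of one vanilla HMC transition is given below; sampling libraries such as BlackJAX wrap this same machinery. The step size, number of leapfrog steps, and function names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def hmc_step(rng, theta, log_joint, eps=1e-3, L=20):
    """One vanilla HMC transition targeting exp(log_joint)."""
    key_mom, key_acc = jax.random.split(rng)
    grad = jax.grad(log_joint)
    p0 = jax.random.normal(key_mom, theta.shape)     # sample momentum

    # Leapfrog integration of Hamiltonian dynamics.
    q, p = theta, p0 + 0.5 * eps * grad(theta)
    for _ in range(L - 1):
        q = q + eps * p
        p = p + eps * grad(q)
    q = q + eps * p
    p = p + 0.5 * eps * grad(q)

    # Metropolis-Hastings correction for the leapfrog discretization error.
    log_alpha = (log_joint(q) - 0.5 * p @ p) - (log_joint(theta) - 0.5 * p0 @ p0)
    accept = jnp.log(jax.random.uniform(key_acc)) < log_alpha
    return jnp.where(accept, q, theta)
```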
The No-U-Turn-Sampler [76] (NUTS) is an extension of HMC that automatically determines when the sampler should stop an iteration. The algorithm automatically chooses the discretization time and number of steps, which avoids the need for the user to specify these additional parameters. However, we have found this algorithm to be computationally more expensive than vanilla HMC for this class of problems.
The training process is slightly different from that for the Laplace approximation. We still batched our observed data into \(N_{t}\) batch trajectories and simultaneously solved \(N_{t}\) initial value problems with the same fourth-order explicit Runge-Kutta-Fehlberg method. We used BlackJAX [77]'s sampling algorithms for the MCMC inference. For both methods, we used the log-joint density defined in Equation 8.
#### ii.2.3 Approach #3: Variational Inference
In variational inference [78; 79; 80], we learn an approximation \(q(\theta)\) to our posterior \(p(\theta|D)\). Our approximation is assumed to belong to a certain family of probability density functions and the parameters of that family are optimized by minimizing the Kullback-Leibler (KL) divergence:
\[\text{KL}(q(\theta)\,||\,p(\theta|D))=\mathbb{E}_{q(\theta)}\left[\log\frac{q (\theta)}{p(\theta|D)}\right]. \tag{11}\]
We don't know the analytical form of the posterior so we cannot minimize the KL divergence directly, but we can use a trick called the Evidence Lower Bound (ELBO) [78, 79, 80]:
\[\text{ELBO}=\mathbb{E}_{q(\theta)}[\log p(D|\theta)]-\text{KL}(q(\theta)\,||\,p_ {r}(\theta)). \tag{12}\]
Maximizing the ELBO is mathematically equivalent to minimizing the KL divergence. The ELBO only contains the prior \(p_{r}(\theta)\) and likelihood \(p(D|\theta)\), which we can numerically calculate.
We wrote our own custom JAX code for variational inference. The neural ODEs are numerically integrated in exactly the same way as was done for the Laplace approximation. We used a multivariate Gaussian distribution for the approximation \(q(\theta)\).
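To illustrate the objective, the following is a minimal JAX sketch of a Monte Carlo ELBO estimate (Eq. 12) under the reparameterization trick. For brevity it uses a diagonal (mean-field) Gaussian rather than the full multivariate Gaussian used in the paper, and the prior scale, sample count, and function names are our own assumptions.

```python
import jax
import jax.numpy as jnp

def elbo(rng, mu, log_sigma, log_likelihood, prior_sigma=100.0, n_mc=8):
    """Monte Carlo ELBO for a diagonal Gaussian q(theta) = N(mu, sigma^2)."""
    sigma = jnp.exp(log_sigma)
    eps = jax.random.normal(rng, (n_mc,) + mu.shape)
    thetas = mu + sigma * eps                         # reparameterization trick
    # E_q[log p(D | theta)], estimated with n_mc samples.
    exp_loglik = jax.vmap(log_likelihood)(thetas).mean()
    # Closed-form KL( N(mu, sigma^2) || N(0, prior_sigma^2) ), summed over params.
    kl = jnp.sum(jnp.log(prior_sigma / sigma)
                 + (sigma ** 2 + mu ** 2) / (2.0 * prior_sigma ** 2) - 0.5)
    return exp_loglik - kl   # maximize this (e.g., by gradient ascent with Adam)
```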
### Obtaining Posterior Distributions for Polynomial Coefficients
The polynomial neural network is a factorized form of a polynomial. To obtain a simplified form of the polynomial we must expand the equation and combine like terms. For the case where the neural network parameters are scalar point estimates, we have already done this [9] with the use of SymPy [81]. When our parameters are Bayesian probability distributions, we must use the rules for the product and sum of probability distributions. These rules depend on the type of probability distributions that are algebraically combined, which makes it challenging to compute for even a small number of parameters (weights and biases). We explored approximating the weights and biases as independent univariate Gaussian probability density functions (PDFs), for which there are known rules [82] for the mean and variance of the product and sum of univariate Gaussian PDFs. However, this approach did not work in all cases since the weights and biases are dependent on each other.
To avoid multiplying out probability density functions of the weights and biases to obtain posterior distributions for the polynomial coefficients, we used Monte Carlo sampling. We drew random samples from the posterior distributions \(w\sim P(w|D)\) and \(b\sim P(b|D)\) for the weights (\(w\)) and biases (\(b\)) given the data (\(D\)). For each sample, we used the approach of expanding the polynomial neural network for scalar point estimates [9]. After doing this for enough samples, we have an estimate of the posterior distribution \(c\sim P(c|D)\) of the polynomial coefficients (\(c\)), as sketched below.
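As an illustration of this sampling-and-expansion loop, the sketch below uses a toy factorized network \(f(x)=w_{3}(w_{1}x+b_{1})(w_{2}x+b_{2})\) as a stand-in for the full polynomial-network expansion of Ref. [9]; the parameter ordering and the Gaussian posterior over the flattened parameters are our illustrative assumptions.

```python
import numpy as np
import sympy as sp

x = sp.symbols("x")
rng = np.random.default_rng(0)

def coefficient_samples(theta_mean, theta_cov, n_samples=1000):
    """Monte Carlo posteriors for the polynomial coefficients c0, c1, c2."""
    samples = {"c0": [], "c1": [], "c2": []}
    draws = rng.multivariate_normal(theta_mean, theta_cov, size=n_samples)
    for w1, b1, w2, b2, w3 in draws:            # one posterior draw of theta
        expr = sp.expand(w3 * (w1 * x + b1) * (w2 * x + b2))
        for k, name in enumerate(["c0", "c1", "c2"]):
            samples[name].append(float(expr.coeff(x, k)))
    return samples
```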
### Strategies for Handling Large Amounts of Noise
Neural ODEs require initial conditions to generate predicted trajectories (\(y_{pred}\)) during the training process. When there is a large amount of observed noise in the training data, the known data points (\(y_{known}\)) cannot be used as initial conditions. In that case, we must use a time-series filtering or smoothing algorithm to find good initial conditions for the neural ODE training process. Example filtering algorithms include the moving average [83] (MA), exponential moving average [84] (EMA), and Kalman filters [85]. Example smoothing algorithms include smoothing splines [86], local regression [87], kernel smoothers [88], the Butterworth filter [89], and exponential smoothing [90]. We applied all of these algorithms to noisy ODE time series data and found Gaussian process regression (GPR) [91] to be the most accurate approach. For brevity, we have chosen not to outline in detail the pros and cons of each algorithm. However, it is important to note that the optimal smoothing algorithm depends on the data and the underlying model that describes it.
Gaussian process regression assumes a Gaussian process prior, which is specified with mean function \(m(x)\) and covariance function or kernel \(k(x,x^{\prime})\):
\[f(x)\sim GP\left(m(x),k(x,x^{\prime})\right). \tag{13}\]
The rational quadratic, Matern, Exp-Sine-Squared, and radial basis function kernels [92, 93, 94] were found to perform best for our test problems and settings. We used the scikit-learn [95] Python library to perform our pre-processing with GPR. The hyperparameters of the kernels were optimized using MLE, as in the sketch below.
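For reference, a minimal scikit-learn sketch of this pre-processing step is shown below, using a periodic kernel composition of the kind described later in Eq. (18); the initial hyperparameter values and the function name are our own illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (ConstantKernel, ExpSineSquared,
                                              WhiteKernel)

def smooth_state(t, y_noisy, t_query):
    """Fit GPR to one noisy state variable and return smoothed values."""
    kernel = (ConstantKernel(1.0) * ExpSineSquared(length_scale=1.0, periodicity=5.0)
              + WhiteKernel(noise_level=1.0))
    # Kernel hyperparameters are optimized internally by MLE.
    gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
    gpr.fit(t.reshape(-1, 1), y_noisy)
    return gpr.predict(t_query.reshape(-1, 1))
```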
## III Results
We will start by evaluating the methodology outlined on univariate cubic regression with a polynomial neural network. Starting with this model demonstrates that we can recover accurate Bayesian uncertainties on a standard polynomial without any ODEs. Since this problem can be posed as a Bayesian linear regression model with a closed form solution, we can directly test the accuracy of the methods and make sure they work prior to moving on to ODEs.
We then move on to the following ODE models: the Lotka-Volterra deterministic oscillator, the damped oscillator, and the Lorenz attractor. These models are common toy problems for dynamical systems and neural ODEs. The Lotka-Volterra model is fairly easy to identify. The damped oscillator is more difficult: in our previous work, we have shown that the damping effect makes the vector field hard to learn. Since the Lorenz attractor is chaotic and has high-frequency oscillations, it is the most difficult model to learn. Since it is common in the sciences to have a partially incomplete model, we also demonstrate learning the missing terms from a partially known ODE model. For simplicity, we have chosen to use the Lotka-Volterra model for learning the missing dynamics.
For each of the models outlined, we recover Bayesian posterior distributions for the model parameters and compare them to the known values. For the univariate cubic regression example, we plot the prediction along with confidence intervals and confirm that the confidence intervals capture the data well. For the ODE examples, we integrate the Bayesian ODE models from the known initial condition and compare it to the true trajectory. The criteria for choosing the best Bayesian
inference method are: ease of use, computational cost, and accuracy.
### Univariate Cubic Regression
Prior to studying dynamical systems with neural ODEs, we tested our Bayesian polynomial neural network inference method on basic polynomials. For the test case, we used the following third order univariate function:
\[f(x)=1+x+2x^{2}+4x^{3}. \tag{14}\]
The training data for the \(x\)-values consisted of 200 uniformly spaced data points in the range -1.25 to 1.25. The values of \(f(x)\) corresponding to the values of \(x\) were obtained by directly substituting the \(x\)-values into the function. We then added Gaussian noise with \(\mu=0\) and \(\sigma^{2}=9\) to the training data, to demonstrate our methodology on highly noisy data. For reproducibility and comparison purposes, we used a random seed of 989 for all of the results we will show.
The architecture from Ref. [9] was used for the Laplace approximation, the No-U-Turn Sampler (NUTS) method, and variational inference. The third order polynomial neural network we used had 1x10x10x10x10x1 neurons in each layer (180 total parameters). We experimented with changing the number of neurons in each hidden layer up to 200 and the results were similar. The extra parameters do not affect the posteriors significantly. For brevity, we do not show these results. We used the Python libraries JAX[55, 56] along with Flax[96] for our neural networks.
For MCMC with NUTS, we used the Python library BlackJAX [77] to perform sampling. Since we had no prior knowledge of the weights and biases of the polynomial neural network but knew they were not large values, we used a noninformative Gaussian prior with zero mean and standard deviation of 100. The warmup was set to 500 steps and the number of steps taken following warmup was 500. It took approximately 10 minutes for the code to run on a basic GPU. The code can also execute on a CPU within practical time frames (a few extra minutes over the GPU execution time). Since our neural network has a relatively small number of parameters, we plotted the kernel density estimates for the posterior distributions of the weights and biases of the polynomial neural network prior to expanding out the terms with Monte Carlo (see Figure 1). Most of the posterior distributions are close to unimodal and symmetric, which initially suggests that the Laplace approximation and variational inference with a multivariate Gaussian should work well for estimating the posteriors. The Laplace approximation takes significantly less time than MCMC (1 minute vs. 30 minutes). Our results from this section and the following one provide enough evidence to prefer the Laplace approximation.
Since this regression problem can also be posed as a Bayesian linear regression problem with a closed-form solution, we also solved it via simple Bayesian linear regression. For Bayesian linear regression, we write the model as \(y=XB\). The posterior distribution is defined by:
\[B\sim\mathcal{N}\left((X^{T}X)^{-1}X^{T}y,\;\beta^{2}(X^{T}X)^{-1}\right), \tag{15}\]
where the noise of the data (\(\beta^{2}\)) can be approximated by the sample variance of \((XB-y_{known})\). This approach did not use any neural networks as its intended purpose was solely method validation.
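As a sanity check, the closed-form solution of Eq. (15) can be computed in a few lines; the sketch below, with an illustrative function name, builds the cubic design matrix and evaluates the posterior mean and covariance.

```python
import numpy as np

def bayesian_linear_regression(x, y, order=3):
    """Closed-form posterior (Eq. 15) for polynomial regression y = X B."""
    X = np.vander(x, N=order + 1, increasing=True)   # columns: 1, x, x^2, x^3
    XtX_inv = np.linalg.inv(X.T @ X)
    B = XtX_inv @ X.T @ y                            # posterior mean
    beta2 = np.var(X @ B - y)                        # noise variance estimate
    return B, beta2 * XtX_inv                        # mean, posterior covariance
```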
In Figure 2, we show the kernel density estimates for the posterior distributions of the coefficients of the polynomial for the Laplace approximation, MCMC with NUTS, variational inference, and Bayesian linear regression. Figure 2 also shows the model predictions corresponding to the posterior distributions found by each of the methods, along with 95% and 99.7% confidence intervals. All of the methods have very similar results. MCMC predicted slightly narrower posteriors than Bayesian linear regression, whereas the Laplace approximation predicted slightly wider posteriors; however, MCMC is the most computationally expensive of the methods and scales the worst as the number of model parameters increases. Variational inference had the narrowest predictions for the posterior distributions, which resulted in a narrower confidence interval for the function evaluation. Variational inference's confidence intervals were the only ones that failed to completely capture the true model curve. Variational inference was also comparatively difficult to train: a notable amount of trial and error was required to guess plausible mean and covariance values to initialize the multivariate Gaussian approximation. Different initial mean and covariance matrices worked for each problem, and there is no hyperparameter optimization that can speed this up. These problems will be addressed in future work. However, variational inference is still computationally cheaper than MCMC.
Figure 1: For the univariate cubic polynomial \(f(x)=1+x+2x^{2}+4x^{3}\), a third order Bayesian polynomial neural network was trained with the No-U-Turn-Sampler (NUTS) algorithm. The kernel density estimates for the posterior distributions of the weights and biases of the polynomial neural network are shown. The panes are sequentially ordered (left-to-right, top-to-bottom) from the first layer to the last layer in the neural network.
Figure 2: For the univariate cubic polynomial \(f(x)=1+x+2x^{2}+4x^{3}\), a third order Bayesian polynomial neural network was trained with a) the Laplace approximation, b) Markov Chain Monte Carlo with the No-U-Turn-Sampler (NUTS) algorithm, and c) variational inference. For comparison, d) Bayesian linear regression was also performed on the training data. The kernel density estimates for the posterior distributions of the polynomial coefficients are shown (left) along with their predictions and confidence intervals (right). For the left column, the true value of the parameters is shown in the legend. Each of the columns shares the same legend.
### Lotka Volterra Deterministic Oscillator
Our first demonstration of Bayesian parameter estimates for polynomial neural ODEs is on the deterministic Lotka-Volterra ODE model [97; 98], which describes predator-prey population dynamics, such as an ecosystem of rabbits and foxes. When written as a set of first order nonlinear ODEs, the model is given by:
\[\frac{dx}{dt} =1.5x-xy, \tag{16}\] \[\frac{dy}{dt} =-3y+xy. \tag{17}\]
We generated our training data by integrating the initial value problem (IVP) with initial conditions \(x=1\) and \(y=1\) at \(N=100\) points uniformly spaced in time between 0 and 10. Since the Lotka-Volterra model is non-stiff, we used SciPy [99] and DOPRI5 [100], an embedded method of order 5(4) in the Runge-Kutta family of ODE solvers. We then generated 10 high-noise trajectories originating from the same initial value by adding zero-centered Gaussian noise with a standard deviation of 2 to the training data, corresponding to a signal-to-noise ratio [101] (SNR or S/N) between 0.125 and 3.5. See Figure 3, ignoring the shaded GPR fit, for the noisy training data. The architecture from Ref. [9] was used with 160 total parameters. As discussed in the methods section, we batched our data into \(N_{t}=89\) trajectories consisting of 12 consecutive data points from the time series. We simultaneously solved these batch trajectories during each epoch using our own JAX-based differentiable ODE solver for the multistep fourth order explicit Runge-Kutta-Fehlberg method [62], which allows us to directly perform backpropagation through the ODE discretization scheme. A sketch of the data generation follows.
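For concreteness, the following sketch generates the clean trajectory and the ten noisy copies with SciPy; the random seed is illustrative (in SciPy, DOPRI5 corresponds to the `RK45` method).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z):
    x, y = z
    return [1.5 * x - x * y, -3.0 * y + x * y]

t_eval = np.linspace(0.0, 10.0, 100)
sol = solve_ivp(lotka_volterra, (0.0, 10.0), [1.0, 1.0],
                method="RK45", t_eval=t_eval)
clean = sol.y.T                                   # shape (100, 2)

rng = np.random.default_rng(0)                    # illustrative seed
# Ten high-noise trajectories from the same clean solution (sigma = 2).
noisy = clean[None, :, :] + rng.normal(0.0, 2.0, size=(10,) + clean.shape)
```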
All of the Bayesian neural ODE approaches that we explored require integrating the neural ODE from starting initial conditions and comparing the prediction to the true data; however, since the data is extremely noisy, we cannot use the observed data points as initial conditions. To generate good initial guesses, we used Gaussian process regression (GPR) on the noisy data prior to the model training process. Since the Lotka-Volterra model is oscillatory, we used the Exp-Sine-Squared kernel [94] (also referred to as the periodic kernel), scaled by a constant kernel, along with the white kernel (\(W_{k}\)):
Figure 3: The fitted Gaussian process regression model trained on the noisy Lotka Volterra Oscillator data was used as initial conditions for the neural ODE’s integration training trajectories.
\[k(x_{i},x_{j})=c^{2}\exp\left(-\frac{2\sin^{2}(\pi d(x_{i},x_{j})/p)}{l^{2}}\right) +W_{k}(\sigma_{GPR}^{2}). \tag{18}\]
where \(c^{2}\) is the constant for the constant kernel, \(d\) is the Euclidean distance function, \(l\) is the length-scale, \(p\) is the periodicity, and \(\sigma_{GPR}^{2}\) is the variance of the Gaussian noise [94]. We used MLE to obtain values for all of these unknown hyperparameters. See Figure 3 for the GPR fit on the observed data.
In Figure 4, we show the kernel density estimates for the posterior distributions of the ODE's parameters for the Laplace approximation, MCMC, and variational inference with a multivariate Gaussian approximation. For comparison, we also show the parameters inferred by vanilla Sequential Monte Carlo Approximate Bayesian Computation [102] (SMC ABC), a standard method for inferring ODE parameters, applied to the ODE without any neural networks. We wrote our own JAX-based ABC method, but we recommend StochSS [103, 104, 105] for those who would like to use an existing toolkit. SMC ABC had the worst performance with respect to the true parameter values: it predicted very wide posterior distributions for some of the parameters, centered far from the true values. The Laplace approximation, Markov Chain Monte Carlo, and variational inference predicted more similar posterior distributions for the ODE parameters. As in the case of the univariate cubic polynomial, variational inference predicted very narrow posterior distributions, while MCMC resulted in very jagged ones.
In Figure 5, we show the predictive performance of the inferred parameters. For the parameters obtained from each method, we integrated the ODE out to a final time 5 times that of the training data's time range. The mean predicted model along with 95% and 99.97% confidence intervals is shown together with the training data and the true ODE model used to generate the training data. MCMC had the worst predictive performance; it predicted the oscillations to damp out over time. Variational inference had reasonable performance with only minor damping of the oscillations over time, but its predicted posteriors were not ideal. The Laplace approximation had the best predictive performance, with only minor damping and a minor phase shift, which further highlights its performance and usability given that it is also the fastest and easiest method to train.
Figure 4: For the Lotka Volterra Oscillator, we show the kernel density estimates for the posterior distributions of the polynomial coefficients obtained with a.) the Laplace Approximation, b.) Markov Chain Monte Carlo, and c.) Variational Inference. For comparison purposes, we also show the case for d.) Approximate Bayesian Computation on a normal ODE. The true value of the coefficients is shown in the legend. The legend is shared for each of the columns.
Figure 5: For the Lotka Volterra Oscillator, we show the predictive performance of a Bayesian polynomial neural ODE trained using a) the Laplace Approximation, b) Markov Chain Monte Carlo, and c) Variational Inference. The solid red and blue dots indicate the training data, solid green lines indicate the true ODE model, dashed lines indicate the predictive mean model, and shaded regions indicate 95% and 99.75% confidence intervals.
### Damped Oscillatory System
Our next example is the deterministic damped oscillatory system. This model is a popular toy model for the field of neural ODEs [16; 106]. Damped oscillations appear in many fields of biology, physics, and engineering [107; 108]. One version of the damped oscillator model is given by:
\[\frac{dx}{dt} =-0.1x^{3}-2y^{3} \tag{19}\] \[\frac{dy}{dt} =2x^{3}-0.1y^{3}. \tag{20}\]
We generated our training data by integrating an initial value problem with initial conditions \((x_{0},y_{0})=(1,1)\) over the interval \(t\in[0,25]\) at 500 points uniformly spaced in time. Since the damped oscillator is also a non-stiff ODE system, we integrated the initial value problem with the same numerical methods as for the Lotka-Volterra model. We then generated 10 high-noise trajectories originating from the same initial value by adding zero-centered Gaussian noise with a standard deviation of 0.6 to the training data. This corresponds to an instantaneous signal-to-noise ratio [101] ranging from 0 to 2.1. See Figure 6, ignoring the shaded GPR fit, for the noisy training data.
The architecture from Ref. [9] was used with 660 total parameters. For the training process, we created batches of trajectories consisting of 13 consecutive data points from the time series. The number of consecutive points to include was determined by trial and error and unfortunately varies from model to model. We simultaneously solved these batch trajectories during each epoch using our own JAX-based differentiable ODE solver for the multistep fourth order explicit Runge-Kutta-Fehlberg method [62], which allows us to directly perform backpropagation through the ODE discretization scheme. We have previously discussed why we chose this approach in the methods section of the paper.
Prior to training our neural ODE, we used a smoothing algorithm to generate good initial values for our batch trajectories. One can use any smoothing/filtering algorithm, but we used Gaussian process regression (GPR). For this model, we had the best results with the use of a rational quadratic kernel [94] scaled by a constant kernel, along with a white kernel (\(W_{k}\)):
Figure 6: The fitted Gaussian process regression model trained on the noisy Damped Oscillator data was used as initial conditions for the neural ODE’s integration training trajectories.
\[k(x_{i},x_{j})=c^{2}\left(1+\frac{d(x_{i},x_{j})^{2}}{2\alpha l^{2}}\right)^{-\alpha}+W_{k}(\sigma_{GPR}^{2}). \tag{21}\]
where \(c^{2}\) is the constant for the constant kernel, \(d\) is the Euclidean distance function, \(l\) is the length-scale, \(\alpha\) is the scale mixture parameter, and \(\sigma_{GPR}^{2}\) is the variance of the Gaussian noise [94]. We used MLE to obtain values for all of these unknown hyperparameters. See Figure 6 for the GPR fit on the observed data; the GPR model fits the noisy data extremely well.
In Figure 7, we show the kernel density estimates for the posterior distributions of the ODE's parameters for a) the Laplace approximation, b) Markov Chain Monte Carlo, and c) variational inference with a multivariate Gaussian approximation. For comparison, we also show the inferred parameters obtained by vanilla Sequential Monte Carlo Approximate Bayesian Computation [102] (SMC ABC), a standard method used for inference of parameters in ODEs, for the ODE without any neural networks.
In Figure 8, we integrated the final Bayesian models out to a final time 5 times that of the training data's time range. The mean predicted model along with 95% and 99.97% confidence intervals is shown along with the training data and true ODE model used to generate the training data.
Generally speaking, we observed the same behavior for each of these methods as we did for the Lotka-Volterra model. Approximate Bayesian computation had the widest posterior distributions. Variational inference had the narrowest posterior distributions, and its confidence intervals for the trajectory prediction were too narrow to capture the true trajectory: the method is overconfident about the inferred parameters. This time, Markov Chain Monte Carlo (MCMC) completely failed to learn a model accurate enough to predict the trajectory of the system beyond \(t=2\). We spent a large amount of time tuning the settings for MCMC on this model, but the method failed every time; the other methods required far less effort to produce working results. Given enough patience, MCMC can yield somewhat accurate results, but the other methods are much easier to use, so we do not recommend MCMC for neural ODEs. The Laplace approximation provided the most accurate parameter estimates as well as predictions for the trajectories of the system. It is also the fastest and easiest method to use. For these reasons, we recommend the Laplace approximation over the other methods.
Figure 7: For the Damped Oscillator, we show the kernel density estimates for the posterior distributions of the polynomial coefficients obtained with a.) the Laplace Approximation, b.) Markov Chain Monte Carlo, and c.) Variational Inference. For comparison purposes, we also show the case for d.) Approximate Bayesian Computation on a normal ODE. The true value of the coefficients is shown in the legend. The legend is shared for each of the columns.
Figure 8: For the Damped Oscillator, we show the predictive performance of a Bayesian polynomial neural ODE trained using a) the Laplace Approximation, b) Markov Chain Monte Carlo, and c) Variational Inference. The solid red and blue dots indicate the training data, solid green lines indicate the true ODE model, dashed lines indicate the predictive mean model, and shaded regions indicate 95% and 99.75% confidence intervals.
### Lorenz Attractor
The Lorenz attractor [109] is an example of a deterministic chaotic system [110, 111] that came from a simplified model for atmospheric convection [112]:
\[\frac{dx}{dt} =\sigma(y-x), \tag{22}\] \[\frac{dy}{dt} =x(r-z)-y,\] (23) \[\frac{dz}{dt} =xy-bz. \tag{24}\]
The equations describe the two-dimensional flow of a fluid with uniform depth between an upper and lower surface, given a temperature gradient. In the equations, \(x\) is proportional to the intensity of convective motion, \(y\) is proportional to the difference in temperature between the rising and falling currents of fluid, and \(z\) is proportional to the amount of nonlinearity within the vertical temperature profile [112, 109]. \(\sigma\) is the Prandtl number, \(r\) is the Rayleigh number, and \(b\) is a geometric factor [112, 109]. Typically, \(\sigma=10\), \(r=28\), and \(b=\frac{8}{3}\); we use these values for our example. Since the discovery of the Lorenz model, it has also been used as a simplified model for various other systems such as chemical reactions [113], lasers [114], electric circuits [115], brushless DC motors [116], thermosyphons [117], and dynamos [117].
Due to the chaotic nature of this system and the high frequency of oscillations, we required more training data for this example than for the previous examples shown. We generated our training data from initial conditions \((x_{0},y_{0},z_{0})=(1,1,1)\) over the time interval \(t\in[0,30]\) for 900 points uniformly spaced in time. We then generated 10 high-noise trajectories originating from the same initial value by adding zero-centered Gaussian noise with a standard deviation of 2 to the training data. The architecture from Ref. [9] was used with 231 total parameters. The training process was exactly the same as in the previous two examples. This time, we batched our data into training trajectories consisting of two adjacent data points. This number was found through trial and error, but we hypothesize that the trajectory length needs to be shorter for this example due to the high-frequency oscillations. For this example, we used the same GPR kernel as was used for the damped oscillator (see Equation 21).
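This data-generation protocol can be reproduced with a few lines of SciPy; the following is our own sketch with hypothetical variable names, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = u
    return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

# 900 points uniformly spaced over t in [0, 30], starting from (1, 1, 1).
t_eval = np.linspace(0.0, 30.0, 900)
sol = solve_ivp(lorenz, (0.0, 30.0), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)

# Ten high-noise trajectories: zero-centered Gaussian noise, std dev 2.
rng = np.random.default_rng(0)
noisy = sol.y[None, :, :] + rng.normal(0.0, 2.0, size=(10, 3, t_eval.size))

# Batch into training trajectories of two adjacent data points each.
pairs = [(t_eval[i:i + 2], noisy[:, :, i:i + 2]) for i in range(t_eval.size - 1)]
```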
Figure 9 shows the kernel density estimates for the various Bayesian inference methods. Since the level of noise is smaller for this example, the posterior estimates are narrower. There is also very little difference between the predicted Bayesian uncertainties. Figures 10, 11, and 12 show the trajectory predictions for the Laplace approximation, Markov Chain Monte Carlo, and variational inference respectively. All of the methods' 95% confidence intervals were able to capture the true trajectory. In terms of accuracy, there is no clear winner for the Lorenz attractor. However, in terms of speed, the Laplace approximation is the best choice.
Figure 9: For the Lorenz attractor, we show the kernel density estimates for the posterior distributions of the ODE's parameters obtained with a.) the Laplace Approximation, b.) Markov Chain Monte Carlo, and c.) Variational Inference.
Figure 10: For the Lorenz attractor, we show the predictive performance of the Bayesian polynomial neural ODE trained using the Laplace Approximation.
Figure 11: For the Lorenz attractor, we show the predictive performance of the Bayesian polynomial neural ODE trained using Markov Chain Monte Carlo.
Figure 12: For the Lorenz attractor, we show the predictive performance of the Bayesian polynomial neural ODE trained using Variational Inference.
### Learning Missing Terms from a Partially Known ODE Model
It is common for scientists to have an incomplete model of their system - one in which they are confident about certain processes governing the system, but unaware of other mechanisms. Rather than learning the whole system from scratch, we can incorporate the known parts of our model into the neural ODE and have the neural ODE suggest additional components of the ODE model given the observed data. Incorporating the known ODE model into the neural ODE framework is done simply by adding the output of the known equations to the output of the neural ODE; no special treatment is required aside from that (see Equation 4).
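Concretely, the hybrid right-hand side is just the sum of the known mechanistic terms and the network output. A minimal sketch of this idea (Equation 4) follows; `neural_ode` is a hypothetical stand-in for the trained polynomial network.

```python
import numpy as np

def known_rhs(u):
    """The terms the modeler is confident about: here, Lotka Volterra with
    the two unknown (highlighted) terms removed."""
    x, y = u
    return np.array([-x * y, -3.0 * y])

def hybrid_rhs(t, u, neural_ode):
    """Known model plus neural ODE output.

    `neural_ode` is any callable mapping the state to a 2-vector; the
    network only has to account for the residual (missing) dynamics.
    """
    return known_rhs(u) + neural_ode(u)
```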
We will use the Lotka Volterra Oscillator to demonstrate the ability of polynomial neural ODEs to learn missing terms from a partially known ODE model. We also show that Bayesian uncertainties can be obtained for the parameters in the terms that the neural ODE suggests including. As a reminder, the Lotka Volterra model is given by:
\[\frac{dx}{dt} =\boxed{1.5x}-xy, \tag{25}\] \[\frac{dy}{dt} =-3y+\boxed{xy}. \tag{26}\]
For this experiment, the highlighted terms are the ones we do not know. The goal will be to recover these terms along with posterior distributions for the values of the parameters. We used the same training data, GPR model for the initial conditions, and training process as was previously used in the Lotka Volterra example. The only difference was including the known ODE model (see Equation 4).
Figure 13 shows the posterior distributions recovered for all of the candidate terms to include in the final ODE model. The neural ODE was able to identify the missing terms with few false terms. Most of the terms that are not in the true model are predicted to be close to zero. As was seen in the previous examples, variational inference provides very narrow posterior distributions and MCMC provides results between the Laplace approximation and variational inference.
## IV Conclusion
This work addressed the problem of how to handle noisy data and recover uncertainty estimates for: (1) symbolic regression with deep polynomial neural networks and (2) polynomial neural ODEs. More broadly, we also helped to answer the question of how to handle noisy data and perform Bayesian inference on the general class of symbolic neural networks and symbolic neural ODEs.
We compared the following Bayesian inference methods: (a) the Laplace approximation, (b) Markov Chain Monte Carlo (MCMC) sampling methods, and (c) variational inference. We do not recommend using Markov Chain Monte Carlo for neural ODEs. Using MCMC for neural ODEs requires a substantial amount of patience; it is the most computationally expensive method, and we showed that the results are not encouraging. A substantial amount of development work needs to be devoted to addressing the challenges of using MCMC for neural ODEs in an effective manner. Variational inference is also challenging to use: some time must be spent choosing the mean and covariance matrices used to initialize the parameters. This process can be sped up by first obtaining point estimates for the parameters and using the values obtained to initialize the mean matrix. Using this approach made variational inference a viable option to implement. However, the posterior estimates generated by variational inference are consistently too narrow: it is too confident about its estimates.
The Laplace approximation is the easiest to implement and the fastest method. The main challenge associated with the Laplace approximation for neural networks is inverting the Fisher information matrix; however, most of the models in this class of problems are small enough that this is not an issue. Based on our experience, we recommend having no more than 50,000 parameters if you plan on using the Laplace approximation for a neural network and want to use the exact or pseudo inverse of the Fisher information matrix. We were initially skeptical about the Laplace approximation because it makes a Gaussian approximation for all of the parameters. However, we have shown that this approximation is not problematic when the polynomials are multiplied out. We have shown that the Laplace approximation has high accuracy. For these reasons, we recommend using the Laplace approximation for this class of problems.
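To make the computational point concrete, the entire Laplace step reduces to one matrix inversion around the MAP estimate. The sketch below is our own minimal illustration with hypothetical names, not the paper's implementation.

```python
import numpy as np

def laplace_posterior(theta_map, hessian):
    """Gaussian posterior N(theta_map, H^{-1}) centered at the MAP estimate.

    `hessian` is the Hessian (or a Fisher information approximation) of the
    negative log posterior evaluated at theta_map.
    """
    # For models well under ~50,000 parameters the exact or pseudo inverse
    # is tractable; pinv guards against a near-singular matrix.
    cov = np.linalg.pinv(hessian)
    return theta_map, cov

# Coefficient samples for the multiplied-out polynomials can then be drawn
# with np.random.multivariate_normal(theta_map, cov).
```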
Figure 13: It is common for a domain expert to understand part of the system’s underlying mechanisms, but have an incomplete model. Given an incomplete model, a neural ODE can learn the missing terms from the ODE model that best fit the observed data. We have removed two of the terms from the Lotka Volterra model and tested the neural ODE’s ability to learn the missing terms. We show the kernel density estimates for the posterior distributions of the polynomial coefficients obtained with a.) the Laplace Approximation, b.) Markov Chain Monte Carlo, and c.) Variational Inference. The true value of the coefficients is shown in the legend. The legend is shared for each of the columns.
Acknowledgements
The authors acknowledge research funding from NIBIB Award No. 2-R01-EB014877-04A1 (grant 2-R01-EB014877-04A1 to LRP). This work has benefited from our participation in Dagstuhl Seminar 22332 "Differential Equations and Continuous-Time Deep Learning" [25]. Many thanks to the organizers of this seminar: David Duvenaud (University of Toronto, CA), Markus Heinonen (Aalto University, FI), Michael Tiemann (Robert Bosch GmbH - Renningen, DE), and Max Welling (University of Amsterdam, NL). This work has also benefited from our participation in the University of Bonn's Hausdorff School: "Inverse problems for multiscale models." Many thanks to the organizers of this summer school: Lorenzo Contento, Jan Hasenauer, and Yannik Schalte. We'd like to thank Alexander Franks (UC Santa Barbara) for giving us the idea of using Monte Carlo for obtaining posterior distributions for the polynomial coefficients. We'd also like to thank Michael Tiemann and Katharina Ott (Robert Bosch GmbH - Renningen, DE) for recommending that we try the Laplace approximation on the polynomial neural ODEs. Lastly, we'd like to thank Patrick Kidger (Google) for recommending that we switch from PyTorch to JAX and providing resources for making the switch, as JAX has allowed us to do so much more.
Use was made of computational facilities purchased with funds from the National Science Foundation (CNS-1725797) and administered by the Center for Scientific Computing (CSC). The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR 2308708) at UC Santa Barbara.
This work was supported in part by National Science Foundation (NSF) awards CNS-1730158, ACI-1540112, ACI-1541349, OAC-1826967, OAC-2112167, CNS-2100237, CNS-2120019, the University of California Office of the President, and the University of California San Diego's California Institute for Telecommunications and Information Technology/Qualcomm Institute. Thanks to CENIC for the 100Gbps networks.
The content of the information does not necessarily reflect the position or the policy of the funding agencies, and no official endorsement should be inferred. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
|
2303.03544 | Expressivity of Shallow and Deep Neural Networks for Polynomial
Approximation | This study explores the number of neurons required for a Rectified Linear
Unit (ReLU) neural network to approximate multivariate monomials. We establish
an exponential lower bound on the complexity of any shallow network
approximating the product function over a general compact domain. We also
demonstrate this lower bound doesn't apply to normalized Lipschitz monomials
over the unit cube. These findings suggest that shallow ReLU networks
experience the curse of dimensionality when expressing functions with a
Lipschitz parameter scaling with the dimension of the input, and that the
expressive power of neural networks is more dependent on their depth rather
than overall complexity. | Itai Shapira | 2023-03-06T23:01:53Z | http://arxiv.org/abs/2303.03544v2 | # Expressivity of Shallow and Deep Neural Networks for Polynomial Approximation
###### Abstract
We delve into the required number of neurons for a Rectified Linear Unit (ReLU) neural network to approximate multivariate monomials. Our investigation establishes an exponential lower bound on the complexity of any shallow network approximating the product function \(\textbf{x}\mapsto\prod_{i=1}^{d}x_{i}\) over a general compact domain. Moreover, we demonstrate this lower bound does not apply to normalized \(\mathcal{O}(1)\)-Lipschitz monomials (or equivalently, by restricting to the unit cube). These results suggest that shallow ReLU networks suffer from the curse of dimensionality when expressing functions with a Lipschitz parameter scaling with the dimension of the input, and that the expressive power of neural networks lies in their depth rather than the overall complexity.
## 1 Introduction
Shallow neural networks, and by extension multi-hidden layer networks, are universal approximators. For every continuous non-polynomial activation function, the space of all shallow one-hidden layer networks is dense in \(C(K)\) in the uniform topology, for any compact \(K\subset\mathbb{R}^{d}\)(Pinkus (1999)). Density, however, does not imply an efficient scheme for approximation, namely the required number of neurons and trainable parameters. Indeed, the minimum number of neurons required to \(\varepsilon\)-approximate a large class of functions can be exponential in the dimension of the input. Specifically, for the unit ball of the Sobolev space of order \(r\) and dimension \(d\), Maiorov et al. (1999) show a lower bound of order \(\mathcal{O}(\varepsilon^{-\frac{d-1}{r}})\) and that the set of functions for which this lower bound holds is of large measure.
This lower bound reflects the so-called curse of dimensionality: the number of computational units necessary for approximation scales exponentially with the problem dimension. In this regard, shallow networks offer no superior order of approximation compared to other functional approximation schemes, such as polynomial approximation (Pinkus (1999)).
In contrast, _deep_ networks featuring multiple hidden layers have proven effective in high-dimensional applications such as image recognition and natural language processing. In light of this empirical success, numerous studies have examined the approximation capabilities of deep versus shallow networks. It has been suggested that the expressive power of neural networks is attributed to their depth. When compared to shallow networks of equivalent size, deeper networks may offer greater expressivity and can efficiently capture functions that would demand exponentially-wide shallow networks (refer to section 3 for a review). Possibly the most striking illustration of this is the _depth separation phenomena_: studies such as Eldan and Shamir (2016); Safran and Shamir (2017); Daniely (2017), and Venturi et al. (2021) demonstrate that certain functions can be efficiently represented using depth two networks, yet require exponentially wider networks for approximation by shallow, one-hidden-layer networks.
A particularly interesting case study for the complexity gap between deep and shallow neural networks involves the class of homogeneous multivariate polynomials of \(d\) variables, represented by
the monomials:
\[p_{d}(x_{1},..,x_{d}):=\prod_{i=1}^{d}x_{i}\]
i.e. the \(d\)-product function. Deep networks can approximate \(p_{d}\) efficiently. Monomials are _compositionally sparse_, implying they can be expressed by recursive compositions of the low-dimension function \((x,y)\mapsto xy\). Consequently, a tree-like network architecture with \(\log d\) layers can approximate \(p_{d}\) with a linear number of neurons (Mhaskar et al. (2016); Poggio et al. (2017)). This result prompts the following question:
_Can an \(\mathcal{O}(1)\)-layer network approximate \(\textbf{x}\mapsto\prod_{i=1}^{d}x_{i}\) over a general compact domain \([-k,k]^{d}\) with only \(\text{poly}(d)\) neurons?_
Lin et al. (2017) and Rolnick and Tegmark (2017) studied this problem in the context of _exact approximation_, that is, how many neurons are necessary to have the property that for any precision \(\varepsilon>0\), there exists a weight assignment that approximate \(p_{d}\). For this notion of approximation, they demonstrated that an exponential number of neurons is needed for a shallow network with a smooth activation function. However, it remained uncertain whether a smaller network could be constructed to approximate \(p_{d}\) arbitrarily well in the standard notion of approximation, where the number of neurons is allowed to vary with the level of accuracy \(\varepsilon\).
Blanchard and Bennouna (2021) recently constructed a two-layer ReLU network with \(\text{poly}(d)\) neurons that approximates the product function over \([0,1]^{d}\). This corresponds to the normalized monomial \(\textbf{x}\mapsto\frac{1}{k^{d}}\prod_{i=1}^{d}x_{i}\) over \([0,k]^{d}\), where \(k\) is a constant. However, we demonstrate that the choice of approximation domain \([0,1]^{d}\) (or equivalently, the normalization factor \(k^{-d}\)) hides a gap between the positive and negative answers to the question of the expressive power of ReLU networks in the context of homogeneous polynomials.
In this paper, we offer a characterization of the expressive power of ReLU networks for homogeneous polynomials. We demonstrate that over the domain \((1,k]^{d}\), the product function acts as an expansive map with an expansion constant that scales exponentially with the dimension. As a result, we prove that no \(\mathcal{O}(1)\)-layer ReLU network exists that can approximate \(p_{d}\) with at most \(\text{poly}(d)\) neurons. Conversely, for the normalized case where the Lipschitz parameter is independent of the dimension, we extend the result of Blanchard and Bennouna (2021). We show that \(p_{d}\) can be efficiently approximated by a one-hidden-layer shallow network with a smooth or ReLU activation function. In particular, we reveal that no depth separation exists between one- and two-hidden-layer networks when approximating the class of homogeneous polynomials.
Our main contributions can be summarized as follows:
* In Section 4, we demonstrate that the minimum number of neurons required by any ReLU \(\mathcal{O}(1)\)-layer network to approximate multivariate monomials **scales exponentially with the dimension** over \([-k,k]^{d}\) with \(k>1\). More specifically, we prove the following theorem: **Theorem 1**.: _If a ReLU network with \(L\) layers and at most \(n\) neurons in each layer \(\varepsilon\)-approximates \(\textbf{x}\mapsto\prod_{i=1}^{d}x_{i}\) over \([-k,k]^{d}\) for \(k>1\), then \(n=\exp(\mathcal{O}(\frac{d}{L}\ln(\varepsilon^{-1})))\). (See Theorem 3 for a formal statement)_
* In Section 5, we demonstrate that the normalized monomial \(\textbf{x}\mapsto k^{-d}\prod_{i=1}^{d}x_{i}\) over \([0,k]^{d}\) (equivalently, \(p_{d}\) over \([0,1]^{d}\)) can be **approximated using a one-hidden layer ReLU network with \(\text{poly}(d)\) neurons**
## 2 Preliminaries
### Feedforward Neural Networks
Let \(K\subset\mathbb{R}^{d}\) be a compact set and \((C(K),||\cdot||_{\infty})\) denote the space of all continuous functions on \(K\), equipped with the uniform norm: \(||f||=\max_{x\in K}|f(x)|\). In this work, we consider the standard model of feedforward neural networks, using linear output neurons and a non-linear
continuous activation function \(\sigma:\mathbb{R}\longrightarrow\mathbb{R}\) for the other neurons. Following the notation in Pinkus (1999), we denote by \(\mathcal{M}^{1}_{n}(\sigma)\) the set of all 1-hidden layer neural networks:
\[\mathcal{M}^{1}_{n}(\sigma)=\bigg{\{}\sum_{i=1}^{n}\nu_{i}\sigma(\textbf{w}_{i} ^{T}\textbf{x}+b_{i})\mid\nu_{i},b_{i}\in\mathbb{R},\textbf{w}_{i}\in\mathbb{R} ^{d}\bigg{\}}\]
Throughout this work, we adopt the convention of referring to \(f\in\mathcal{M}^{1}_{n}(\sigma)\) as a _shallow network_. For brevity, we also employ matrix notation \(\mathcal{M}^{1}_{n}(\sigma)=\textbf{A}_{1}\sigma(\textbf{A}_{0}\textbf{x}+ \textbf{b}_{0})\), where \(\textbf{A}_{0}\) is an \(n\times d\) matrix, \(\textbf{A}_{1}\) is a \(1\times n\) matrix and \(\sigma\) is applied element-wise to vectors. The total number of trainable parameters is \((d+2)n\). A deep neural network with \(L\) hidden layers is obtained by feeding the outputs of a given layer as inputs to the next:
\[\mathcal{M}^{L}_{n}(\sigma)=\bigg{\{}\textbf{A}_{L}(\sigma\cdots\sigma( \textbf{A}_{1}(\sigma(\textbf{A}_{0}\textbf{x}+\textbf{b}_{0}))+\textbf{b}_{1 })\cdots)\bigg{\}}\]
where \(n\) is the maximum number of neurons in each hidden layer. The constant \(L\) is referred to as the _depth_ of the network.
We will consider two types of activation functions: piece-wise linear activation, such as the popular rectified linear unit (ReLU) \(\sigma(x):=\max\{x,0\}\); and general smooth non-polynomial activation functions, such as the exponential function \(\exp(x):=e^{x}\).
### Approximation Complexity
We will consider \(L^{\infty}\)-error of approximation. We say that a network \(g\in\mathcal{M}^{L}_{n}(\sigma)\)\(\varepsilon\)-approximates a function \(f\) if \(||f-g||<\varepsilon\) in the uniform topology. We measure the complexity of the network by the number of neurons and non-zero weights in the network. We are interested in evaluating the number of neurons needed to approximate a given function within \(\varepsilon\), and especially how this number scales with the dimension of the problem \(d\) and with the accuracy level \(\varepsilon\).
Roughly speaking, we consider shallow networks to be inefficient in approximating a sequence of functions \((f_{d}:\mathbb{R}^{d}\longrightarrow\mathbb{R})_{d\in\mathbb{N}}\), if the minimum number of neurons needed to \(\varepsilon\)-approximate the sequence grows exponentially with \(d\). Conversely, for any fixed \(\varepsilon\), polynomial dependency on \(d\) is considered efficient. Note, however, that we do not require a polynomial dependence on \(\varepsilon^{-1}\) and \(d\) simultaneously. For the remainder of the paper, we will use \(\mathcal{O}\) notation which hides constants independent of \(d\).
## 3 Related Work
**Slow approximation by shallow networks in standard function spaces.** Several studies have shown that shallow networks are inefficient in approximating Sobolev functions. Maiorov et al. (1999) consider the rate of approximation by arbitrary ridge functions. They show a lower bound that scales exponentially with \(d\), thus demonstrating the inherent inefficiency of _any_ shallow-like approximation schemes. DeVore et al. (1989) proved that any continuous function approximator that \(\varepsilon\)-approximates functions from the unit ball in the Sobolev space of order \(r\) and dimension \(d\) needs at least \(\Theta(\varepsilon^{-\frac{d}{r}})\) parameters (note, however, that the optimal weight selection is generally not continuous). Pinkus (1999) and Yarotsky (2017) prove the existence of norm-one Sobolev functions that cannot be approximated efficiently by shallow network, for smooth and ReLU activation functions, respectively.
**Faster approximation by deep networks.** Several studies have shown that deeper networks perform better for a given number of neurons. This indicates that the expressive power of neural networks lies in their depth rather than the overall complexity. Telgarsky (2015) shows a \(k\)-layer \(\mathcal{O}(1)\)-wide 1-dimensional ReLU network which oscillates \(\mathcal{O}(2^{k})\) times and cannot be approximated by a shallow network of size polynomial in \(k\). Yarotsky (2017) shows every \(f\in C^{2}([0,1]^{d})\) cannot be \(\varepsilon\)-approximated by an \(L\)-deep ReLU network with fewer than \(\mathcal{O}(\varepsilon^{-\frac{1}{2L}})\) neurons, demonstrating the efficiency of increased depth. Mhaskar et al. (2016) and Poggio et al. (2017) show that deep neural networks can efficiently express _compositionally sparse_ functions, i.e. functions that can be expressed by recursive compositions of low-dimension functions. Other authors considered the power of deeper networks of different types. The exponential benefit of depth was shown
by Delalleau and Bengio (2011) (networks consisting of sum and product nodes) and Cohen et al. (2016) (convolutional arithmetic circuit architecture that incorporates locality, sharing, and pooling).
**Separation gaps.** Several works have studied the gap in expressivity between one-hidden-layer and two-hidden-layer networks, and have proved the existence of functions that can be efficiently approximated by two-hidden-layer networks, but require exponential width shallow networks. Eldan and Shamir (2016) prove a separation gap for rapidly oscillating radial functions using ReLU networks. Similar separation gaps have been shown by Daniely (2017); Safran and Shamir (2017); Venturi et al. (2021).
**Flattening results.** Recent work studied the complexity cost of flattening deep networks into shallow ones. Safran et al. (2019) discuss flattening networks which approximate \(\mathcal{O}(1)\)-Lipschitz radial functions. Venturi et al. (2021) show that functions with an \(\mathcal{O}(1)\)-rate of oscillation can be approximated by one-hidden-layer networks.
**Approximation of the product function.** The product function is a special case of compositionally-sparse functions and thus can be approximated by deep networks, as shown in Mhaskar et al. (2016) and Poggio et al. (2017). Lin et al. (2017) and Rolnick and Tegmark (2017) studied the _exact_-approximation capabilities of shallow and deep networks in approximating this function. They proved that if \(m(\varepsilon)\) is the minimum number of neurons required by a smooth shallow network to \(\varepsilon\)-approximate \(p_{d}\), then \(\lim_{\varepsilon\to 0}m(\varepsilon)\) exists and equals to \(2^{d}\) (In Appendix B, we attached a slightly shorter proof). More recently, Blanchard and Bennouna (2021) constructed a two-hidden-layer ReLU architecture that \(\varepsilon\)-approximates the normalized \(p_{d}\) with \(\mathcal{O}(d^{\frac{3}{2}}\varepsilon^{-\frac{1}{2}}\ln\varepsilon^{-1})\) neurons.
## 4 The Inefficiency of Shallow Networks on a General Compact Domain
In this section, we provide an exponential lower bound on the complexity of a ReLU network that \(\varepsilon\)-approximates the multivariate monomial, by counting the number of regions on which the network is linear.
Shallow networks with a piecewise linear activation function compute piecewise linear functions. ReLU activation functions, of the form \(\max\{\textbf{w}^{T}\textbf{x}+b,0\}\), operate in one of two modes - they either output \(0\) or a linear function of the input. The boundary between these two behaviors is the hyperplane \(H=\{\textbf{x}\mid\textbf{w}^{T}\textbf{x}+b=0\}\) which splits the input space \(\mathbb{R}^{d}\) into two pieces. A shallow network with \(n\) neurons forms a \(d\)-dimensional _hyperplane arrangement_\(\{H_{i}\subset\mathbb{R}^{d}\}_{i\in[n]}\). A _linear region_ of an arrangement is a connected component of \(\mathbb{R}^{d}\setminus\bigcup_{i\in[n]}H_{i}\)(Pascanu et al. (2013)). In every such region, the inference function of the network is affine linear.
An arbitrary non-linear function, when defined on a sufficiently large set, cannot be approximated by linear functions. For the sake of building intuition for the below result, consider the following example inspired by Liang and Srikant (2016) (Theorem 11). Suppose that \(f:\mathbb{R}^{d}\longrightarrow\mathbb{R}\) is strongly-convex with parameter \(m>0\) and \(T\) is a linear function that \(\varepsilon\)-approximates \(f\) over the domain \(K\). Then the error function \(g(x):=f(x)-T(x)\) is also strongly convex. If \(\textbf{x}_{0}\in\mathbb{R}^{d}\) is any point with \(\nabla g(\textbf{x}_{0})=0\), then by definition:
\[2\varepsilon>g(\textbf{y})-g(\textbf{x}_{0})\geq\frac{m}{2}||\textbf{y}-\textbf{x}_{0}||_{2}^{2}\]
that is, in order for \(T\) to \(\varepsilon\)-approximates \(f\), the domain \(K\) must have a relatively small diameter. By finding an upper bound on the number of linear regions generated by the network, we can derive a lower bound on the number of neurons required to approximate a general (not necessarily strongly convex) function \(f\).
The topic of counting the number of linear regions generated by a ReLU network has been addressed by several authors (Montufar et al. (2014); Telgarsky (2015); Pascanu et al. (2013); see also Zaslavsky (1975)). The function represented by the network can have a number of linear pieces that is exponential in the number of layers, but is at most polynomial in the number of neurons (Hanin and Rolnick (2019)). An upper bound for the case of \(d=1\) is given by Telgarsky (2015) (see also Yarotsky (2017) (Lemma 4)):
**Lemma 2** (Telgarsky (2015) (Lemma 2.1)).: _Consider \(f\in\mathcal{M}_{n}^{L}(\sigma)\), a \(1\)-dimensional ReLU network with \(L\) hidden layers and no more than \(n\) neurons in each hidden layer. The number of linear pieces in \(f\) is at most \((2n)^{L}\)._
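As an empirical companion to Lemma 2, the sketch below (our own; it detects slope changes on a dense grid, so counts are approximate when kinks are very closely spaced) counts the linear pieces of a univariate ReLU network:

```python
import numpy as np

def count_linear_pieces(f, lo=-1.0, hi=1.0, n=20001, tol=1e-6):
    """Estimate the number of linear pieces of a univariate piecewise-linear f."""
    x = np.linspace(lo, hi, n)
    s = np.diff(f(x)) / np.diff(x)          # per-interval slopes
    changes = np.abs(np.diff(s)) > tol
    # A kink strictly inside a grid interval flags two consecutive slope
    # changes; merge adjacent flags so each breakpoint is counted once.
    kinks = np.sum(changes & ~np.roll(changes, 1))
    return int(kinks) + 1

# A random one-hidden-layer ReLU net with 8 neurons: at most 8 + 1 = 9
# pieces in practice, comfortably below Lemma 2's bound of (2 * 8)^1 = 16.
rng = np.random.default_rng(0)
w, b, v = rng.normal(size=(3, 8))
f = lambda x: np.maximum(np.outer(x, w) + b, 0.0) @ v
print(count_linear_pieces(f))
```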
Using Lemma 2, we can now prove the main result of this section:
**Theorem 3**.: _If \(k>1\) and \(\varepsilon>0\), then any network \(f\in\mathcal{M}_{n}^{L}(\text{ReLU})\) that \(\varepsilon\)-approximates the multivariate monomial \(p_{d}(\textbf{x})=\prod_{i=1}^{d}x_{i}\) on \(K:=[-k,k]^{d}\), must satisfy \(n>C_{1}\varepsilon^{-\frac{1}{2L}}d^{\frac{1}{2L}}\exp(C_{2}\frac{d}{L})\), where \(C_{1},C_{2}>0\) are constants independent of \(d\) and \(\varepsilon\). Specifically, \(n\) must scale exponentially with \(d\)._
Proof.: The network \(f\)\(\varepsilon\)-approximates \(p_{d}\) in \(K\) and, in particular, approximates \(p_{d}\) along the direction \(\lambda(1,1,..,1)\). We define \(\tilde{p}_{d},\tilde{f}:[\frac{k+1}{2},k]\longrightarrow\mathbb{R}\) as the restrictions of \(p_{d}\) and \(f\) respectively along this direction:
\[\tilde{f}(x)=f(x,...,x)\quad\tilde{p}_{d}(x)=x^{d}\]
clearly \(|\tilde{f}(x)-\tilde{p}_{d}(x)|<\varepsilon\) for every \(x\in[\frac{k+1}{2},k]\). Additionally, \(\tilde{f}\) is a univariate ReLU network with \(L\) hidden layers and at most \(n\) neurons in each layer, obtainable from \(f\) by replacing the \(d\) input units with one input and modifying the connections accordingly.
\(\tilde{f}\) is a continuous piecewise affine linear function with at most \((2n)^{L}\) linear pieces (Lemma 2). Therefore, the domain \([\frac{k+1}{2},k]\) is partitioned into at most \((2n)^{L}\) intervals on which \(\tilde{f}\) is linear. Hence, there exists an interval \([a,b]\subset[\frac{k+1}{2},k]\) with \(b-a\geq\frac{k-1}{2(2n)^{L}}\) such that \(\tilde{f}\) is linear on \([a,b]\) and \(\varepsilon\)-approximates \(x^{d}\). Denote by \(\tilde{h}\) the error function \(\tilde{h}:=\tilde{p}_{d}(x)-\tilde{f}(x)\). Notice that \(|\tilde{h}(x)|<\varepsilon\) for all \(x\in[a,b]\).
Consider the following three points \(a,b\) and \(\frac{a+b}{2}\in[a,b]\). It follows from the linearity of \(\tilde{f}\):
\[\begin{split} 4\varepsilon>\tilde{h}(b)+\tilde{h}(a)-2\tilde{h}( \frac{a+b}{2})&=\left(\tilde{p}_{d}(b)-\tilde{p}_{d}(\frac{a+b}{ 2})\right)+\left(\tilde{p}_{d}(a)-\tilde{p}_{d}(\frac{a+b}{2})\right)\\ &+\left(\tilde{f}(\frac{a+b}{2})-\tilde{f}(b)\right)+\left(\tilde {f}(\frac{a+b}{2})-\tilde{f}(a)\right)\\ &=\left(\tilde{p}_{d}(b)-\tilde{p}_{d}(\frac{a+b}{2})\right)+ \left(\tilde{p}_{d}(a)-\tilde{p}_{d}(\frac{a+b}{2})\right)\end{split} \tag{1}\]
Notice that on \([\frac{k+1}{2},k]\), the function \(\tilde{p}_{d}\) is strongly convex with parameter \(m:=d(d-1)(\frac{k+1}{2})^{d-2}\). It then follows that:
\[\tilde{p}_{d}(b)+\tilde{p}_{d}(a)-2\tilde{p}_{d}(\frac{a+b}{2})\geq\frac{m}{ 4}(b-a)^{2}\]
It follows that: \(4\varepsilon>\frac{m}{4}(b-a)^{2}\) and using the fact that \(b-a\geq\frac{k-1}{2(2n)^{L}}\):
\[4\varepsilon>\frac{d(d-1)}{16}(\frac{k+1}{2})^{d-2}(k-1)^{2}(2n)^{-2L}\]
that is, there exist \(C_{1},C_{2}>0\) independent of \(d\) and \(\varepsilon\) with \(n>C_{1}\varepsilon^{-\frac{1}{2L}}d^{\frac{1}{2L}}\exp(C_{2}\frac{d}{L})\).
## Discussion
Theorem 3 tells us that \(\varepsilon\)-approximating the product function outside the unit cube requires exponentially wide \(\mathcal{O}(1)\)-deep ReLU networks. The proof relies on the fact that \(\textbf{x}\mapsto\prod_{i=1}^{d}x_{i}\) is expansive, with an expansion factor that scales exponentially with \(d\). In contrast, over \([0,1]^{d}\), \(p_{d}\) is \(1\)-Lipschitz, and this Lipschitz parameter does not scale with \(d\). Note that expressing \(p_{d}\) over this domain is equivalent to approximate the _normalized monomial_:
\[\textbf{x}\in[0,k]^{d}\mapsto\frac{1}{k^{d}}\prod_{i=1}^{d}x_{i} \tag{2}\]
Recently, Blanchard and Bennouna (2021) showed the following:
**Theorem 4** (Blanchard and Bennouna (2021), Proposition 3.2).: _For all \(\varepsilon>0\), there exists a two-hidden-layer ReLU network with \(\text{poly}(d)\) neurons that \(\varepsilon\)-approximates \(\textbf{x}\in[0,1]^{d}\mapsto\prod_{i=1}^{d}x_{i}\)_
Together, Theorems 3 and 4 suggest that the curse of dimensionality of shallow ReLU networks is closely related to the derivative of the objective function and how its magnitude scales with the dimension. Analogously, in a seminal work, Eldan and Shamir (2016) and Safran et al. (2019) investigated radial functions of the form \(f(\textbf{x})=\varphi(||\textbf{x}||)\). The former demonstrated that no one-hidden layer network can approximate \(f\) if \(\varphi\) is rapidly oscillating, with a Lipschitz parameter scaling polynomially with \(d\). Meanwhile, the latter established that if \(\varphi\) is \(\mathcal{O}(1)\)-Lipschitz, approximation by a shallow network is feasible with \(\text{poly}(d)\) neurons. In our framework, the proof of Theorem 4 relies on the observation that \(p_{d}(\textbf{x})=\exp(\sum_{i=1}^{d}\ln x_{i})\) and that \(\exp\) is \(\mathcal{O}(1)\)-Lipschitz if \(x_{i}\leq 1\).
## 5 Fast Approximation Using Shallow Networks For Normalized Monomials
Extending the findings of Blanchard and Bennouna (2021), in this section we raise the following question:
_Is there a depth separation between one and two-hidden-layer networks in the approximation of the normalized monomial?_
In other words, does a shallow counterpart to Theorem 4 with \(\text{poly}(d)\) number of neurons exist? In the following theorem, we address this question by constructing a shallow ReLU network with \(\text{poly}(d)\) number of neurons that \(\varepsilon\)-approximates the normalized \(p_{d}\).
**Theorem 5**.: _Let \(k>0\). For any \(\varepsilon>0\), there exists a shallow network \(\hat{f}\in\mathcal{M}^{1}_{n}(\text{ReLU})\) that \(\varepsilon\)-approximates \(p_{d}:[0,k]^{d}\longrightarrow\mathbb{R}\), defined by \(p_{d}(\textbf{x})=\frac{1}{k^{d}}\prod_{i=1}^{d}x_{i}\), with \(n=\text{poly}(d)\)._
The full proof can be found in Appendix A.1. The proof is constructive and built upon the following observation: there exists a polynomial \(q_{r}\) of degree \(r\), where \(r\) depends solely on \(\varepsilon^{-1}\), that \(\varepsilon\)-approximates the univariate function \(x\mapsto e^{x}\). Thus, we have:
\[p_{d}(\textbf{x})=\exp\bigg{(}\sum_{i=1}^{d}\ln x_{i}\bigg{)}\approx q_{r} \bigg{(}\sum_{i=1}^{d}\ln x_{i}\bigg{)} \tag{3}\]
which reduces the problem to approximating polynomials of degree at most \(r\) in the \(d\) variables \((\ln x_{i})_{i=1}^{d}\). Observe that there are only \(\binom{d+r}{r}\leq(d+1)^{r}\) elements in the right-hand side of equation (3). This observation enables us to "flatten" the network in Theorem 4 into a one-hidden-layer network. As suggested by Theorem 3, the normalization factor in equation (2) is critical for the proof. Similar flattening results are shown in Safran et al. (2019) Theorem 1 (for Lipschitz radial functions) and in the more general Venturi et al. (2021) Theorem 11 (using Fourier networks). The crux of the proof is demonstrating that the product function meets the conditions of the latter result, with the remaining details presented in a simplified form for completeness.
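The identity driving this construction is easy to check numerically. The sketch below is our own illustration: `q_r` is a truncated Taylor polynomial standing in for the polynomial in equation (3), and the inputs are drawn inside the unit cube so the sum of logs stays moderate.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
d = 10
x = rng.uniform(0.5, 1.0, size=d)           # points inside the unit cube

s = float(np.sum(np.log(x)))                # the inner "sum of logs" layer
q_r = lambda s, r=30: sum(s ** k / math.factorial(k) for k in range(r + 1))

# exp(sum(log x_i)) == prod(x_i); the degree of q_r only needs to grow with
# the accuracy, not with d, once the inputs are normalized to [0, 1].
print(np.prod(x), math.exp(s), q_r(s))      # the three values agree closely
```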
**Proof Sketch.** Essentially, we construct a two-layer network akin to Theorem 4 with the \(\exp\) activation function and then employ the framework previously described to demonstrate that flattening to a shallow network is feasible with a polynomial cost in complexity.
The high-level strategy for this proof proceeds in several key stages. Initially, a two-hidden-layer network is constructed with the \(\exp:x\mapsto e^{x}\) activation function, such that \(\hat{f}(\textbf{x})=\exp\bigg{(}\sum_{i=1}^{n}\nu_{i}\exp(\textbf{w}_{i} \textbf{x})\bigg{)}\) approximates \(p_{d}\). Additionally, it is demonstrated that it is feasible to control the value of the coefficients within the network.
Following this, the proof establishes that the non-linearity of the second hidden layer can be approximated by a polynomial with a linear cost in \(\varepsilon^{-1}\). For a specific univariate polynomial \(q\), whose degree depends solely on \(\varepsilon\), \(\hat{f}\) is substituted by
\[q\bigg{(}\sum_{i=1}^{n}\nu_{i}\exp(\textbf{w}_{i}\textbf{x})\bigg{)} \tag{4}\]
It is important to note that the normalization factor in (2) is essential for this stage of the proof. We then leverage properties of the exponential function, specifically the fact that \(\exp(\mathbf{w}\mathbf{x})^{m}=\exp((m\mathbf{w})\mathbf{x})\), to demonstrate that (4) is a shallow \(\exp\) network. An analogous result for shallow Fourier neural networks is presented in Venturi et al. (2021) Lemma 33. The proof is completed by demonstrating that the \(\exp\) activation function in the shallow network can be replaced with the ReLU activation function.
## 6 Conclusion
We have established results describing the expressive power of \(\mathcal{O}(1)\)-ReLU-networks in the context of approximating the class of homogeneous multivariate polynomials.
**Deep vs shallow**. Our investigation provides further evidence that deep ReLU networks demonstrate superior efficiency in expressing homogeneous polynomials. The number of computational units necessary for expressing the product function diminishes significantly with increased depth, as evidenced by the lower bound in Theorem 3. With \(L=\log d\) layers, a deep network can efficiently express this function.
**The curse of dimensionality.** Our findings suggest that the product function can be efficiently expressed using a neural network if the network is sufficiently deep to exploit the computational structure of the function (Poggio et al. (2017)), or if it operates on a domain in which its Lipschitz constant does not grow with the dimension. This observation aligns with the surprising recent result in Safran et al. (2019). In their respective works, Daniely (2017) and Eldan and Shamir (2016) demonstrated that functions of the form \(x\mapsto\varphi(||x||)\) can be approximated by depth-two networks, leveraging the computational structure. However, unless \(\varphi\) is \(\mathcal{O}(1)\)-Lipschitz (Safran et al. (2019)), it cannot be expressed efficiently using a one-hidden-layer network. |
2302.12774 | Automated Lesion Segmentation in Whole-Body FDG-PET/CT with
Multi-modality Deep Neural Networks | Recent progress in automated PET/CT lesion segmentation using deep learning
methods has demonstrated the feasibility of this task. However, tumor lesion
detection and segmentation in whole-body PET/CT is still a challenging task.
To promote research on machine learning-based automated tumor lesion
segmentation on whole-body FDG-PET/CT data, Automated Lesion Segmentation in
Whole-Body FDG-PET/CT (autoPET) challenge is held, and a large, publicly
available training dataset is provided. In this report, we present our solution
to the autoPET challenge. We employ multi-modal residual U-Net with deep
supervision. The experimental results for five preliminary test cases show that Dice
score is 0.79 +/- 0.21. | Satoshi Kondo, Satoshi Kasai | 2023-02-16T01:05:54Z | http://arxiv.org/abs/2302.12774v1 | # Automated Lesion Segmentation in Whole-Body FDG-PET/CT with Multi-modality Deep Neural Networks
###### Abstract
Recent progress in automated PET/CT lesion segmentation using deep learning methods has demonstrated the feasibility of this task. However, tumor lesion detection and segmentation in whole-body PET/CT is still a challenging task. To promote research on machine learning-based automated tumor lesion segmentation on whole-body FDG-PET/CT data, Automated Lesion Segmentation in Whole-Body FDG-PET/CT (autoPET) challenge is held, and a large, publicly available training dataset is provided. In this report, we present our solution to the autoPET challenge. We employ multi-modal residual U-Net with deep supervision. The experimental results for five preliminary test cases show that Dice score is 0.79 \(\pm\) 0.21.
Keywords: FDG-PET/CT, Lesion segmentation, Multi-modality
## 1 Introduction
Recent progress in automated PET/CT lesion segmentation using deep learning methods has demonstrated the feasibility of this task. However, tumor lesion detection and segmentation in whole-body PET/CT is still a challenging task. One bottleneck for progress in automated PET lesion segmentation is the limited availability of training data. To promote research on machine learning-based automated tumor lesion segmentation on whole-body FDG-PET/CT data, Automated Lesion Segmentation in Whole-Body FDG-PET/CT (autoPET) challenge is held, and a large, publicly available training dataset is provided.
In this report, we present our solution to the autoPET challenge. We employ a residual U-Net with deep supervision in a multi-modality fashion.
## 2 Proposed Method
The input data for lesion segmentation are whole-body PET/CT volumes; two volumes, i.e., CT and SUV (Standardized Uptake Value, obtained from PET), are provided for each case. CT and PET volumes are acquired simultaneously on a single PET/CT scanner in one session; thus, the CT and PET (SUV) volumes are anatomically aligned up to minor shifts due to physiological motion.
We use 3D encoder-decoder networks for the segmentation task. Our base model is a residual U-Net with deep supervision [2]. The two input volumes are first resampled to a voxel spacing of [2 mm, 2 mm, 3 mm] in the x, y and z directions, respectively. The CT and SUV volumes are then normalized: the minimum and maximum values are -100 and 250, respectively, for CT volumes, and 0 and 15, respectively, for SUV volumes. In the training phase, we randomly sample 3D patches from the input volumes. The size of a 3D patch is 48 x 48 x 32 voxels, and we sample 12 patches from each volume. When the volume includes lesions, the ratio of positive to negative patches in the sampling for one input volume is 3:1. We do not apply any augmentation. The CT and SUV patches are concatenated into two-channel patches, which are then fed into the segmentation network.
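A possible realization of this preprocessing with MONAI's dictionary transforms is sketched below. The key names ("ct", "suv", "label") and the scaling of both modalities to [0, 1] are our assumptions; the paper does not publish its exact pipeline.

```python
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Spacingd,
    ScaleIntensityRanged, ConcatItemsd, RandCropByPosNegLabeld,
)

train_transforms = Compose([
    LoadImaged(keys=["ct", "suv", "label"]),
    EnsureChannelFirstd(keys=["ct", "suv", "label"]),
    # Resample to 2 x 2 x 3 mm voxels in x, y, z.
    Spacingd(keys=["ct", "suv", "label"], pixdim=(2.0, 2.0, 3.0),
             mode=("bilinear", "bilinear", "nearest")),
    # Clip-and-scale: CT to [-100, 250], SUV to [0, 15].
    ScaleIntensityRanged(keys="ct", a_min=-100, a_max=250,
                         b_min=0.0, b_max=1.0, clip=True),
    ScaleIntensityRanged(keys="suv", a_min=0, a_max=15,
                         b_min=0.0, b_max=1.0, clip=True),
    # Stack CT and SUV into one two-channel volume.
    ConcatItemsd(keys=["ct", "suv"], name="image"),
    # 12 patches of 48 x 48 x 32 per volume, positives:negatives = 3:1.
    RandCropByPosNegLabeld(keys=["image", "label"], label_key="label",
                           spatial_size=(48, 48, 32),
                           pos=3, neg=1, num_samples=12),
])
```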
The loss function is a weighted summation of Dice loss and cross entropy loss. The weights for the Dice and cross entropy losses are 1 and 0.5, respectively. We also employ deep supervision in the loss calculation: intermediate outputs from several layers in the decoder of the model are up-sampled, a loss value is calculated for each up-sampled output, and the loss values are aggregated. The number of layers used in the deep supervision is two.
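One way to realize this loss in MONAI/PyTorch is sketched below; the up-sampling mode and the unweighted aggregation of the auxiliary losses are our assumptions.

```python
import torch.nn.functional as F
from monai.losses import DiceCELoss

# Weighted sum of Dice (1.0) and cross entropy (0.5), as described above.
criterion = DiceCELoss(sigmoid=True, lambda_dice=1.0, lambda_ce=0.5)

def deep_supervision_loss(main_out, aux_outs, label):
    """aux_outs: intermediate decoder outputs (two layers in this report),
    up-sampled to the label resolution before loss computation."""
    loss = criterion(main_out, label)
    for aux in aux_outs:
        aux = F.interpolate(aux, size=label.shape[2:], mode="trilinear")
        loss = loss + criterion(aux, label)
    return loss
```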
We train multiple models. Each model is trained independently using different combinations of training and validation datasets, and the inference results are obtained by an ensemble of the outputs from the models. The final likelihood score is obtained by averaging the likelihood scores from the models. We use five models in our experiments.
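The ensembling step then reduces to averaging per-model likelihood maps; a minimal sketch with hypothetical names:

```python
import torch

def ensemble_predict(models, volume):
    """Average the sigmoid likelihood maps of independently trained models
    (five models in this report)."""
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(volume)) for m in models])
    return probs.mean(dim=0)
```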
## 3 Experiments
The dataset for training consists of 1014 studies of 900 patients acquired at a single site. The dataset for preliminary evaluation consists of 5 studies.
Our method is implemented mainly using the PyTorch [3], PyTorch Lightning and MONAI libraries. We use three Nvidia RTX3090 GPUs for training.
For the training of the segmentation model, the optimizer is Adam [4] and the learning rate follows a cosine annealing schedule. The initial learning rate is 0.001, and the number of epochs is 300. The model with the lowest loss value on the validation dataset is selected as the final model.
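The corresponding PyTorch setup is a one-liner each; the stand-in model below is a hypothetical placeholder for the U-Net.

```python
import torch

model = torch.nn.Conv3d(2, 1, 3, padding=1)   # stand-in for the residual U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Cosine annealing over the 300 training epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
```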
We evaluated our method with the evaluation system provided by the organizers of the autoPET challenge. There are three evaluation metrics. The first is the foreground Dice score of segmented lesions. The second is the volume of false positive connected components that do not overlap with positives, i.e., the false positive volume. The third is the volume of positive connected components in the ground truth that do not overlap with the estimated segmentation mask, i.e., the false negative volume.
The results of our submission are Dice score is 0.79 \(\pm\) 0.21, false positive volume is 0.29 \(\pm\) 0.66, and false negative volume is 14.27 \(\pm\) 17.31.
## 4 Conclusions
In this report, we presented our solution for the autoPET challenge. We employ multi-modal residual U-Net with deep supervision. The experimental results for five preliminary test cases show that Dice score is 0.79 \(\pm\) 0.21.
|
2301.04320 | Rethinking complex-valued deep neural networks for monaural speech
enhancement | Despite multiple efforts made towards adopting complex-valued deep neural
networks (DNNs), it remains an open question whether complex-valued DNNs are
generally more effective than real-valued DNNs for monaural speech enhancement.
This work is devoted to presenting a critical assessment by systematically
examining complex-valued DNNs against their real-valued counterparts.
Specifically, we investigate complex-valued DNN atomic units, including linear
layers, convolutional layers, long short-term memory (LSTM), and gated linear
units. By comparing complex- and real-valued versions of fundamental building
blocks in the recently developed gated convolutional recurrent network (GCRN),
we show how different mechanisms for basic blocks affect the performance. We
also find that the use of complex-valued operations hinders the model capacity
when the model size is small. In addition, we examine two recent complex-valued
DNNs, i.e. deep complex convolutional recurrent network (DCCRN) and deep
complex U-Net (DCUNET). Evaluation results show that both DNNs produce
identical performance to their real-valued counterparts while requiring much
more computation. Based on these comprehensive comparisons, we conclude that
complex-valued DNNs do not provide a performance gain over their real-valued
counterparts for monaural speech enhancement, and thus are less desirable due
to their higher computational costs. | Haibin Wu, Ke Tan, Buye Xu, Anurag Kumar, Daniel Wong | 2023-01-11T05:59:50Z | http://arxiv.org/abs/2301.04320v1 | # Rethinking Complex-Valued Deep Neural Networks for Monaural Speech Enhancement
###### Abstract
Despite multiple efforts made towards adopting complex-valued deep neural networks (DNNs), it remains an open question whether complex-valued DNNs are generally more effective than real-valued DNNs for monaural speech enhancement. This work is devoted to presenting a critical assessment by systematically examining complex-valued DNNs against their real-valued counterparts. Specifically, we investigate complex-valued DNN atomic units, including linear layers, convolutional layers, long short-term memory (LSTM), and gated linear units. By comparing complex- and real-valued versions of fundamental building blocks in the recently developed gated convolutional recurrent network (GCRN), we show how different mechanisms for basic blocks affect the performance. We also find that the use of complex-valued operations hinders the model capacity when the model size is small. In addition, we examine two recent complex-valued DNNs, i.e. deep complex convolutional recurrent network (DCCRN) and deep complex U-Net (DCUNET). Evaluation results show that both DNNs produce identical performance to their real-valued counterparts while requiring much more computation. Based on these comprehensive comparisons, we conclude that complex-valued DNNs do not provide a performance gain over their real-valued counterparts for monaural speech enhancement, and thus are less desirable due to their higher computational costs.
Haibin Wu†, Ke Tan, Buye Xu, Anurag Kumar, Daniel Wong
Meta Reality Labs Research, USA
Keywords: Monaural speech enhancement, complex-valued neural networks, computational cost, deep learning
Footnote †: This work was done while H. Wu was a research scientist intern at Meta.
## 1 Introduction
Recent years have witnessed promising performance improvements of monaural speech enhancement models in the complex domain, given the importance of phase for speech quality [1, 2, 3, 4, 5, 6, 7, 8, 9]. A recent study [10] develops the key atomic components for complex-valued DNNs and claims that complex-valued parameters have various merits from computational, biological, and signal processing perspectives. Complex-valued DNNs, which operate with complex-valued arithmetic, seem advantageous for complex-domain speech enhancement, where DNNs are trained to learn complex spectrograms. Motivated by such an intuition, multiple efforts [3, 11, 12, 13, 14, 15, 16] adopted complex-valued DNNs for monaural speech enhancement. However, to the best of our knowledge, none of these studies has justified a performance gain provided by complex-valued DNNs over their real-valued counterparts with the same network structure and model size. Drude et al. [17] compared real- and complex-valued DNNs with fully-connected layers for beamforming, and found that the complex-valued DNN does not yield superior performance to the real-valued DNN while being more computationally expensive. For monaural speech enhancement, despite the promising performance improvement produced by recent complex-valued DNNs, it remains unclear whether it is the complex-valued nature that fundamentally brings the merits.
A recent notable model named DCCRN [7] extends the convolutional recurrent network in [18] by replacing convolutional and LSTM layers with their complex-valued counterparts to estimate the ideal complex ratio mask. The DCCRN exhibits competitive performance over earlier works, which has drawn the community's attention to the efficacy of complex-valued DNNs for speech enhancement. However, we believe that it is premature to attribute the performance improvement to the use of complex-valued operations, due to the lack of systematic comparisons between DCCRN and its real-valued counterpart, in which only the complex-valued layers are replaced by the corresponding real-valued layers while all other configurations remain unaltered, including input features, training targets, training objectives, network structure and model size. Without such apples-to-apples comparisons, it is difficult to justify the attribution of the improvement achieved by complex-valued DNNs.
This study presents a critical assessment by systematically examining complex-valued DNNs against their real-valued counterparts through comprehensive comparisons:
1. Based on the principles of complex-valued computation [10], we formulate complex-valued DNN atomic units for investigation, including linear layers, convolutional/deconvolutional layers, LSTM, and gated linear units. We compare their performance with that of their real-valued counterparts on monaural speech enhancement.
2. To comprehensively investigate complex-valued operations in different types of layer topology, we adopt GCRN - a real-valued DNN originally developed for complex-domain speech enhancement, which integrates a variety of layer types. We enumerate all the different versions of the fundamental building blocks of GCRN, and show how different computing mechanisms in the basic blocks affect the performance. We observe that the models with complex-valued components do not outperform their real-valued counterparts. In addition, given the fact that many real-world applications require a computationally efficient model, we conduct the same comparisons in a setting where the model size is very small. We find that, in such a setting, complex-valued operations even hinder speech enhancement performance compared to real-valued operations.
3. Two recent compelling models based on complex-valued operations, DCCRN [7] and DCUNET [3], have shown promising performance for monaural speech enhancement. In this work, we evaluate their real-valued versions with the same parameter count, and conduct investigations with different loss functions, learning rates and minibatch sizes, in terms of both enhancement performance and training stability. The experimental results reveal that the complex-valued versions do not outperform their real-valued counterparts while having higher computational costs. This is consistent with the observation in [19].
## 2 Methodology
This section introduces the basic building blocks for complex-valued DNNs, followed by the case study design.
### Building blocks
#### 2.1.1 Linearity
Fully connected layers, convolutional layers and deconvolutional layers are composed of matrix multiplications. We omit the bias to simplify the description. Taking the input complex-valued feature matrix as \(X=X_{r}+jX_{i}\) and the complex-valued parameter matrix as \(W=W_{r}+jW_{i}\), the complex-valued output can be expressed as:
\[Y=(X_{r}W_{r}-X_{i}W_{i})+j(X_{r}W_{i}+X_{i}W_{r}), \tag{1}\]
where \(Y\) denotes the output feature of the complex-valued layer, the subscripts \(r\) and \(i\) denote real and imaginary parts respectively.
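As an illustration, Eq. (1) can be realized with two real-valued weight matrices. The following PyTorch-style sketch (class and variable names are ours, not the paper's implementation) makes the four real multiplications explicit:

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Minimal complex-valued linear layer implementing Eq. (1) with two
    real-valued weight matrices (bias omitted, as in the text)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.W_r = nn.Linear(in_features, out_features, bias=False)  # real part W_r
        self.W_i = nn.Linear(in_features, out_features, bias=False)  # imaginary part W_i

    def forward(self, x_r: torch.Tensor, x_i: torch.Tensor):
        # Y = (X_r W_r - X_i W_i) + j (X_r W_i + X_i W_r)
        y_r = self.W_r(x_r) - self.W_i(x_i)
        y_i = self.W_i(x_r) + self.W_r(x_i)
        return y_r, y_i
```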
#### 2.1.2 Activation function
Given a complex-valued representation \(z\), the activation function operates on the real and imaginary part independently as:
\[a=f(Re\ z)+jf(Im\ z), \tag{2}\]
where \(a\) is the output representation, \(Re\) and \(Im\) extract the real and imaginary parts respectively, and \(f\) denotes the activation function.
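A corresponding sketch of Eq. (2), where any real-valued activation is applied to the two parts independently (the helper name is ours):

```python
import torch

def complex_activation(z_r: torch.Tensor, z_i: torch.Tensor, f=torch.relu):
    """Eq. (2): apply a real-valued activation independently to the
    real and imaginary parts of a complex representation."""
    return f(z_r), f(z_i)
```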
#### 2.1.3 Lstm
For LSTM layers, we have two versions:
**Quasi complex-valued LSTM** In [7], the complex LSTM operation is treated as two separate operations on the real and imaginary parts. To be specific, they initialize two real-valued sub-LSTM layers, namely LSTM\({}_{r}\) and LSTM\({}_{i}\), corresponding to the real and imaginary LSTM respectively. Given the input feature \(X=X_{r}+jX_{i}\), the output feature can be derived as:
\[F_{rr}=\text{LSTM}_{r}(X_{r}),F_{ir}=\text{LSTM}_{r}(X_{i}), \tag{3}\] \[F_{ri}=\text{LSTM}_{i}(X_{r}),F_{ii}=\text{LSTM}_{i}(X_{i}),\] \[F_{out}=(F_{rr}-F_{ii})+j(F_{ri}+F_{ir}),\]
where \(F_{out}\) is the output feature.
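The quasi complex-valued LSTM of Eq. (3) can be sketched with two standard real-valued LSTMs, as below (a minimal PyTorch illustration with our own naming, not the DCCRN code):

```python
import torch
import torch.nn as nn

class QuasiComplexLSTM(nn.Module):
    """Sketch of the quasi complex-valued LSTM of Eq. (3): two real-valued
    sub-LSTMs combined following the complex multiplication rule."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.lstm_r = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.lstm_i = nn.LSTM(input_size, hidden_size, batch_first=True)

    def forward(self, x_r: torch.Tensor, x_i: torch.Tensor):
        f_rr, _ = self.lstm_r(x_r)  # LSTM_r(X_r)
        f_ir, _ = self.lstm_r(x_i)  # LSTM_r(X_i)
        f_ri, _ = self.lstm_i(x_r)  # LSTM_i(X_r)
        f_ii, _ = self.lstm_i(x_i)  # LSTM_i(X_i)
        return f_rr - f_ii, f_ri + f_ir  # F_out = (F_rr - F_ii) + j(F_ri + F_ir)
```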
**Fully complex-valued LSTM** In addition to the quasi complex-valued LSTM, which does not perform complex-valued operations within the sub-LSTM layers, we also investigate a fully complex-valued LSTM, which strictly follows the arithmetic of complex numbers. Each matrix multiplication and activation function in this LSTM follows the arithmetic in Sections 2.1.1 and 2.1.2.
#### 2.1.4 Gated linear unit
Gated linear unit [20] is a widely used layer topology, which consists of two separate convolutional layers and one gating operation. The two separate convolutional layers process the same input, and produce their outputs \(F^{(1)}\) and \(F^{(2)}\), respectively. A sigmoid function is applied to \(F^{(2)}\) to derive a gate, which is then element-wisely multiplied with \(F^{(1)}\) to yield the output of the gated linear unit. In a complex-valued gated linear unit, let \(F^{(1)}=F^{(1)}_{r}+jF^{(1)}_{i}\) and \(F^{(2)}=F^{(2)}_{r}+jF^{(2)}_{i}\) be the outputs of the two convolutional layers. We derive two gating mechanisms, i.e. separate gating and magnitude gating.
**Separate gating** For separate gating, we apply a sigmoid function to the real and imaginary parts of \(F^{(2)}\) separately, which amounts to a complex-valued gate. The real and imaginary parts of this gate are element-wisely multiplied with \(F^{(1)}_{r}\) and \(F^{(1)}_{i}\), respectively.
**Magnitude gating** Unlike separate gating, magnitude gating calculates a real-valued gate \(F^{(g)}\) from the magnitude of the complex feature map \(F^{(2)}\):
\[F^{(g)}=(\sigma(|F^{(2)}|)-0.5)\times 2, \tag{4}\]
where \(\sigma\) denotes the sigmoid function, and \(|\cdot|\) extracts the magnitude of a complex feature map. Since the magnitude is nonnegative, applying the sigmoid function to the magnitude always results in values ranging from 0.5 to 1. Hence we use an affine transformation to normalize the gating value to the range of 0 to 1. The resulting gate is applied to both real and imaginary parts of \(F^{(1)}\). Such magnitude gating preserves the phase of \(F^{(1)}\)[21].
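A possible realization of the magnitude gating in Eq. (4) is sketched below; the small eps term is our addition for numerical stability of the square root, and all names are ours:

```python
import torch

def magnitude_gate(f1_r, f1_i, f2_r, f2_i, eps: float = 1e-8):
    """Sketch of Eq. (4): a real-valued gate computed from |F^(2)|, rescaled
    from [0.5, 1] to [0, 1] and applied to both parts of F^(1), which
    preserves the phase of F^(1)."""
    mag = torch.sqrt(f2_r ** 2 + f2_i ** 2 + eps)   # |F^(2)|
    gate = (torch.sigmoid(mag) - 0.5) * 2.0         # affine map to [0, 1]
    return f1_r * gate, f1_i * gate
```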
### Case study design
In this section, we carefully design the case studies, and elaborate the rationales and objectives of each case study. In these case studies, all pairs of real- and complex-valued models for comparison have the same configurations, including input features, training targets, training objectives, network structure and model size.
**Basic Unit** This case study compares different complex layers defined in Section 2.1 with their real-valued counterparts, in terms of enhancement performance and computational costs. Specifically, we compare: 1) a model with a stack of three complex-valued linear layers and its corresponding real-valued model, where each of the two hidden layers has 406 units in the complex-valued model and 512 units in the real-valued model, respectively. Such a configuration ensures that the two models have almost the same number of parameters. Note that each hidden layer is followed by a rectified linear unit function; 2) quasi complex-valued LSTM, fully complex-valued LSTM, and real-valued LSTM, each of which contains three LSTM layers followed by a linear output layer. In these three models, each LSTM layer contains 732, 732 and 1024 units, respectively. The implementations described in Section 2.1.3 are adopted for quasi complex-valued LSTM and fully complex-valued LSTM; 3) DCUNET, a convolutional encoder-decoder model developed in [3], and its real-valued counterpart (RUNET), in which all complex-valued convolutional, deconvolutional and linear layers are replaced by their real-valued counterparts. Akin to 1) and 2), we slightly adjust hyperparameters (e.g. number of out channels in convolutional layers) for RUNET, such that its model size is almost the same as DCUNET. Note that all these models are trained to learn complex spectral mapping.
**GCRN** GCRN [5] is a representative model for our investigation, because it consists of different types of layers, including convolutional/deconvolutional layers, gated linear units, LSTM layers, and linear layers. The original GCRN has two decoders, one for real part estimation and the other for imaginary part estimation. We instead use a single shared decoder for both real and imaginary parts, corresponding to two output channels in the last deconvolutional layer of the decoder. Such an architecture can be naturally converted into complex-valued versions for comparison by replacing each layer with its complex-valued counterpart. In this case study, we aim to investigate: 1) whether replacing specific layers of GCRN with their complex-valued counterparts can lead to better performance; 2) how the use of complex-valued operations affects speech enhancement performance when the model is constrained to a relatively small number of parameters; 3) which gating mechanism in
Section 2.1.4 is the better choice, from both training stability and enhancement performance aspects. Note that regarding the bottleneck LSTM in GCRN, we adopt the quasi complex-valued LSTM for investigation.
**DCCRN** In [7], the performance gain achieved by DCCRN is attributed by the authors to the complex multiplication constraint, which they believe can help DNNs learn complex representations more effectively. However, they did not compare DCCRN with its real-valued counterpart using the same configurations. It is thus difficult to tell whether the performance improvement is due to the use of complex-valued operations or to other components of the model design. The objective of this case study is to show whether DCCRN can outperform its real-valued counterpart with the same number of parameters. Specifically, we adopt the "DCCRN-E" configuration, which achieves the best performance in [7]. To derive the corresponding real-valued version, we simply replace the complex-valued layers with their real-valued counterparts, and reduce the channel numbers in the encoder to [32, 64, 64, 64, 128, 256] to maintain the number of parameters.
## 3 Experiments
### Experimental setup
In our experiments, the Interspeech2020 DNS Challenge training speech dataset [22] is used to create our training, validation and test sets, which contains roughly 65000 speech signals uttered by 1948 speakers in total. We randomly split these speakers into three distinct sets for training, validation and test sets, which include 1753 (\(\sim\)90%), 97 (\(\sim\)5%) and 98 (\(\sim\)5%) speakers, respectively. Similarly, we partition the DNS Challenge noise dataset with around 65000 signals into 90%, 5% and 5% for training, validation and test sets, respectively. By randomly pairing speech and noise signals, we create a training set with 500000 noisy mixtures and a validation set with 1000 noisy mixtures, in both of which the signal-to-noise ratio (SNR) is randomly sampled between -5 and 5 dB. Following the same procedure, three test sets are created at different SNR levels, i.e. -5, 0 and 5 dB. Note that all speech and noise signals are randomly truncated to 10 seconds before mixing. We additionally use the synthetic test set released by DNS Challenge for evaluation.
All signals are sampled at 16 kHz. Short-time Fourier transform is performed to obtain spectrograms. We adopt the Adam optimizer to train all models. Multiple metrics are employed to measure the speech enhancement performance, including wide-band perceptual evaluation speech quality (WB-PESQ) [23], short-time objective intelligibility (STOI) [24], scale-invariant signal-to-distortion ratio (SI-SDR) [25], DNSMOS P. 835 [26] and NORESQA-MOS [27].
### Experimental results
**Basic Unit** In Table 1, 1). columns (1a), (1b), (1c) denote the fully complex-valued LSTM, quasi complex-valued LSTM and real-valued LSTM. 2). Real-valued LSTM has half as many MACs as both complex-valued LSTMs. Among the three models, the quasi complex-valued LSTM achieves the best performance, while its improvement over the real-valued LSTM is marginal. Columns (1d)
\begin{table}
\begin{tabular}{l c|c|c c c|c c|c c} \hline \hline & SNR & Noisy & (1a),C-LSTM & (1b),Quasi C-LSTM & (1c),LSTM & (1d),C-Linear & (1e),R-Linear & (1f),DCUNET & (1g),RUNET \\ \hline \multirow{3}{*}{STOI} & -5 dB & 0.69 & 0.85 & 0.86 & 0.86 & 0.61 & 0.61 & 0.85 & 0.85 \\ & 0 dB & 0.78 & 0.90 & 0.91 & 0.91 & 0.70 & 0.70 & 0.90 & 0.90 \\ & 5 dB & 0.85 & 0.94 & 0.94 & 0.94 & 0.76 & 0.76 & 0.94 & 0.94 \\ \hline \multirow{3}{*}{WB-PESQ} & -5 dB & 1.11 & 1.65 & 1.71 & 1.69 & 1.12 & 1.12 & 1.64 & 1.70 \\ & 0 dB & 1.15 & 1.95 & 2.02 & 2.00 & 1.17 & 1.28 & 1.92 & 2.00 \\ & 5 dB & 1.24 & 2.29 & 2.35 & 2.34 & 1.24 & 1.25 & 2.27 & 2.36 \\ \hline \multirow{3}{*}{SI-SDR (dB)} & -5 dB & -5.00 & 10.80 & 11.10 & 10.87 & 0.92 & 1.23 & 10.80 & 10.87 \\ & 0 dB & 0.05 & 13.62 & 13.92 & 13.78 & 4.69 & 4.94 & 13.79 & 13.86 \\ & 5 dB & 5.01 & 16.36 & 16.64 & 16.55 & 7.19 & 7.57 & 16.74 & 16.84 \\ \hline \(\#\) Para & - & - & 23.35 M & 23.35 M & 23.62 M & 0.59 M & 0.59 M & 3.10 M & 3.12 M \\ \hline \(\#\) MACs & - & - & 5.90 G & 5.90 G & 2.98 G & 119.59 M & 59.88 M & 56.69 G & 19.87 G \\ \hline \hline \end{tabular}
\end{table}
Table 1: Investigation of different basic units, where the number of multiply-accumulate (MAC) operations is measured on a 1-second signal.
\begin{table}
\begin{tabular}{l l|c c c|c c|c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{**STOI**} & \multicolumn{2}{c|}{**NB-PESQ**} & \multicolumn{2}{c|}{**SI-SDR (dB)**} & \multicolumn{2}{c|}{**\# Para**} & \multicolumn{2}{c}{**\# MACs**} \\ & & -5 dB & dB & 5 dB & 0 dB & 5 dB & -5 dB & 0 dB & 5 dB & 0 dB & 5 dB \\ \hline \multirow{3}{*}{(2a)} & Noisy & 0.69 & 0.78 & 0.85 & 1.11 & 1.15 & 1.24 & -5 & 0.01 & 5.01 & - & - \\ \hline (2a) & GCRN (real-valued model) & 0.84 & 0.90 & 0.94 & 1.57 & 1.87 & 2.24 & 8.30 & 11.29 & 14.13 & 9.25 M & 1.72 G \\ (2b) & GCRN + \(\bigtriangleup\) & 0.83 & 0.90 & 0.94 & 1.55 & 1.85 & 2.22 & 11.17 & 13.98 & 9.25 M & 2.57 G \\ (2c) & GCRN + \(\bigloup\) - Separate & 0.83 & 0.90 & 0.93 & 1.53 & 1.80 & 2.15 & 7.64 & 10.51 & 13.22 & 9.12 M & 1.72 G \\ (2d) & GCRN + \(\bigtriangleup\) - Magnitude & 0.83 & 0.90 & 0.94 & 1.56 & 1.85 & 2.23 & 7.66 & 10.63 & 13.43 & 9.12 M & 1.72 G \\ (2e) & GCRN + \(\bigtriangleup\) - Separate & 0.83 & 0.90 & 0.93 & 1.52 & 1.81 & 2.16 & 8.14 & 11.16 & 14.02 & 9.12 M & 2.57 G \\ (2f) & GCRN + \(\bigtriangleup\) - Magnitude & 0.84 & 0.90 & 0.94 & 1.56 & 1.87 & 2.24 & 7.89 & 10.89 & 13.76 & 9.12 M & 2.57 G \\ (2g) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) & 0.83 & 0.93 & 1.53 & 1.83 & 2.20 & 7.95 & 10.96 & 13.88 & 8.83 M & 1.72 G \\ (2h) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Magnitude & 0.83 & 0.90 & 0.94 & 1.54 & 1.85 & 2.23 & 7.67 & 10.75 & 13.79 & 8.83 M & 1.72 G \\ (2i) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Separate & 0.82 & 0.89 & 0.93 & 1.52 & 1.80 & 2.15 & 7.62 & 10.70 & 13.52 & 8.83 M & 2.57 G \\ (2j) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Magnitude & 0.83 & 0.90 & 0.94 & 1.57 & 1.88 & 2.27 & 7.65 & 10.87 & 13.82 & 8.83 M & 2.57 G \\ (2A) & GCRN \(\bigtriangledown\) (real-valued model) & 0.83 & 0.89 & 0.93 & 1.50 & 1.79 & 2.16 & 7.28 & 10.33 & 13.40 & 9.25 M & 1.72 G \\ (2j) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Magnitude \(\bigtriangleup\) & 0.82 & 0.89 & 0.93 & 1.47 & 1.74 & 2.10 & 7.25 & 10.33 & 13.46 & 8.83 M & 2.57 G \\ \hline \hline \end{tabular}
\end{table}
Table 2: Investigation of different complex-valued components in GCRN. \(\bigtriangleup\), \(\bigtriangledown\) denote using the quasi complex-valued LSTM in the bottleneck, complex-valued convolutional layers, complex-valued deconvolutional layers, respectively. “- Separate” and “- Magnitude” denote using separate and magnitude gating mechanisms in GLUs, respectively, and \(\odot\) denotes the model performs complex ratio masking rather than complex spectral mapping originally used in [5].
and (1e) denote the complex- and real-valued DNNs consisting of linear layers. Although the real-valued DNN has only half the number of MACs of the complex-valued DNN, it still produces slightly better performance than the latter. 3). (1f) and (1g) denote DCUNET and its corresponding real-valued version, respectively. We see that the real-valued UNET outperforms DCUNET in terms of both enhancement performance and computational efficiency.
**GCRN** In Table 2, (2a) is the original real-valued GCRN. (2b)-(2j) are the models in which some components are replaced by the corresponding complex-valued versions. Moreover, (2A) and (2J) have the same model structures as (2a) and (2j), but are trained to perform complex ratio masking rather than complex spectral mapping. In Table 3, we reduce the model size to roughly 2 M and 0.6 M parameters, where "CGCRN" denotes the same configuration as (2j). We can observe: 1). Replacing the components of GCRN with their complex-valued versions does not yield any performance gain, as shown in (2a)-(2j). 2). In the comparison between the models trained for complex ratio masking, i.e. (2A) and (2J), the real-valued model performs slightly better than the complex-valued model. 3). Although the magnitude gating and separate gating lead to similar performance, the training loss curve of the former is much more stable than that of the latter. This is likely because the magnitude gating preserves phase information, which could help stabilize the training. 4). In the small-model setting, the real-valued models consistently outperform their complex-valued counterparts. Furthermore, the performance gap increases as the model size becomes smaller.
**DCCRN** Tables 4 and 5 compare DCCRN with its real-valued counterpart on our simulated test set and the DNS Challenge synthetic test set, respectively. The following observations are obtained: 1). With three different training objectives, i.e. SI-SDR, L\({}_{1}\) and MSE, the real- and complex-valued models yield almost identical performance in all the metrics on both datasets. Take, for example, the -5 dB case with the SI-SDR training loss in Table 4. The STOI, WB-PESQ and SI-SDR improvements over noisy mixtures are 0.18, 0.67 and 16.01 dB for the complex-valued model, and 0.18, 0.69 and 16.06 dB for the real-valued model, respectively. 2). As shown in Table 5, the real-valued model yields slightly better scores than the complex-valued model in both DNSMOS and NORESQA-MOS, i.e. two metrics that highly correlate with subjective quality scores. 3). We have also made comparisons under settings with different learning rates and minibatch sizes. We find that DCCRN is less robust than its real-valued counterpart against different learning rates. In addition, both models produce very similar performance with different minibatch sizes. However, we do not show these comparison results due to the page limit. 4). The real-valued model has only one-third of the MAC count of the complex-valued model. Specifically, the number of MACs is 14.38 G for the complex-valued model, while it is only 4.84 G for the real-valued model. Given that the two models yield almost the same performance, the complex-valued model is less efficient for real-world applications.
## 4 Concluding Remarks
Through the extensive experiments, we draw the following conclusions for monaural speech enhancement: 1). Complex-valued DNNs yield similar performance to their real-valued counterparts with the same number of parameters. 2). When the model size is relatively small, the use of complex-valued operations is detrimental to the enhancement performance. 3). The performance gain achieved by DCCRN and DCUNET cannot be attributed to the use of complex-valued operations. Furthermore, complex-valued DNNs require more MACs than their real-valued counterparts, without any performance gain.
A complex-number multiplication can be broken into four real-number multiplications. Based on our systematic comparisons, we believe that real-valued DNNs have the capacity to achieve performance comparable to their complex-valued counterparts with the same model size and network structure. Although complex-valued DNNs intuitively seem a more natural choice than real-valued DNNs for processing complex spectrograms, they are more computationally expensive and thus an inferior choice for real applications that are efficiency-sensitive. We believe that there is no sufficient evidence justifying the superiority of complex-valued DNNs over real-valued DNNs for monaural speech enhancement. This study suggests that the efficacy of complex-valued operations in speech enhancement systems needs to be carefully rethought.
\begin{table}
\begin{tabular}{l c|c|c c|c c|c c} \hline \hline & \multicolumn{1}{c|}{Noisy} & \multicolumn{1}{c|}{DCCRN-SISDR} & \multicolumn{1}{c|}{DCCRN-Real-SISDR} & \multicolumn{1}{c|}{DCCRN-L\({}_{1}\)} & \multicolumn{1}{c|}{DCCRN-Real-L\({}_{1}\)} & \multicolumn{1}{c}{DCCRN-MSE} & \multicolumn{1}{c}{DCCRN-Real-MSE} \\ \hline & -5 dB & 0.69 & 0.87 & 0.87 & 0.86 & 0.86 & 0.85 & 0.85 \\ STOI & 0 dB & 0.78 & 0.92 & 0.92 & 0.91 & 0.91 & 0.90 & 0.90 \\ & 5 dB & 0.85 & 0.95 & 0.95 & 0.95 & 0.95 & 0.94 & 0.94 \\ \hline & -5 dB & 1.11 & 1.78 & 1.80 & 1.73 & 1.69 & 1.55 & 1.56 \\ WB-PESQ & 0 dB & 1.15 & 2.13 & 2.14 & 2.05 & 2.00 & 1.83 & 1.86 \\ & 5 dB & 1.24 & 2.51 & 2.54 & 2.43 & 2.38 & 2.16 & 2.19 \\ \hline & -5 dB & -5.00 & 11.01 & 11.06 & 8.36 & 8.28 & 8.09 & 8.18 \\ SI-SDR (dB) & 0 dB & 0.05 & 14.00 & 14.06 & 11.25 & 11.19 & 11.20 & 11.27 \\ & 5 dB & 5.01 & 16.99 & 17.05 & 14.36 & 14.27 & 14.41 & 14.54 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparisons between real- and complex-valued versions of DCCRN with different training objectives. “Real” means the real-valued version of DCCRN. “-SISDR”, “-L\({}_{1}\)” and “-MSE” denote using SI-SDR, L\({}_{1}\) and mean squared error (MSE) losses for training, respectively, where both L\({}_{1}\) and MSE losses are computed on the clean and estimated real, imaginary and magnitude spectrograms.
\begin{table}
\begin{tabular}{l|c|c c c|c c|c c} \hline \hline & Noisy & \multicolumn{1}{c|}{DCCRN-SISDR} & \multicolumn{1}{c|}{DCCRN-Real-SISDR} & \multicolumn{1}{c|}{DCCRN-L\({}_{1}\)} & \multicolumn{1}{c|}{DCCRN-Real-L\({}_{1}\)} & \multicolumn{1}{c}{DCCRN-MSE} & \multicolumn{1}{c}{DCCRN-Real-MSE} \\ \hline STOI & 0.92 & 0.97 & 0.97 & 0.97 & 0.97 & 0.97 & 0.97 \\ WB-PESQ & 1.58 & 2.92 & 2.89 & 2.92 & 2.86 & 2.61 & 2.64 \\ SI-SDR (dB) & 9.23 & 19.60 & 19.54 & 17.11 & 17.13 & 17.33 & 17.55 \\ DNSMOS (OVRL) & 2.48 & 3.30 & 3.33 & 3.28 & 3.30 & 3.19 & 3.20 \\ NORESQA-MOS & 1.90 & 4.31 & 4.34 & 4.27 & 4.31 & 3.80 & 3.96 \\ \hline \(\#\) Para & - & 3.67 M & 3.64 M & 3.67 M & 3.64 M & 3.67 M & 3.64 M \\ \hline \(\#\) MACs & - & 14.38 G & 4.84 G & 14.38 G & 4.84 G & 14.38 G & 4.84 G \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparisons between real- and complex-valued versions of DCCRN with different training objectives on the DNS Challenge synthetic test set without reverberation. |
2306.13176 | Key Frame Extraction with Attention Based Deep Neural Networks | Automatic keyframe detection from videos is an exercise in selecting scenes
that can best summarize the content for long videos. Providing a summary of the
video is an important task to facilitate quick browsing and content
summarization. The resulting photos are used for automated works (e.g.
summarizing security footage, detecting different scenes used in music clips)
in different industries. In addition, processing high-volume videos in advanced
machine learning methods also creates resource costs. Keyframes obtained; It
can be used as an input feature to the methods and models to be used. In this
study; We propose a deep learning-based approach for keyframe detection using a
deep auto-encoder model with an attention layer. The proposed method first
extracts the features from the video frames using the encoder part of the
autoencoder and applies segmentation using the k-means clustering algorithm to
group these features and similar frames together. Then, keyframes are selected
from each cluster by selecting the frames closest to the center of the
clusters. The method was evaluated on the TVSUM video dataset and achieved a
classification accuracy of 0.77, indicating a higher success rate than many
existing methods. The proposed method offers a promising solution for key frame
extraction in video analysis and can be applied to various applications such as
video summarization and video retrieval. | Samed Arslan, Senem Tanberk | 2023-06-21T15:09:37Z | http://arxiv.org/abs/2306.13176v1 | # Dikkat Katmanli Derin Simir Aglari ile Anahtar Kare Tespiti
###### Abstract
Automatic keyframe detection from videos is an exercise in selecting the scenes that can best summarize the content of long videos. Providing a summary of the video is an important task to facilitate quick browsing and content summarization. The resulting frames are used for automated tasks (e.g. summarizing security footage, detecting the different scenes used in music clips) in different industries. In addition, processing high-volume videos with advanced machine learning methods also incurs resource costs. The obtained keyframes can be used as input features for the methods and models to be employed. In this study, we propose a deep learning-based approach for keyframe detection using a deep autoencoder model with an attention layer. The proposed method first extracts features from the video frames using the encoder part of the autoencoder and applies segmentation using the k-means clustering algorithm to group these features and similar frames together. Then, keyframes are selected from each cluster by choosing the frames closest to the cluster centers. The method was evaluated on the TVSum video dataset and achieved a classification accuracy of 0.77, indicating a higher success rate than many existing methods. The proposed method offers a promising solution for keyframe extraction in video analysis and can be applied to various applications such as video summarization and video retrieval.
_Keywords -- Automatic Key Frame Extraction, Video Summarization, Deep learning, Learning With Attention Layers, Autoencoders._
## I Introduction

Video analysis has become an increasingly important research area in recent years, with applications in a wide range of fields such as surveillance, entertainment, and education. Keyframe extraction is a crucial step in video analysis, as it helps summarize the content of a video by selecting the most informative frames that capture important events or actions [1]. In other words, the main goal of video keyframe extraction is to detect the video frames that best represent the video content.

The representational properties expected of the detected summary frames may vary across application domains. The most common usage scenario, however, is the detection of the different scenes and environmental elements that appear over the course of a video, or the identification of frames in which objects moving at a speed different from the overall motion can be observed.

Traditional keyframe extraction methods generally rely on heuristics or hand-crafted rules, which may depend heavily on the specific video content and may not generalize to other videos. In this study, we present a new approach to keyframe extraction using a deep autoencoder with an attention layer. The model can employ autoencoders both for keyframe extraction and for analyzing the structure of the learned features. To facilitate reproducibility and comparison with other methods, it is important to provide detailed descriptions of the architecture, training procedure, and evaluation metrics used for the autoencoders.
### Attention Layers in Deep Learning

Attention layers are a type of mechanism that can be added to autoencoders to improve their performance on tasks such as image and video generation, text summarization, and machine translation [10].

Attention layers allow the model to selectively focus on different parts of the input data according to their relevance to the output. This is done by assigning a weight to each input element based on its importance for the output, and using these weights to compute a weighted sum of the input elements [11].

In autoencoders, attention layers can be used to improve the quality of the reconstructed output by selecting and emphasizing the important features of the input. In image generation, for example, an attention layer can be used to selectively focus on different regions of the input image depending on the desired output. In text summarization, an attention layer can be used to selectively focus on the important sentences or keywords of the input document depending on the length and content of the summary [12][13].

There are different types of attention layers that can be used in autoencoders, such as global attention, local attention, and self-attention. Global attention computes a single set of weights for the entire input sequence, whereas local attention computes different sets of weights for different parts of the input sequence. Self-attention, also known as multi-head attention, allows the model to capture different aspects of the input data by computing multiple sets of weights in parallel.

In academic work, attention layers in autoencoders can be used to improve a model's performance on various tasks such as image and video generation, text summarization, and machine translation. Researchers can explore different types of attention mechanisms and evaluate their effectiveness using metrics such as accuracy, perplexity, or F1 score [14].
### Clustering Methods

Clustering algorithms are a class of unsupervised machine learning algorithms used to group data points according to their similarities. The goal of clustering is to find natural groupings or clusters in the data without any prior knowledge about the group labels [15].

There are several types of clustering algorithms, including hierarchical clustering, partition-based clustering, density-based clustering, and model-based clustering. Each of these algorithms uses a different approach to group the data points.

In hierarchical clustering, data points are iteratively merged into clusters according to their distances or similarities. This results in a tree-like structure called a dendrogram, which can be cut at a certain height to obtain the desired number of clusters [16].

Partition-based clustering algorithms such as K-means divide the data points into a fixed number of clusters based on the distances between the points and a set of cluster centers. The centers are updated iteratively until convergence, resulting in a partitioning of the data into clusters [17].

Density-based clustering algorithms such as DBSCAN group data points that lie in dense regions of the data space, separating them from regions of lower density. This results in clusters of varying shapes and sizes, depending on the density of the data.

Model-based clustering algorithms such as Gaussian mixture models (GMMs) assume that the data points are generated from a mixture of probability distributions, and use statistical methods to estimate the parameters of the distributions and the number of clusters in the data.

Clustering algorithms can be used to analyze data in a variety of fields, such as image analysis, text mining, and social network analysis. Researchers can explore different types of clustering algorithms and evaluate their effectiveness using metrics such as the silhouette coefficient, purity, or entropy. It is important to provide a clear definition of the clustering algorithm used, including the input data, the distance or similarity measure, and any hyperparameters or tuning methods used to optimize the algorithm's performance [18][19].
### Distance Metrics

Distance metrics are mathematical functions used to measure the similarity or dissimilarity between two data points. In machine learning, distance metrics are typically used to measure the distance between feature vectors or data points, and are commonly employed in clustering, classification, and regression tasks [20][21].

There are various distance metrics, including the Euclidean distance, the Manhattan distance, the cosine distance, and the Mahalanobis distance. Each of these metrics uses a different formula to compute the distance between two data points.

The Euclidean distance is perhaps the best-known distance metric; it is computed as the square root of the sum of the squared differences between the corresponding feature values of two data points. The Manhattan distance, also known as the taxicab distance, is computed as the sum of the absolute differences between the corresponding feature values [22].

The cosine distance measures the cosine of the angle between two vectors and is often used in natural language processing tasks to measure the similarity between two documents or pieces of text [23]. The Mahalanobis distance takes into account the covariance of the feature values in the data and is often used when the features are highly correlated.

Distance metrics are often used to evaluate the performance of machine learning models and to compare the performance of different models or algorithms. Researchers can explore different types of distance metrics and evaluate their effectiveness using measures such as accuracy, F1 score, or the area under the ROC curve. It is important to provide a clear definition of the distance metric used, including its mathematical formula and any assumptions or limitations of the metric [24].
### Dataset and Content

TVSum (TV Summary) is a publicly available dataset for video summarization research. It is widely used in academic papers to evaluate video summarization algorithms. Video summarization is the task of automatically generating a concise and representative summary of a long video by selecting the keyframes or shots that capture its most important content [25].

The TVSum dataset contains video clips from 50 popular TV shows spanning various genres, including news, talk shows, sports, documentaries, and entertainment shows. The dataset offers a variety of footage, making it suitable for evaluating video summarization algorithms in different contexts. Each video clip in TVSum is associated with human-generated summaries that serve as the ground truth for evaluation. The TVSum dataset comprises the following components.

**Videos:** The dataset contains video clips in MP4 format, with durations ranging from a few minutes to several hours. The videos cover a wide range of topics and genres, providing diverse content for video summarization research.

**Human-generated summaries:** Each video in TVSum is associated with multiple human-generated summaries, which are used as the ground truth to assess the quality of the video summaries produced by algorithms. These summaries provide concise and representative descriptions of the main content of the videos.

**Video metadata:** TVSum also provides metadata for each video, including information such as the show name, the episode title, the broadcast date, and the video duration. This metadata can be used for contextual analysis or to filter the videos according to specific criteria.

The TVSum dataset is widely used in academic research to evaluate and benchmark video summarization algorithms. It offers standardized and diverse videos with human-generated summaries, allowing researchers to compare the performance of different algorithms using common evaluation measures such as F1 score, ROUGE score, or video recall.
### Test Environment and Proposed Method

#### D.1.1. Model

An autoencoder is used in this study: a type of neural network trained to reconstruct its input data, typically by compressing it into a lower-dimensional latent space and then decoding it back to its original form.

The model architecture consists of an encoder and a decoder. The encoder takes input data of shape (3, 64, 64), flattens it, and then passes it through several dense layers with ReLU activation functions. The output of the encoder is a vector whose size equals the latent dimension (64). The decoder then takes this compressed representation and maps it back to the original input shape using dense layers with ReLU activation functions.

In addition, an attention layer is added to the encoder part of the model. The attention layer takes the encoder output, computes attention scores for each step, and then applies these scores to the encoder output to produce an attention-weighted representation. The resulting attention-weighted representation is then passed through global average pooling to obtain a fixed-size output.

The model is compiled and validated using the Adam optimizer and the mean squared error loss function.
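Since the paper does not list the exact layer sizes, the following Keras sketch is only one plausible realization of the described model; the hidden widths (512/256), the 16x16 reshaping used for the step-wise attention scores, and all names are assumptions on our part:

```python
from math import prod

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_autoencoder(input_shape=(3, 64, 64), latent_dim=64):
    """Sketch of the described dense autoencoder with an attention layer
    in the encoder (hidden sizes and attention details are assumptions)."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Flatten()(inputs)                      # (3, 64, 64) -> 12288
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dense(256, activation="relu")(x)

    # Attention over pseudo-steps: view the 256 features as 16 steps of 16
    # channels, score each step, reweight, then pool to a fixed-size vector.
    seq = layers.Reshape((16, 16))(x)
    scores = layers.Softmax(axis=1)(layers.Dense(1)(seq))
    attended = layers.Multiply()([seq, scores])
    pooled = layers.GlobalAveragePooling1D()(attended)

    latent = layers.Dense(latent_dim, activation="relu", name="latent")(pooled)

    # Decoder: map the code back to the original input shape.
    y = layers.Dense(256, activation="relu")(latent)
    y = layers.Dense(512, activation="relu")(y)
    y = layers.Dense(prod(input_shape), activation="sigmoid")(y)
    outputs = layers.Reshape(input_shape)(y)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")       # Adam + MSE, as in the text
    return model
```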
#### D.1.2. Detection Layer

This step is an implementation of a keyframe extraction algorithm based on an autoencoder and clustering. The goal of the algorithm is to identify representative frames in a video sequence that capture the most important moments and events of the video.

First, the frames are preprocessed by converting them to the HSV color space, resizing them to 64x64 pixels, and normalizing the pixel values to the range of 0 to 1.

Next, the features extracted by the encoder are grouped into clusters using the KMeans clustering algorithm. In this implementation the number of clusters can be set to different values, and it can be tuned depending on the specific application.

Once clustering is complete, the algorithm identifies the keyframe of each cluster by selecting the frame whose features are closest to the cluster center. These keyframes are stored together with their indices in the original sequence and their feature representations.

Overall, this algorithm is a simple but effective way of extracting keyframes from a video sequence based on their similarity to the other frames in the sequence. The resulting keyframes can be used for various purposes, such as summarizing the video or providing a representative visual overview of its content. A sketch of this step is given below.
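The sketch assumes the (n_frames, feature_dim) feature array produced by the encoder and scikit-learn's KMeans; function and parameter names are ours:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_keyframes(features: np.ndarray, n_clusters: int = 10):
    """Cluster frame features with KMeans and keep, for each cluster,
    the frame closest to the cluster center."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    keyframe_idx = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        keyframe_idx.append(int(members[np.argmin(dists)]))
    return sorted(keyframe_idx)
```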
The number-of-clusters parameter of the clustering algorithm also determines the number of keyframes to be extracted. Since the algorithm depends on this count parameter, it implicitly assumes that every video contains the same number of distinct frames. For more static videos, where the number of genuinely distinct keyframes is smaller, near-duplicate frames will therefore be detected; conversely, when a larger number of detections is required, too few results will be produced. To mitigate these effects, the number of detected frames is deliberately set high and similar frames are then cleaned up. This operation uses a distance metric and a distance matrix: of two frames found to be close to each other, one is removed from the result list and the other is kept as the representative. The proximity threshold is chosen with statistical confidence intervals in mind; frames whose distance is more than two standard deviations below the mean are merged, as in (1).
\[\mu=\mathit{mean},\qquad\sigma=\mathit{standard\;deviation}\]

\[\mathit{distance}<\mu-2\sigma\;\rightarrow\;\mathit{merge} \tag{1}\]
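A sketch of this cleanup rule follows; SciPy is assumed for the pairwise distance matrix, the Euclidean distance is our choice (the text only requires some distance metric), and all names are ours:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def merge_similar(keyframe_idx, features):
    """Drop one frame of each pair whose distance is below mu - 2*sigma,
    keeping the other as the representative (Eq. (1))."""
    if len(keyframe_idx) < 2:
        return keyframe_idx
    dmat = squareform(pdist(features[keyframe_idx]))   # pairwise distance matrix
    upper = dmat[np.triu_indices_from(dmat, k=1)]
    threshold = upper.mean() - 2 * upper.std()         # mu - 2*sigma
    keep = list(range(len(keyframe_idx)))
    for i in range(len(keyframe_idx)):
        for j in range(i + 1, len(keyframe_idx)):
            if i in keep and j in keep and dmat[i, j] < threshold:
                keep.remove(j)                         # drop one of the close pair
    return [keyframe_idx[k] for k in keep]
```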
### Evaluation Methods

Since the dataset used for evaluation provides no keyframe labels, expert judgment is used to assess the output. The frames selected as keyframes are expected to capture every scene appearing in the video or, if there are no distinct scenes, to capture the large changes and movements in the image. To this end, ground-truth intervals for keyframes are defined for each video, and the test output is required to detect at least one frame within each interval. The resulting data is then evaluated with a confusion matrix over the correctly detected and missed frames.

A confusion matrix is a table used to evaluate the performance of a machine learning model. For a binary classification problem, it records the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).

In a binary classification problem, there are two possible classes: positive and negative. The confusion matrix compares the predicted class labels with the true class labels and provides a breakdown of the number of correct and incorrect predictions for each class.

The matrix is typically represented as a square with the true class labels on the vertical axis and the predicted class labels on the horizontal axis. The four values of the matrix represent the following:

True positives (TP): the number of instances that are actually positive and are correctly predicted as positive

False positives (FP): the number of instances that are actually negative but are incorrectly predicted as positive

False negatives (FN): the number of instances that are actually positive but are incorrectly predicted as negative

True negatives (TN): the number of instances that are actually negative and are correctly predicted as negative

The values in the matrix can be used to compute various metrics that are commonly used to evaluate the performance of a machine learning model, such as accuracy, precision, recall, and F1 score. These metrics provide insight into how well the model performs and can help identify areas for improvement.
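For reference, these four entries yield the usual metrics as follows (a hypothetical helper; denominators are assumed nonzero):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard metrics derived from the confusion matrix entries."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}
```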
## III Test Results

Precision indicates the model's ability to correctly classify positive instances. In this case, the precision for class 0 is 0.96, meaning that 96% of all instances the model assigned to this class are correct. However, since keyframe detection is by nature an imbalanced-class problem, evaluating the majority (normal) class is not meaningful; instead, the correctness of the detected frames must be examined. The precision for class 1 is 0.78, i.e. the model correctly identified 78% of the instances it classified as positive. The recall for class 1 is 0.75, i.e. the model correctly recovered 75% of all instances belonging to class 1. The F1 score is a metric that reflects the balance between precision and recall; in this case, the F1 score for class 1 is approximately 0.77.
## IV Conclusion

In this study, a keyframe extraction method using a deep autoencoder with an attention layer was proposed. The method achieved a success rate of 0.77 on the classification task, demonstrating the effectiveness of the proposed approach.

The method first extracts features from the video frames using the encoder part of the autoencoder, and then clusters these features using K-means. Keyframes are then selected from each cluster according to their proximity to the cluster center. The use of an attention layer in the encoder part of the autoencoder helps emphasize the most salient features in the video frames, which improves the accuracy of the keyframe extraction process.

Our experimental results show that the proposed method performs as well as existing keyframe extraction methods and has potential applications in various areas such as video summarization and action recognition. The success rate of 0.77 on the classification task indicates that the proposed method is accurate and effective. The examination of the simulation results of different models shows that the model proposed in this paper can deliver good performance.

Our method addresses the limitations of existing keyframe extraction approaches, which often rely on heuristics or on low-level features that cannot capture the semantic content of the video. By using a deep autoencoder with an attention layer, our method can extract high-level features that capture the most salient information in the video frames. It is also unsupervised, which makes it more scalable and applicable to a wide range of video analysis tasks.

In future work, we will explore ways to further improve the performance of our method by incorporating additional features such as audio and motion information. We also plan to investigate the effectiveness of our method on different types of videos, such as sports videos, news videos, and surveillance videos. The proposed method provides a promising solution for keyframe extraction in video analysis and can be applied to various applications such as video summarization and video retrieval.

In conclusion, the proposed keyframe extraction method using a deep autoencoder with an attention layer is a promising approach for video analysis, with potential applications in a wide variety of fields. We believe that this topic will remain an active research area and that further improvements can be made to increase the method's performance and accuracy.

References
|
2306.14619 | Verification of Neural Network Control Systems using Symbolic Zonotopes
and Polynotopes | Verification and safety assessment of neural network controlled systems
(NNCSs) is an emerging challenge. To provide guarantees, verification tools
must efficiently capture the interplay between the neural network and the
physical system within the control loop. In this paper, a compositional
approach focused on inclusion preserving long term symbolic dependency modeling
is proposed for the analysis of NNCSs. First of all, the matrix structure of
symbolic zonotopes is exploited to efficiently abstract the input/output
mapping of the loop elements through (inclusion preserving) affine symbolic
expressions, thus maintaining linear dependencies between interacting blocks.
Then, two further extensions are studied. Firstly, symbolic polynotopes are
used to abstract the loop elements behaviour by means of polynomial symbolic
expressions and dependencies. Secondly, an original input partitioning
algorithm takes advantage of symbol preservation to assess the sensitivity of
the computed approximation to some input directions. The approach is evaluated
via different numerical examples and benchmarks. A good trade-off between low
conservatism and computational efficiency is obtained. | Carlos Trapiello, Christophe Combastel, Ali Zolghadri | 2023-06-26T11:52:14Z | http://arxiv.org/abs/2306.14619v1 | # Verification of Neural Network Control Systems using Symbolic Zonotopes and Polynotopes
###### Abstract
Verification and safety assessment of neural network controlled systems (NNCSs) is an emerging challenge. To provide guarantees, verification tools must efficiently capture the interplay between the neural network and the physical system within the control loop. In this paper, a compositional approach focused on inclusion preserving long term symbolic dependency modeling is proposed for the analysis of NNCSs. First of all, the matrix structure of symbolic zonotopes is exploited to efficiently abstract the input/output mapping of the loop elements through (inclusion preserving) affine symbolic expressions, thus maintaining linear dependencies between interacting blocks. Then, two further extensions are studied. Firstly, symbolic polynotopes are used to abstract the loop elements behaviour by means of polynomial symbolic expressions and dependencies. Secondly, an original input partitioning algorithm takes advantage of symbol preservation to assess the sensitivity of the computed approximation to some input directions. The approach is evaluated via different numerical examples and benchmarks. A good trade-off between low conservatism and computational efficiency is obtained.
Reachability, neural networks, verification, symbolic zonotopes, polynotopes, nonlinear dynamics.
## I Introduction
The proliferation of data and access to ever-increasing computational power have fueled a renewed interest in deep neural networks (NNs). These networks have shown a significant ability to address classification/estimation/control tasks that can hardly be formalized and designed from knowledge-based models. However, despite their impressive ability to solve complex problems, it is well known that NNs can be vulnerable to small perturbations or adversarial attacks [1, 2]. This lack of robustness (or fragility) represents a major barrier for their application to safety-critical systems where safety assurances are of primary importance. For example, in Guidance, Navigation and Control of flight systems, one must ensure that some output/state trajectories remain inside a flight envelope when some inputs explore a given region. The above issues have fostered a large amount of work analyzing the sensitivity to local disturbances of NNs in isolation (open-loop), as well as the satisfaction of pre-/post- safety conditions [3]. Nevertheless, as reported in [4], reasoning about the safety verification of neural-network control systems (NNCSs), where the NN is used as a feedback controller, still remains a key challenge that requires tractable methods capable of efficiently integrating the heterogeneous components that make up the control loop.
This paper focuses on the reachability analysis of NNCSs which, in turn, allows for formal reasoning about the satisfaction of safety properties (reachability of a target set, or the avoidance of non-secure sets of states). A key challenge in NNCSs reachability analysis is to successfully retain the system-controller interplay by preserving (at each time instant) the dependencies between relevant variables. This, in fact, discourages a direct application of off-the-shelf verification tools which, although able to compute accurate output bounds for elements in isolation, return coarse approximations when iteratively concatenated for the analysis of closed-loop systems since most of (if not all) the I/O dependencies are quickly broken/lost during the computations [5, 6, 7]. Furthermore, effective NNCSs verification tools must be able to assess the system state during (relatively) large time intervals. The above issues motivate the development of computationally efficient analysis methods capable of capturing the interaction between the control loop elements while granting a good scalability both in the system dimensions and in the time horizon length.
Another relevant factor that should be taken into account is the size of the initial state set under study. Mainly, the performance of open- and closed-loop verification techniques that are based on (locally) abstracting the system non-linearities deteriorates considerably for large initial sets. A common approach to address this issue, particularly in NNCS verification problems where the number of dimensions is relatively small, is to recast the initial reachability problem into simpler subproblems that analyze a subset of the initial conditions [5, 7, 8, 9, 10, 11, 12, 13]. Nonetheless, the design of efficient and scalable partitioning strategies, especially in closed-loop verification schemes, also remains an open problem.
_Related work._ Preserving dependencies for NNCS verification has spurred several recent studies. In [14], the authors abstract the I/O mapping of a ReLU NN controller using a polynomial expression (plus an error interval). The polynomial rule is obtained by regression of I/O samples, whereas a sound error term is derived by solving a mixed-integer program (MIP). In a similar fashion, [15] uses Bernstein polynomials to abstract the NN controller. A theoretical and a sampling-based method are proposed to compute the error term based on the Lipschitz constant of the NN. Although both approaches preserve the system-controller interplay, they are computationally expensive, scaling poorly with the number of NN inputs while needing to be repeated iteratively for each
output. In [9] a NN with differentiable activation functions is transformed into an equivalent hybrid system built upon Taylor models that retain dependencies. However, this approach is not applicable to ReLU functions and the number of states (resp. modes) of the hybrid automaton scales with the number of neurons (resp. layers).
Other approaches preserve system-controller dependencies by formulating the reachable set computation as an optimization problem. The work [16] proposes a semidefinite program for reachability analysis based on the abstraction of the NN non-linearities using quadratic constraints [17]. [11] relies on the tool CROWN [18], and preserves system-controller interaction by solving LP-programs. However, dependencies are broken from one sample to the next. In [10], the closed-loop is firstly abstracted as a conjunction of piecewise-linear functions, and then analyzed using ReLU NNs verification tools like [19], [20].
On the other hand, other works address the reachability problem by chaining different verification tools. In [5], the authors combine a polytopic abstraction of the dynamical system with the tool Sherlock [21], that is used to bound the NN controller outputs. The tool NNV [6], integrates the non-linear dynamics reachability tool CORA [22] with a star sets abstraction of ReLU NN controllers [23]. Besides, [7] combines validated simulation to soundly approximate the dynamical system with common tools for NN output bounding like DeepPoly [24]. In all the above works, dependencies are broken in the switch between the different tools. This latter issue is somehow palliated in [8], where the authors use second order zonotopes (i.e. zonotopes with generators matrix size \(n\times 2n\)) as an interface between system and NN controller analysis tools. Although capable to retain first order dependencies in the system-to-controller (and controller-to-system) set transformations, dependencies in the I/O mapping of the NN controller are broken.
Focusing on partitioning strategies, in [25] the gradient of a ReLU NN (open-loop) is used to decide the next input direction to be bisected, whereas in [26] a uniform grid of the initial set is employed. Other works propose a simulation-based splitting strategy. In [12], the bisection is guided by comparing the interval bound of Monte-Carlo samples with a guaranteed Interval Bound Propagation [27] of the initial subsets. Working in a similar fashion, [13] proposes a simulation-guided framework that unifies standard NN output bounding tools. The decision on the bisection order is based on the distance to the simulation samples enclosure. A closed-loop implementation of the latter algorithm is reported in [11].
_Contributions._ This paper takes a new and original direction based on symbolic zonotopes (s-zonotopes) as a generic tool for the closed-loop verification of discrete-time NN controlled systems. The generator (matrix) representation of s-zonotopes makes it possible to efficiently abstract the input-output mappings of the NN controller and of the non-linear physical system through (inclusion preserving) affine symbolic expressions. The evolution of the closed-loop system can then be bounded in a propagation-based fashion that benefits from the efficient computation of basic operations granted by s-zonotopes, while preserving system-controller linear dependencies. Besides, the computational complexity of the verification tool can be fixed by limiting (reducing) the number of independent symbols. Simulations show the good performance/computational-efficiency trade-off granted by this approach.
Furthermore, two extensions are proposed. On the one hand, the use of polynomial symbolic expressions to abstract the input-output mapping of the loop elements is explored. In particular, symbolic polynotope (s-polynotope) structures [28] are used to enclose the NN activation functions graph via the non-convex sets that arise from the polynomial map of interval symbols. Polynomial abstractions enable to reduce the conservatism induced by linear relaxations, at the price of increasing the computation needs.
On the other hand, the symbols preservation throughout the control loop is exploited to develop a smart partitioning strategy of the initial conditions set. The proposed algorithm reasons upon the influence of the input symbols in the output set in order to select which dimension to bisect next, and upon the influence of the (independent) error symbols to assess the quality of each over-approximation.
_Structure._ The paper is organized as follows. Section II is devoted to some useful preliminaries. Section III introduces the problem statement. Then, Section IV analyzes the closed-loop verification using s-zonotopes. In Section V the use of s-polynotopes is investigated, whereas the input partitioning algorithm is detailed in Section VI. Section VII presents simulation results. Finally, some concluding remarks are provided in Section VIII.
NotationThe following notations are used along this work. \(\mathbb{R}^{n}\), \(\mathbb{R}^{m\times n}\) and \(\mathbb{N}\) denote the \(n\) dimension Euclidean space, the \(m\times n\) dimensional Euclidean space and the set of non-negative integers, respectively. The notation \(v_{i}\) stands for the \(i\)-th element of vector \(v\) and \(M_{[i,\cdot]}\) (\(M_{[\cdot,j]}\)) for the \(i\)-th row (\(j\)-th column) of matrix \(M\). The 1-norm of the (row) vector \(v\) is \(\|v\|_{1}=|v|\textbf{1}\), with \(|.|\) the elementwise absolute value, and \(\mathbf{1}\) a column vector of ones of appropriate size. \(diag(v)\) returns a square diagonal matrix with the elements of vector \(v\) in the main diagonal, whereas \(card(\cdot)\) gives the cardinal.
## II Symbolic dependencies in set computations
This section provides preliminary concepts which will be used in the following sections. Throughout this article, \(s\) refers to an indexed family of distinct symbolic variables of type unit interval, that is, \(\forall i\in\mathbb{N}\), the symbol \(s_{i}\) (uniquely identified by the integer \(i\)) refers to a scalar real variable the value of which is only known to belong to the unit interval \(\mathcal{D}(s_{i})=[-1,+1]\subset\mathbb{R}\). Also, \(\mathcal{D}(s)=[-1,+1]^{card(s)}\). In other words, the a priori unknown value \(\iota s_{i}\) taken by the symbolic variable \(s_{i}\) satisfies \(\iota s_{i}\in\mathcal{D}(s_{i})\). The generic notation \(\iota\) which reads as "interpretation/valuation of" helps disambiguate between symbols (syntax) and values (semantics) [28]. Note that, in general, several interpretations may coexist. Set-valued interpretations take sets as values. In the following, consistently with the definition domain \(\mathcal{D}(s_{i})\) related to \(s_{i}\), the set-valued interpretation of each symbolic variable \(s_{i}\) will be \(s_{i|_{\iota}}=[-1,+1]\). In addition, the integer-valued vector \(I\) is used to uniquely identify a set of symbols, for example, vector
\(I=[1,\,5,\,3]\) identifies the symbols \(s_{1}\), \(s_{5}\) and \(s_{3}\). For brevity of notation, \(s_{I}\) denotes the column vector \([s_{i}]_{i\in I}\).
**Definition 1** (s-zonotope [28]).: _A symbolic zonotope \(\mathcal{X}_{|s}\) is an affine symbolic function that can be written in the form \(c+Rs_{I}\) where vector \(c\) and matrix \(R\) do not depend on the symbolic variables in \(s_{I}\). Notation: \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}=c+Rs_{I}\)._
**Definition 2** (e-zonotope [28]).: _The e-zonotope \(\mathcal{X}_{|\iota}\) related to the s-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}=c+Rs_{I}\) is the set-valued interpretation of \(\mathcal{X}_{|s}\), that is, \(\mathcal{X}_{|\iota}=\langle c,R,I\rangle_{|\iota}=\{c+R\sigma\,|\,\sigma\in\mathcal{D}(s_{I})\}\)._
A basic example is: given \(i\in\mathbb{N}\), \(\langle 0,1,i\rangle_{|s}=s_{i}\) (symbolic expression corresponding to the \(i\)-th symbol in \(s\)) and \(\langle 0,1,i\rangle_{|\iota}=\mathcal{D}(s_{i})=[-1,+1]\) (set-valued interpretation of \(s_{i}\)). More generally, s-zonotopes and their interpretation as e-zonotopes make it possible to explicitly perform operations either at the symbolic/syntactic level (\(._{|s}\)) or at the semantic level (\(._{|\iota}\)).
**Remark 1**.: _In this work, all symbols being of type unit interval, \(c\) being a real vector and \(R\) a real matrix, \(\mathcal{X}_{|\iota}\) is a classical zonotope \(\langle c,R\rangle\) with center \(c\) and generator matrix \(R\) (for extensions to other symbol types, see [28]). Note that \(\langle 0,1,i\rangle_{|s}-\langle 0,1,j\rangle_{|s}=s_{i}-s_{j}=0\) for \(i=j\), whereas \(\langle 0,1,i\rangle_{|\iota}-\langle 0,1,j\rangle_{|\iota}=[-2,+2]\) for all \((i,j)\), that is, even for \(i=j\). Operating at the symbolic/syntactic level thus permits more accurate set evaluations by preserving the trace of symbolic dependencies. This is a key point to prevent the pessimistic outer approximations induced by the so-called dependency problem [29] affecting natural interval arithmetic and other classical set-based operations that only consider the semantic level._
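To make Remark 1 concrete, the following minimal sketch (Python/NumPy, with a hypothetical column-labeled in-memory encoding of s-zonotopes as tuples \((c,R,I)\)) contrasts the syntactic and semantic evaluations of \(s_{1}-s_{1}\):

```python
import numpy as np

def szono_sub(c1, R1, I1, c2, R2, I2):
    # Symbolic-level subtraction: columns are aligned by symbol identifier,
    # so shared symbols cancel exactly (Lemma 1 rewriting followed by a sum).
    K = sorted(set(I1) | set(I2))
    R = np.zeros((len(c1), len(K)))
    for col, i in zip(R1.T, I1):
        R[:, K.index(i)] += col
    for col, j in zip(R2.T, I2):
        R[:, K.index(j)] -= col
    return c1 - c2, R, K

# s_1 represented as the s-zonotope <0, 1, 1>_|s
c, R, I = np.zeros(1), np.ones((1, 1)), [1]

# Syntactic level: s_1 - s_1 is exactly the singleton {0}.
cd, Rd, K = szono_sub(c, R, I, c, R, I)
print(cd, Rd)                     # [0.] [[0.]]

# Semantic level: [-1,1] - [-1,1] = [-2,2]; the dependency is lost.
print(-1.0 - 1.0, 1.0 - (-1.0))   # -2.0 2.0
```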
From a computational point of view, an s-zonotope is defined by storing the triplet \((c,R,I)\). Due to the affine structure, a key aspect is to efficiently trace the identifier \(i\in I\) of the symbol that multiplies each column of the matrix \(R\). To that end, Matrices with Labeled Columns (MLCs) constitute a data structure featuring column-wise sparsity: it is defined by the pair \((R,I)\), which allows standard operations involving s-zonotopes to be efficiently recast as set operations on the identifiers vector (\(I\)) and column-wise operations on the projection matrices (\(R\)). For details on how to translate operations such as sum or linear image onto a computational platform using MLCs, the interested reader can refer to [30].
Due to their relevance in further developments, the following operations involving s-zonotopes are briefly recalled.
**Lemma 1** (common symbols [30]).: _Any two s-zonotopes \(\mathcal{X}_{|s}=\langle c_{x},R,I\rangle_{|s}\) and \(\mathcal{Y}_{|s}=\langle c_{y},G,J\rangle_{|s}\), can be rewritten using a common set of symbols \(s_{K}\) as \(\mathcal{X}_{|s}=\langle c_{x},\tilde{R},K\rangle_{|s}\) and \(\mathcal{Y}_{|s}=\langle c_{y},\tilde{G},K\rangle_{|s}\), with_
\[\begin{split}\tilde{R}&=\begin{bmatrix}R_{1},&R_{2},&0 \end{bmatrix},\quad\tilde{G}=\begin{bmatrix}G_{1},&0,&G_{2}\end{bmatrix},\\ K&=\begin{bmatrix}I\cap J;&I\setminus J;&J\setminus I\end{bmatrix}.\end{split} \tag{1}\]
Matrices \((R_{1},G_{1})\) in Lemma 1 may be empty matrices if \(I\cap J\) is empty (similarly for \(R_{2}\) and \(I\setminus J\) or \(G_{2}\) and \(J\setminus I\)).
**Definition 3** (basic operations [30]).: _Given two s-zonotopes \(\mathcal{X}_{|s}\) and \(\mathcal{Y}_{|s}\) with a common set of symbols \(s_{K}\) as in Lemma 1, then their sum and vertical concatenation are the s-zonotopes_
\[\mathcal{X}_{|s}+\mathcal{Y}_{|s} =\left\langle c_{x}+c_{y},[R_{1}+G_{1},\,R_{2},\,G_{2}],K\right\rangle _{|s}, \tag{2}\] \[\left[\mathcal{X}_{|s};\mathcal{Y}_{|s}\right] =\left\langle\begin{bmatrix}c_{x}\\ c_{y}\end{bmatrix},\begin{bmatrix}R_{1}&R_{2}&0\\ G_{1}&0&G_{2}\end{bmatrix},K\right\rangle_{|s}. \tag{3}\]
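As an illustration, a compact Python/NumPy sketch of the common-symbols rewriting of Lemma 1 and of the sum/concatenation of Definition 3 is given below; the tuple `(c, R, I)` is an assumed encoding of \(\langle c,R,I\rangle_{|s}\) in the spirit of the MLC structure of [30] (the ordering of the merged identifier list differs from (1) but is immaterial):

```python
import numpy as np

def align(R, I, K):
    # Rewrite the generators over the common identifier list K (Lemma 1).
    out = np.zeros((R.shape[0], len(K)))
    for col, i in zip(R.T, I):
        out[:, K.index(i)] = col
    return out

def szono_sum(zx, zy):
    # Sum (2): generators attached to shared symbols add up exactly.
    (cx, Rx, Ix), (cy, Ry, Iy) = zx, zy
    K = list(dict.fromkeys(list(Ix) + list(Iy)))   # dedup, keep order
    return cx + cy, align(Rx, Ix, K) + align(Ry, Iy, K), K

def szono_concat(zx, zy):
    # Vertical concatenation (3) over the common symbols.
    (cx, Rx, Ix), (cy, Ry, Iy) = zx, zy
    K = list(dict.fromkeys(list(Ix) + list(Iy)))
    return (np.concatenate([cx, cy]),
            np.vstack([align(Rx, Ix, K), align(Ry, Iy, K)]), K)

X = (np.array([1.0]), np.array([[1.0, 0.5]]), [1, 2])   # 1 + s1 + 0.5 s2
Y = (np.array([0.0]), np.array([[2.0, 1.0]]), [2, 3])   # 2 s2 + s3
print(szono_sum(X, Y))   # center 1, generators [1, 2.5, 1] on symbols [1,2,3]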
**Definition 4** (inclusion [28]).: _The s-zonotope \(\mathcal{Y}_{|s}\) is said to include the s-zonotope \(\mathcal{X}_{|s}\) if the set-valued interpretation of \(\mathcal{Y}_{|s}\) includes the set-valued interpretation of \(\mathcal{X}_{|s}\). In other words, the expression \(\mathcal{X}_{|s}\subset\mathcal{Y}_{|s}\) interprets as \(\mathcal{X}_{|\iota}\subset\mathcal{Y}_{|\iota}\)._
Definition 4 paves the way for rewriting rules (at the symbolic level) that may be either inclusion preserving or inclusion neutral, or neither of both, at the set-based evaluation (semantic) level. A more formal treatment of this topic can be found in Definition 27 (rewriting rules and inclusion) in [28], where a definition of inclusion functions is also given in Definition 2.
**Definition 5** (reduction [28]).: _The reduction operator \(\downarrow_{q}\) transforms an s-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}\) into a new s-zonotope \(\tilde{\mathcal{X}}_{|s}=\downarrow_{q}\mathcal{X}_{|s}=\langle c,G,J\rangle_{|s}\), such that \(\tilde{\mathcal{X}}_{|s}\) includes \(\mathcal{X}_{|s}\) while depending on at most \(q\) symbols, i.e. \(card(J)\leq q\)._
Reduction is thus an inclusion preserving transform. In Definition 5, \(I\cap J\neq\emptyset\) is not mandatory but often useful to prevent further propagation of conservative approximations, while controlling the complexity of \(\tilde{\mathcal{X}}_{|s}\) through the maximum number \(q\) of its symbols/generators. In this context, preserving the most significant symbols/dependencies is often beneficial: as in [30], if the number of symbols \(p=card(I)\) satisfies \(p>q\), a common practice is to replace the \(p-q+1\) least important symbols by a new independent one while guaranteeing the inclusion \(\mathcal{X}_{|s}\subseteq\tilde{\mathcal{X}}_{|s}\). Besides, note that new symbols introduced to characterize independent behaviors must be uniquely identified. Wherever needed, the generation of a vector of \(n\) new unique symbol identifiers is denoted as \(!(n)\). The generation of a pre-specified number of identifiers can be attained by implementing, for example, the Unique Symbols Provider (USP) service introduced in [30].
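A possible NumPy sketch of the reduction operator \(\downarrow_{q}\) follows. It keeps the most significant generators (largest 2-norm) and boxes the dropped ones with one fresh symbol per state dimension (a standard interval-hull outer-approximation, used here instead of the single-symbol variant mentioned above); `fresh` is a toy stand-in for the USP service:

```python
import numpy as np

_next = [100]
def fresh(n):
    # !(n): toy unique-symbols provider (an assumption, mimicking the USP of [30]).
    ids = list(range(_next[0], _next[0] + n))
    _next[0] += n
    return ids

def reduce_szono(c, R, I, q):
    # Inclusion-preserving reduction (Definition 5): keep the q-n most
    # significant columns, box the rest, so at most q symbols remain.
    n, p = R.shape
    if p <= q:
        return c, R, list(I)
    order = np.argsort(-np.linalg.norm(R, axis=0))   # most significant first
    k = max(q - n, 0)
    keep, drop = order[:k], order[k:]
    box = np.diag(np.abs(R[:, drop]).sum(axis=1))    # interval hull of dropped part
    return c, np.hstack([R[:, keep], box]), [I[j] for j in keep] + fresh(n)
```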
## III Problem statement
### _System description_
Consider the interconnection of a discrete-time non-linear dynamic model (4) and a neural network. The physical system is modeled as:
\[x(t+1)=f(x(t),u(t),w(t)), \tag{4}\]
where \(x(t)\in\mathbb{R}^{n_{x}}\) and \(u(t)\in\mathbb{R}^{n_{u}}\) respectively refer to the state and the control input at time step \(t\in\mathbb{N}\). For all \(t\geq 0\), the vector \(w(t)\in\mathbb{R}^{n_{w}}\) accounts for modeling errors and process disturbances and satisfies \(w(t)\in\mathcal{W}=[-1,\,+1]^{n_{w}}\).
The system (4) is controlled by a state-feedback controller \(g(x(t)):\mathbb{R}^{n_{x}}\mapsto\mathbb{R}^{n_{u}}\) parameterized by an \(l\)-layer feed-forward fully connected neural network. The map \(x\mapsto g(x)\) is described by the following recursive equations
\[\begin{split}& x^{(0)}=x,\\ & x^{(k+1)}=\phi^{(k)}(W^{(k)}x^{(k)}+b^{(k)}),\quad k=0,...,l-1, \\ & g(x)=W^{(l)}x^{(l)}+b^{(l)},\end{split} \tag{5}\]
where \(x^{(k)}\in\mathbb{R}^{n_{k}}\) are the outputs (post-activation) of the \(k\)-th layer. The weight matrix \(W^{(k)}\in\mathbb{R}^{n_{k+1}\times n_{k}}\) and bias \(b^{(k)}\in\mathbb{R}^{n_{k+1}}\) define the affine mapping \(z^{(k)}=W^{(k)}x^{(k)}+b^{(k)}\) for the \((k+1)\)-th layer. Besides, the vector-valued function \(\phi^{(k)}:\mathbb{R}^{n_{k+1}}\rightarrow\mathbb{R}^{n_{k+1}}\) is applied element-wise to the pre-activation vector \(z^{(k)}\), that is, \(\phi^{(k)}(z^{(k)})=[\varphi(z_{1}^{(k)}),\cdots,\varphi(z_{n_{k+1}}^{(k)})]^{T}\), where \(\varphi:\mathbb{R}\rightarrow\mathbb{R}\) is the (scalar) activation function. Common activation choices are: ReLU \(\varphi(z)=max(0,z)\); sigmoid \(\varphi(z)=\frac{1}{1+e^{-z}}\); and tanh \(\varphi(z)=tanh(z)\).
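Since later developments repeatedly propagate sets through (5), it may help to fix ideas with a plain numeric evaluation of the controller map; the following is a minimal NumPy sketch with hypothetical random weights (not the controllers analyzed in the paper):

```python
import numpy as np

def nn_controller(x, Ws, bs, phi=np.tanh):
    # Recursion (5): phi is applied element-wise on hidden layers only,
    # and the output layer W^(l) x^(l) + b^(l) stays affine.
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = phi(W @ x + b)
    return Ws[-1] @ x + bs[-1]

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((5, 2)), rng.standard_normal((1, 5))]  # l = 1 hidden layer
bs = [rng.standard_normal(5), rng.standard_normal(1)]
print(nn_controller(np.array([0.1, -0.2]), Ws, bs))
```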
The closed-loop system with dynamics (4) and a previously trained neural-network control policy (5), is governed by
\[x(t+1)=f_{g}(x(t),w(t))=f\big{(}x(t),g(x(t)),w(t)\big{)}. \tag{6}\]
Accordingly, given an initial set \(\mathcal{X}_{0}\subset\mathbb{R}^{n_{x}}\), the forward reachable set of (6) at time step \(t\) is denoted as \(\mathcal{X}(t)\). For \(t\geq 1\), this set is defined as:
\[\mathcal{X}(t)=\big{\{}x(t)\,| \exists(x(0),w(0:t-1))\in\mathcal{X}_{0}\times\mathcal{W}\times...\times\mathcal{W}, \tag{7}\] \[\forall\tau\in[0,t-1],x(\tau+1)=f_{g}(x(\tau),w(\tau))\big{\}}.\]
### _Finite-time reach-avoid (RA) verification problem_
Given a goal set \(\mathcal{G}\subset\mathbb{R}^{n_{x}}\), a sequence of avoid sets \(\mathcal{A}(t)\subset\mathbb{R}^{n_{x}}\), and a finite time horizon \(N\in\mathbb{N}^{+}\), it is desired to test whether
\[\mathcal{X}(N)\subseteq\mathcal{G} \tag{8}\] \[\mathcal{X}(t)\cap\mathcal{A}(t)=\emptyset,\quad\forall t=0,...,N-1\]
holds true for the closed-loop system (6). In general, the exact evaluation of (8) for a NNCS is computationally intractable. Thus, the problem is addressed by iteratively computing a tractable over-approximation \(\mathcal{\bar{X}}(t)\supseteq\mathcal{X}(t)\) of the reachable set, and testing (8) using \(\mathcal{\bar{X}}(t)\) instead. Because of the over-approximation, the proposed verification setting only provides one-sided guarantees: if \(\mathcal{\bar{X}}(t)\) satisfies (8), then it is guaranteed that (7) satisfies the RA property, but no sound conclusion about the safety of (7) can be drawn if the over-approximation \(\mathcal{\bar{X}}(t)\) violates (8). Therefore, the computation of tight over-approximations is of paramount importance, so that a maximum number of truly satisfied specifications can be computationally proven as such.
## IV Closed-loop verification using s-zonotopes
This section presents the methodology for computing a sound over-approximation of the closed-loop system that preserves system-controller linear dependencies. The computation takes advantage of s-zonotopes described in the previous section. The abstraction of the control loop components using affine symbolic expressions is presented below.
### _Initial set_
It is assumed that the initial set can be described by the set-valued interpretation of an s-zonotope \(\mathcal{X}_{|s}(0)=\langle c_{0},R_{0},I_{0}\rangle_{|s}\), where \(c_{0}\in\mathbb{R}^{n_{x}}\), \(R_{0}\in\mathbb{R}^{n_{x}\times n_{0}}\) and \(I_{0}=!(n_{0})\) is a set of \(n_{0}\) unique identifiers for the interval valued symbols \(s_{I_{0}}\). In other words, it is assumed that \(\mathcal{X}_{0}=\mathcal{X}_{|\iota}(0)\). Note that any arbitrary zonotopic set \(\{c+R\xi\,|\,\|\xi\|_{\infty}\leq 1\}\) can be abstracted as an s-zonotope by characterizing the independent behaviour of the generators through new interval type symbols.
### _NN controller affine abstraction_
For the sake of simplicity of notations, the temporal notation is dropped here. Given a state bounding s-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}\) and a NN controller (5), the idea is to abstract the NN behavior through an affine symbolic expression of the form
\[\mathcal{U}_{|s}=\langle c_{u},[G,\,H],[I;\,J]\rangle_{|s}=c_{u}+Gs_{I}+Hs_{J}, \tag{9}\]
such that it guarantees the local enclosure of the network outputs, i.e. \(g(\mathcal{X}_{|\iota})\subseteq\mathcal{U}_{|\iota}\). Note that expression (9) captures the linear dependencies on the state symbols (identified by \(I\)), plus the addition of new error symbols (identified by \(J\)) that are introduced to guarantee the soundness of the method.
The computation of vector \(c_{u}\), matrices \(G,H\), and the identifiers vector \(J\) is discussed below. The focus is on generating a dependencies-preserving inclusion for an arbitrary layer of NN (5), since a sound enclosure for the whole network follows by induction due its sequential nature. For simplicity, the layer superscript is removed below and the superscript \({}^{+}\) is used to denote the next layer.
**Affine mapping** Given the \(s\)-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}\), the affine mapping \(\mathcal{Z}_{|s}=W\mathcal{X}_{|s}+b\) in the layers of (5) yields a (pre-activation) s-zonotope of the form
\[\mathcal{Z}_{|s}=\langle\check{c},\check{R},I\rangle_{|s}, \tag{10}\] \[\check{c}=Wc+b,\quad\check{R}=WR.\]
**Activation functions** Activation functions \(\varphi(\cdot)\) in (5) are applied element-wise to the pre-activation vector. Hence, the projection of \(\mathcal{Z}_{|s}\) onto the \(i\)-th neuron, yields the s-zonotope
\[\mathcal{Z}_{i|s}=\langle\check{c}_{i},\check{R}_{[i,:]},I\rangle_{|s}. \tag{11}\]
Notice that any point belonging to the set-valued interpretation \(\mathcal{Z}_{i|\iota}\) of (11) is confined within an interval \([l_{i},\,u_{i}]\); indeed, since \(\mathcal{Z}_{i|\iota}\) is a one-dimensional zonotopic set, it follows that \(\mathcal{Z}_{i|\iota}=[l_{i},\,u_{i}]\) with the lower and upper bounds
\[l_{i}=\check{c}_{i}-\|\check{R}_{[i,:]}\|_{1},\quad u_{i}=\check{c}_{i}+\| \check{R}_{[i,:]}\|_{1}. \tag{12}\]
Therefore, the soundness of the method can be certified by guaranteeing the inclusion (see Definition 4) of the graph of the activation function in the range \([l_{i},\,u_{i}]\). To that end, the activation function \(\varphi(\cdot)\) is abstracted through an affine symbolic function of the form
\[\mathcal{X}_{i|s}^{+}=\alpha_{i}\mathcal{Z}_{i|s}+\beta_{i}+\gamma_{i}s_{j}, \tag{13}\]
where \(s_{j}\) represents a new independent symbol (identified through \(j=!(1)\)) that must be introduced to guarantee the full coverage of the activation function graph on the considered range, that is, in order to satisfy the condition
\[\begin{bmatrix}\mathcal{Z}_{i|\iota}\\ \varphi(\mathcal{Z}_{i|\iota})\end{bmatrix}\subseteq\begin{bmatrix}\mathcal{Z}_{i|\iota}\\ \alpha_{i}\mathcal{Z}_{i|\iota}+\beta_{i}+\gamma_{i}\mathcal{D}(s_{j})\end{bmatrix}. \tag{14}\]
The \(i\)-th neuron post-activation s-zonotope \(\mathcal{X}_{i|s}^{+}\) in (13) not only guarantees that its set-valued interpretation encloses the neuron output, but it also preserves the linear influence of the symbols \(s_{I}\) in the output set. This latter point plays a fundamental role, since it allows the interplay between the inputs of the neurons of the same layer to be retained. Coherently, the
layer post-activation s-zonotope can be computed by vertically concatenating in a recursive fashion the different \(\mathcal{X}_{i|s}^{+}\) after rewriting them using the same set of symbols
\[\mathcal{X}_{|s}^{+}=[\,...\,[\,[\mathcal{X}_{1|s}^{+};\mathcal{X}_{2|s}^{+}]; \mathcal{X}_{3|s}^{+}]\,...\,;\mathcal{X}_{n_{k}|s}^{+}]. \tag{15}\]
**Proposition 1** (NN s-zonotope).: _Given the s-zonotope \(\mathcal{X}_{|s}^{(0)}=\langle c^{(0)},R^{(0)},I^{(0)}\rangle_{|s}\), and let \(\alpha^{(k)},\beta^{(k)},\gamma^{(k)}\in\mathbb{R}^{n_{k+1}}\) be some parameter vectors that guarantee the inclusion of the \(n_{k+1}\) activation functions in the \(k\)-th layer, then the enclosure of the NN output set \(g(\mathcal{X}_{|\iota}^{(0)})\subseteq\mathcal{U}_{|\iota}=\langle c_{u},[G,\,H],[I;\,J]\rangle_{|\iota}\) is guaranteed for the s-zonotope in (9) with parameters_
\[c^{(k+1)}=diag(\alpha^{(k)})(W^{(k)}c^{(k)}+b^{(k)})+\beta^{(k)}, \tag{16a}\] \[\tilde{H}^{(k+1)}=\big{[}diag(\alpha^{(k)})W^{(k)}\tilde{H}^{(k)},\quad diag(\gamma^{(k)})\big{]},\] (16b) \[\tilde{G}^{(k+1)}=diag(\alpha^{(k)})W^{(k)}\tilde{G}^{(k)},\quad k=1,...,l-1,\] (16c) \[c_{u}=W^{(l)}c^{(l)}+b^{(l)},\] (16d) \[H=W^{(l)}\tilde{H}^{(l)},\] (16e) \[G=W^{(l)}\tilde{G}^{(l)},\] (16f) \[J=[!(n_{1});\,...;!(n_{l})], \tag{16g}\]
_where \(\tilde{G}^{(1)}=diag(\alpha^{(0)})W^{(0)}R^{(0)}\) and \(\tilde{H}^{(1)}=diag(\gamma^{(0)})\)._
Proof.: Expressions (16a)-(16f) result from the recursive application of Lemma 1 and the vertical concatenation of the post-activation s-zonotopes (13) for the \(n_{k+1}\) neurons of the \(k\)-th layer. Besides, (16g) reflects the symbols identifier update of the noise terms introduced at the neurons of each layer.
Regarding the output inclusion, starting from the initial s-zonotope \(\mathcal{X}_{|s}^{(0)}\), by induction, given the pre-activation s-zonotope \(\mathcal{X}_{|s}^{(k)}\), the operations at the \(k\)-th layer are: affine mapping; linear abstraction (inclusion preserving for an appropriate triplet \((\alpha_{i}^{(k)},\beta_{i}^{(k)},\gamma_{i}^{(k)})\)); and vertical concatenation. Thus, the composition of inclusion functions being an inclusion function, the proof follows.
For each neuron, the triplet of parameters \((\alpha,\beta,\gamma)\) must be appropriately designed to satisfy (14), while minimizing the conservatism induced by using an affine relaxation. In this regard, a relevant heuristic consists in minimizing the magnitude of the error symbol introduced to guarantee the activation function graph enclosure, i.e. to minimize \(|\gamma|\). Due to the independent behaviour of the error symbol, this can be reformulated as minimizing the area of the enclosing parallelogram [31].
**Lemma 2**.: _Given the bounds \([l,\,u]\) in (12) with \(l<u\), the triplet of parameters \((\alpha^{*},\beta^{*},\gamma^{*})\) that minimizes \(|\gamma|\) while guaranteeing the satisfaction of (14) are:_
* _ReLU function_ \(\varphi(x)=max(0,x)\)__ \[\alpha^{*}=\frac{\varphi(u)-\varphi(l)}{u-l},\quad\beta^{*}=\gamma^{*}=\frac{ \varphi(l)-\alpha^{*}\cdot l}{2}.\] (17)
* _S-shaped functions_
* _Sigmoid_ \(\varphi(x)=\frac{1}{1+e^{-x}}\) _with_ \(\varphi^{\prime}(x)=\varphi(x)(1-\varphi(x))\)__
* _tanh_ \(\varphi(x)=tanh(x)\) _with_ \(\varphi^{\prime}(x)=1-\varphi(x)^{2}\)__ \[\alpha^{*}=\min(\varphi^{\prime}(l),\varphi^{\prime}(u)),\] \[\beta^{*}=\frac{\varphi(u)+\varphi(l)-\alpha^{*}\cdot(u+l)}{2},\] (18) \[\gamma^{*}=\frac{\varphi(u)-\varphi(l)-\alpha^{*}\cdot(u-l)}{2}.\]
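The triplets of Lemma 2 and the per-layer recursion of Proposition 1 translate directly into code. The sketch below (NumPy, assuming \(l<u\) at every neuron so that the divisions are well defined) computes \((\alpha^{*},\beta^{*},\gamma^{*})\) for ReLU and tanh, and propagates one layer: it returns the new center and the generators \([G,\,diag(\gamma)]\) on the old symbols plus one fresh symbol per neuron; chaining it over the hidden layers and finishing with the affine output map reproduces (16):

```python
import numpy as np

def relu_triplet(l, u):
    # (alpha*, beta*, gamma*) of Lemma 2 for ReLU on [l, u], l < u.
    phi = lambda z: max(z, 0.0)
    a = (phi(u) - phi(l)) / (u - l)
    b = (phi(l) - a * l) / 2.0
    return a, b, b                      # beta* = gamma*

def tanh_triplet(l, u):
    # (alpha*, beta*, gamma*) of Lemma 2 for tanh on [l, u].
    dphi = lambda z: 1.0 - np.tanh(z) ** 2
    a = min(dphi(l), dphi(u))
    b = (np.tanh(u) + np.tanh(l) - a * (u + l)) / 2.0
    g = (np.tanh(u) - np.tanh(l) - a * (u - l)) / 2.0
    return a, b, g

def layer_szono(c, R, W, b, triplet=relu_triplet):
    # One layer of Proposition 1: affine map (10), neuron bounds (12),
    # then the abstraction (13) applied neuron-wise.
    cz, Rz = W @ c + b, W @ R                 # pre-activation s-zonotope
    m = len(cz)
    alpha, beta, gamma = np.zeros(m), np.zeros(m), np.zeros(m)
    for i in range(m):
        rad = np.abs(Rz[i]).sum()             # (12)
        alpha[i], beta[i], gamma[i] = triplet(cz[i] - rad, cz[i] + rad)
    cp = alpha * cz + beta                    # new center, cf. (16a)
    G = alpha[:, None] * Rz                   # dependencies on old symbols
    H = np.diag(gamma)                        # one fresh symbol per neuron
    return cp, np.hstack([G, H])
```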
**Remark 2**.: _The proposed NN abstraction method shares a similar structure with the zonotope abstraction based on affine arithmetic presented in [31] for the (open-loop) NN output bounding. However, here, the explicitly computed affine symbolic expression (9) will further play a key role in closed-loop verification, and an efficient computation of the projection matrices exploiting the generators (matrix) structure of s-zonotopes, is also used._
### _Dynamical system affine abstraction_
Similar to the NN controller dynamics (5), the function (4) that describes the state evolution at time \((t+1)\) can be abstracted by means of an (inclusion preserving) affine mapping. The resulting s-zonotope will depend on the symbols that define the state at time \(t\), plus some extra symbols that account for: I) NN controller non-linearities; II) abstract system non-linearities; III) the uncertainty sources.
For the computation of a state bounding s-zonotope, it is assumed that the function \(f(\cdot)\) in (4) results from the composition of elementary functions and operators for which an affine symbolic expression (s-zonotope) can be computed. Note that this is not much restrictive since (linear) operations such as linear image, sum or vertical concatenation are closed (i.e. they return s-zonotopes) under affine mappings. Besides, any univariate locally continuous differentiable function can be abstracted through an affine mapping.
**Lemma 3**.: _Let \(h:[l,\,u]\rightarrow\mathbb{R}\) be a class \(\mathcal{C}^{1}\) function on a given interval \([\,l,\,u]\subset\mathbb{R}\). Then, the function \(\tilde{h}(x,\epsilon)=\alpha x+\beta+\gamma\epsilon\) satisfies that \(\forall x\in[\,l,\,u],\,\,\exists\epsilon\in[-1,\,+1],h(x)=\tilde{h}(x,\epsilon)\) for the triplet of parameters:_
\[\alpha=\frac{h(u)-h(l)}{u-l},\quad\beta=\frac{h(\underline{x})+h( \bar{x})-\alpha(\underline{x}+\bar{x})}{2},\] \[\gamma=\frac{h(\bar{x})-h(\underline{x})+\alpha(\underline{x}- \bar{x})}{2},\]
_where, defining \(\xi(x)=h(x)-\alpha x\), then_
\[\bar{x}=\operatorname*{arg\,max}_{x\in\{\delta_{1},...,\delta_{n},u\}}\xi(x), \quad\,\underline{x}=\operatorname*{arg\,min}_{x\in\{\delta_{1},...,\delta_{n}, u\}}\xi(x),\]
_with \(\delta_{1},...,\delta_{n}\) the stationary-points of \(\xi(\cdot)\) in \([l,\,u]\)._
Proof.: See Appendix A.
Lemma 3 provides a method to propagate (inclusion preserving) s-zonotopes through univariate non-linearities. Moreover, for convex/concave differentiable functions, the approach in Lemma 3 returns a set of parameters that is optimal in the sense of minimizing the magnitude \(|\gamma|\) of the error symbol [28]. On the other hand, the interaction between multiple variables can be handled through the sum operation (2) or by over-approximating the product of two s-zonotopes.
**Lemma 4**.: _Given two 1-D s-zonotopes \(\mathcal{X}_{|s}=\langle c_{x},r^{T},K\rangle_{|s}\) and \(\mathcal{Y}_{|s}=\langle c_{y},g^{T},K\rangle_{|s}\) with a common set of symbols \(s_{K}\) (with \(n=card(s_{K})\)), then the product \(\mathcal{X}_{|s}\times\mathcal{Y}_{|s}\) is included by the s-zonotope \(\mathcal{L}_{|s}=\langle c_{l},[l^{T},\,m],[K;\,j]\rangle_{|s}\) with_
\[\begin{split}& c_{l}=c_{x}c_{y}+\frac{1}{2}\sum_{i=1}^{n}r_{i}g_{i}, \qquad l=c_{x}g+c_{y}r,\\ & m=\frac{1}{2}\sum_{i=1}^{n}|r_{i}g_{i}|+\sum_{i=1}^{n}\sum_{l> i}^{n}|r_{i}g_{l}+r_{l}g_{i}|,\end{split} \tag{19}\]
_and \(j=!(1)\)._
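A direct NumPy transcription of (19) follows; as a sanity check, the square of \(s_{1}\) (which lies in \([0,1]\)) is enclosed by \(0.5+0.5\,s_{new}\):

```python
import numpy as np

def szono_prod_1d(cx, r, cy, g):
    # Lemma 4: enclose X*Y for two 1-D s-zonotopes over common symbols s_K.
    # The recentring 0.5*sum(r_i g_i) accounts for s_i^2 in [0, 1].
    r, g = np.asarray(r, float), np.asarray(g, float)
    n = len(r)
    cl = cx * cy + 0.5 * float(r @ g)
    lin = cx * g + cy * r                     # coefficients on s_K
    m = 0.5 * np.abs(r * g).sum()
    for i in range(n):
        for k in range(i + 1, n):
            m += abs(r[i] * g[k] + r[k] * g[i])
    return cl, lin, m                          # m: gain of the fresh symbol s_j

cl, lin, m = szono_prod_1d(0.0, [1.0], 0.0, [1.0])
print(cl, lin, m)   # 0.5 [0.] 0.5  ->  s1*s1 enclosed by 0.5 + 0.5 s_new = [0, 1]
```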
**Example 1**.: _Consider the system \(x^{+}=\sin(x)-u+0.1w\), with an initial set \(\mathcal{X}_{0}=[0,\,1]\) described by \(\mathcal{X}_{|s,0}=0.5+0.5s_{1}\). This system is controlled by a NN with 1 layer of 2 neurons that, for \(\mathcal{X}_{0}\), is abstracted as \(\mathcal{U}_{|s}=0.1+0.2s_{1}-0.1s_{2}+0s_{3}\). The non-linear function \(h(x)=\sin(x)\) is abstracted, for \(\mathcal{X}_{0}\), as \(\hat{\mathcal{X}}_{|s}=0.45+0.42s_{1}+0.03s_{4}\). Besides, the independent behaviour of the disturbances is captured by \(\mathcal{W}|_{s}=s_{5}\). Accordingly, a dependency preserving over-approximation of the successor state is given by \(\mathcal{X}_{|s}^{+}=0.35+0.22s_{1}+0.1s_{2}+0.03s_{4}+0.1s_{5}\). The a priori knowledge on the number of error symbols introduced at each abstraction (e.g. \(\mathcal{U}_{|s}\) introduces up to two error symbols, one per neuron) allows to directly store the generators matrix of each s-zonotope by taking into account the common set of symbols, thus providing an efficient computation of required operations as shown below:_
\[\begin{array}{c|cccccc|l} & c & s_{1} & s_{2} & s_{3} & s_{4} & s_{5} & \\ \hline & 0.45 & 0.42 & 0 & 0 & 0.03 & 0 & \leftrightarrow\ \hat{\mathcal{X}}_{|s}\\ - & 0.1 & 0.2 & -0.1 & 0 & 0 & 0 & \leftrightarrow\ \mathcal{U}_{|s}\\ + & 0 & 0 & 0 & 0 & 0 & 0.1 & \leftrightarrow\ 0.1\cdot\mathcal{W}_{|s}\\ \hline = & 0.35 & 0.22 & 0.1 & 0 & 0.03 & 0.1 & \leftrightarrow\ \mathcal{X}_{|s}^{+} \end{array}\]
```
0:\(\mathcal{X}_{0}\), NN param \((\boldsymbol{W},\boldsymbol{b})\), \(\mathcal{G}\), \(\mathcal{A}(i)\), \(N\), \(q\)
0:isRAok, \(t_{err}\), \(\mathcal{X}_{|s}(j)\), \(j=0,...,min(t_{err},N)\)
1:Initialize: Generate \(\mathcal{X}_{|s}(0)\); set \(t_{err}\leftarrow\infty\)
2:for i = 0 to \(N-1\) do
3:if \(\mathcal{X}_{|\iota}(i)\cap\mathcal{A}(i)\neq\emptyset\) then
4:\(t_{err}\gets i\)
5:break all
6:else
7:\(\mathcal{U}_{|s}(i)\leftarrow controller(\mathcal{X}_{|s}(i),\boldsymbol{W},\boldsymbol{b})\)
8:\(\bar{\mathcal{X}}_{|s}(i+1)\leftarrow system(f(\cdot),\mathcal{X}_{|s}(i),\mathcal{U}_{|s}(i),\mathcal{W}_{|s}(i))\)
9:\(\mathcal{X}_{|s}(i+1)\leftarrow\downarrow_{q}\bar{\mathcal{X}}_{|s}(i+1)\)
10:endif
11:endfor
12:isRAok = \((t_{err}==\infty)\wedge(\mathcal{X}_{|\iota}(N)\subseteq\mathcal{G})\)
```
**Algorithm 1** Finite-time RA verification using s-zonotopes

**Remark 3**.: _The control input is updated every \(\Delta h\) seconds and held constant in between, while the model (4) evolves with sampling period \(\Delta T\leq\Delta h\). Since the control s-zonotope \(\mathcal{U}_{|s}\) is repeatedly applied through its symbolic expression, the dependencies between the repeated control inputs and the state are preserved, and thus varying \(\Delta T\) (for a given \(\Delta h\)) does not induce additional conservatism._

## V Use of s-polynotopes

### _s-polynotopes_

Symbolic polynotopes (s-polynotopes) [28] extend s-zonotopes to polynomial symbolic expressions by gathering
their relevant information (generators, symbol identifiers and order of the monomials) into matrices.
**Definition 6** (s-polynotope [28]).: _A symbolic polynotope \(\mathcal{P}_{|s}\) is a polynomial function that can be written in the form \(c+Rs_{I}^{E}\) where vector \(c\) and matrices \(R\) and \(E\) do not depend on the symbolic variables in \(s_{I}\). Notation: \(\mathcal{P}_{|s}=\langle c,R,I,E\rangle_{|s}=c+Rs_{I}^{E}\)._
Definition 6 uses the exponential matrix notation (as in Definitions 23-25 in [28]), where the usually sparse matrix \(E\) accounts for exponents of the symbols involved in each monomial. As an example, \(s_{I}=[s_{1},s_{2}]^{T}\) and \(E=[1\,0\,3;0\,2\,4]\), yields \(s_{I}^{E}=[s_{1},s_{2}^{2},s_{1}^{3}s_{2}^{4}]^{T}\). Similar to s-zonotopes, a distinction is made between an s-polynotope as defined in Definition 6 and its set-valued interpretation defined as the (possibly non-convex) set \(\mathcal{P}_{|u}=\{c+R\sigma^{E}\,|\,\sigma\in\mathcal{D}(s_{I})\}\).
Symbolic polynotopes obviously extend s-zonotopes (obtained from \(E=\mathcal{I}\), i.e. with identity as exponent matrix) and are closed under the extension of basic operations already defined for s-zonotopes like linear image, sum or concatenation. The reader is referred to [28] for further details on how to define and operate on s-polynotopes.
### _NN controller polynomial abstraction_
The abstraction of the I/O map of a NN controller of the form (5) using s-polynotopes is presented below. In particular, given a state bounding s-polynotope \(\mathcal{X}_{|s}=\langle c,R,I,E\rangle_{|s}\) (note that any initial s-zonotope in Section IV-A can be directly transformed into an equivalent s-polynotope) the idea is to compute a polynomial symbolic map of the form
\[\mathcal{U}_{|s}=\langle c_{u},G,Q,E_{u}\rangle_{|s}=c_{u}+Gs_{Q}^{E_{u}}, \tag{20}\]
such that the enclosure of the network outputs is guaranteed, i.e. \(g(\mathcal{X}_{|\iota})\subseteq\mathcal{U}_{|\iota}\). The vector of identifiers in (20) has the structure \(Q=[I;\,J]\), thus involving the symbols in the state bounding s-polynotope (identified by \(I\)) as well as error symbols (identified by \(J\)). Notice that the exponent matrix \(E_{u}\) may also capture cross terms involving symbols with identifiers in both \(I\) and \(J\).
Similar to section IV-B, the computation of (20) can be obtained from a forward propagation of \(\mathcal{X}_{|s}\) through (in this case) a polynomial relaxation of the activation functions. The pre-activation s-polynotope \(\mathcal{Z}_{|s}=W\mathcal{X}_{|s}+b\) and its projection onto the \(i\)-th neuron are given by
\[\begin{split}\mathcal{Z}_{|s}&=\langle\check{c},\check{R},I,E\rangle_{|s},\qquad\mathcal{Z}_{i|s}=\langle\check{c}_{i},\check{R}_{[i,:]},I,E\rangle_{|s},\\ \check{c}&=Wc+b,\qquad\qquad\check{R}=WR.\end{split} \tag{21}\]
Bounding the set-valued interpretation of \(\mathcal{Z}_{i|s}\) within the interval \([l_{i},\,u_{i}]\), the polynomial structure of s-polynotopes makes it possible to obtain a sound over-approximation of the NN output by locally covering the activation function graph through an \(n\)-order polynomial expression of the form
\[\mathcal{X}_{i|s}^{+}=\sum_{m=1}^{n}\alpha_{i,m}(\mathcal{Z}_{i|s})^{m}+\beta _{i}+\gamma_{i}s_{j}, \tag{22}\]
where \(s_{j}\) represents a new independent symbol (identified through \(j=!(1)\)) introduced to guarantee the enclosure of the activation function graph in the range \([l_{i},\,u_{i}]\). Since each \(\mathcal{X}_{i|s}^{+}\) is an s-polynotope arising from the polynomial mapping of s-polynotopes, the layer post-activation s-polynotope \(\mathcal{X}_{|s}^{+}\) is computed by vertically concatenating the neuron post-activation s-polynotopes.
Polynomial, rather than affine, abstractions of the activation functions not only reduce the conservatism introduced by the error symbols, but also yield input-output symbolic relationships that better fit the activation behaviour.
**Example 2**.: _Suppose that the projection onto a ReLU neuron is given by the s-polynotope \(\mathcal{Z}_{|s}=0.5-0.5s_{1}+s_{1}s_{2}\), whose set-valued interpretation is bounded/included in the interval \([l,\,u]=[-1,\,2]\). Then, the ReLU function can be locally abstracted over this range using an \(n=2\)-order polynomial of the form \(\mathcal{X}_{|s}^{+}=\alpha_{2}(\mathcal{Z}_{|s})^{2}+\alpha_{1}\mathcal{Z}_{| s}+\beta+\gamma s_{3}\), with \((\alpha_{2},\,\,\alpha_{1})=(0.25,\,0.5),\,\,\beta=\gamma=0.125\). This, in turn, generates the post-activation s-polynotope_
\[\begin{split}\mathcal{X}_{|s}^{+}&=0.25(0.5-0.5s_{ 1}+s_{1}s_{2})^{2}+0.5(0.5-0.5s_{1}+s_{1}s_{2})\\ &\quad+0.125+0.125s_{3}\\ &=0.4375-0.375s_{1}+0.0625s_{1}^{2}+0.125s_{3}+0.75s_{1}s_{2}\\ &\quad-0.25s_{1}^{2}s_{2}+0.25s_{1}^{2}s_{2}^{2}.\end{split}\]
_Figure 1 depicts the non-convex local enclosure generated by the set-valued interpretation of \([\mathcal{Z}_{|s};\mathcal{X}_{|s}^{+}]\)._
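The expansion in Example 2 can be checked mechanically; a short SymPy sketch:

```python
import sympy as sp

s1, s2, s3 = sp.symbols('s1 s2 s3')
Z = sp.Rational(1, 2) - sp.Rational(1, 2)*s1 + s1*s2           # pre-activation
Xp = sp.expand(sp.Rational(1, 4)*Z**2 + sp.Rational(1, 2)*Z    # alpha_2, alpha_1
               + sp.Rational(1, 8) + sp.Rational(1, 8)*s3)     # beta + gamma*s3
print(Xp)
# (up to term ordering)
# 7/16 - 3*s1/8 + s1**2/16 + s3/8 + 3*s1*s2/4 - s1**2*s2/4 + s1**2*s2**2/4
```

Note that the error-symbol gain \(\gamma=0.125\) equals \(\frac{3}{8}\gamma_{aff}^{*}\), since the affine abstraction of Lemma 2 on \([-1,\,2]\) gives \(\gamma_{aff}^{*}=1/3\); this is the bound stated in Proposition 2 below.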
Example 2 evidences the complexity/accuracy trade-off inherent to using \(n\)-order polynomial abstractions: a high \(n\) grants an accurate representation of the activation functions; whereas, on the other hand, it increases the computational complexity due to the increased number of monomials. A reduction strategy is thus used to manage the representation complexity. It can consist in either truncating the maximum degree of the polynomial approximation (20), or limiting the maximum number of monomials involved. To address this latter issue, an approach consists in (independently) assessing the monomials relevance based on the \(2\)-norm of the generators (matrix columns) [32], and use natural interval extension [33] to bound the list of selected monomials through a reduced number of independent symbols.
The ability of the final s-polynotope (20) to generate a sound over-approximation of the network outputs is guaranteed by selecting (for each neuron) a triplet \((\alpha,\beta,\gamma)\), where
Fig. 1: 2-nd order polynomial enclosure of the ReLU function (Example 2)
\(\alpha=[\alpha_{n},...,\alpha_{1}]^{T}\) is an \(n\)-dimensional vector, that ensures the (local) coverage of the activation function. In the case of the commonly used ReLU activation functions, their convex nature allows describing them more accurately than with sole affine dependencies by using \(2\)-nd order polynomial expressions, as shown in Example 2. The reduction in the magnitude of the error symbol is especially significant in those situations where an affine approximation yields a rough description of the ReLU function, i.e. for \(|l|\approx u\) (if \(l<0<u\)).
**Proposition 2**.: _Given the interval \([l,\,u]\) with \(l<0<u\) and a ReLU activation function \(\varphi(x)=max(x,0)\). The set of parameters_
\(\alpha_{2}=\frac{1}{2u},\ \alpha_{1}=1-\alpha_{2}u,\ \beta=\gamma=\frac{\alpha_{2}u^{2}}{8}\) _if_ \(|l|\leq u\leq 2|l|\)__
\(\alpha_{2}=\frac{1}{2|l|},\ \alpha_{1}=-\alpha_{2}l,\ \beta=\gamma=\frac{\alpha_{2}l^{2}}{8}\) _if_ \(u<|l|\leq 2u\)__
_guarantees that \(\eta(x,\epsilon)=\alpha_{2}x^{2}+\alpha_{1}x+\beta+\gamma\epsilon\) satisfies that \(\forall x\in[l,\,u],\exists\epsilon\in[-1,\,1],\ \varphi(x)=\eta(x,\epsilon)\) with \(|\gamma|\leq\frac{3}{8}|\gamma_{aff}^{*}|\), where \(\gamma_{aff}^{*}\) is the error symbol introduced by the affine abstraction in Lemma 2._
Proof.: See Appendix B.
### _Finite-time RA using s-polynotopes_
The main aspects in addressing the RA verification problem using s-polynotopes are discussed below. In general, the same steps presented in Algorithm 1 can be used while adapting the NN controller and dynamical system abstraction to an s-polynotope formulation. In this case, the computation of the s-polynotope \(\mathcal{U}_{|s}\) in Step 7 of Algorithm 1 has already been presented in section V-B. Regarding the abstraction of the nonlinear function \(f(\cdot)\) in Step 8, since s-polynotopes constitute an extension of s-zonotopes, \(f(\cdot)\) can always be abstracted using (at least) the affine dependency preserving method in Lemma 3. Note that, s-polynotopes also enable the description of (multivariate) polynomial equations without the need to over-approximate them, at least in all intermediate symbolic compositions and up to some tunable computation load.
It must be taken into account that several operations in a s-polynotope formulation of Algorithm 1 such as bounding the projection of an s-polynotope onto a neuron, intersection/inclusion of an s-polynotope with an avoid/reach set or the reduction operator (in Step 9), in turn require the computation of interval bounds from (multivariate) interval polynomial expressions. If computationally affordable, the range bounds computed by a (simple and fast) interval extension may be refined either by iteratively bisecting the variables domain, or resorting to numerically reliable optimization-based methods.
## VI Input partitioning strategy
In general, the conservatism induced by abstraction-based verification tools strongly depends on the size of the initial set. It is thus extremely useful to assess the regions of the initial/input space for which meaningful (i.e. not too coarse) over-approximations of the closed-loop system evolution can be obtained. To that end, this section presents an algorithm to split the initial set of a NNCSs verification problem in a smartly guided way. More precisely, the proposed splitting strategy relies on and benefits from the dependency modeling and tracing used in Section IV. In particular, the algorithm assesses the sensitiveness of the initial/input directions on the satisfaction of a safety property by an s-zonotopic over-approximation through the analysis of the relative influence of the initial symbols. The principle of the algorithm is introduced in Section VI-A. Then, some relevant notions are detailed in section VI-B, whereas the algorithm pseudo-code for a RA problem implementation is reported in Section VI-C. Finally, some further discussion on different settings is presented.
### _Splitting principle_
The main idea of the proposed algorithm is to keep a linear increase in the number of subsets by only splitting, at each iteration, the sole initial/input symbol that has the greatest influence on the satisfaction of the safety property \(\mathcal{S}\) to be verified. To that end, notice that given any initial s-zonotope \(\mathcal{X}_{|s}(0)=\langle c_{0},R_{0},I\rangle_{|s}\) as defined in Section IV-A, then the successive computation of forward reachable sets returns over-approximating s-zonotopes structured as
\[\mathcal{X}_{|s}(t)=c_{f}+R_{f}s_{I}+G_{f}s_{J}, \tag{23}\]
where the matrix \(R_{f}\) (resp. \(G_{f}\)) reflects the impact of the initial (resp. error) symbols identified by \(I\) (resp. \(J\)) on the computed over-approximation at time \(t\). Typically, testing \(\mathcal{S}(\mathcal{X}_{|s}(t))\) boils down to a metric/size evaluation on the set-valued interpretation of \(\mathcal{X}_{|s}(t)\) (e.g. checking a preassigned threshold). Hence, due to the linearity of (23), the influence of each input symbol \(s_{i}\)\((i\in I)\) can be assessed using a metric that gauges the size of the generator (column of \(R_{f}\)) related to \(s_{i}\), whereas the accuracy of an s-zonotope approximation to evaluate \(\mathcal{S}\) can be determined by measuring the zonotope \(\langle 0,G_{f}\rangle\) spanned by the error symbols.
Therefore, at each iteration of the algorithm, an input s-zonotope \(\mathcal{X}_{|s}(0)\), such that the corresponding output/final s-zonotope does not satisfy \(\mathcal{S}\), is split into two new input s-zonotopes that are later evaluated on the satisfaction of \(\mathcal{S}\). The algorithm may run until the satisfaction of the safety property, or until the accuracy of the method (gauged through \(\langle 0,G_{f}\rangle\)) is below a certain threshold.
### _Relevant notions_
Some relevant notions for the s-zonotope based partitioning algorithm are discussed below.
#### Vi-B1 Accuracy assessment
considering the evaluation of a safety property for an s-zonotope of the form (23), the accuracy of the over-approximation can be assessed by gauging the zonotope \(\langle 0,G_{f}\rangle\) spanned by a (set-valued) interpretation of the error symbols. In particular, further implementations make use of the zonotope \(F\)-radius [34], that is, the Frobenius norm of the generators matrix \(\|G_{f}\|_{F}\), to reason upon the quality of the affine approximation.
#### Vi-B2 Input symbols relative influence
the sensitivity of an input symbol \(s_{i}\)\((i\in I)\) is computed based on the \(F\)-radius ratios of the I/O zonotopes spanned by \(\iota s_{i}\), that is, through the ratio \(\|R_{f}^{[i]}\|_{2}/\|R_{0}^{[i]}\|_{2}\), where \(R_{0}^{[i]}\) (and \(R_{f}^{[i]}\)) denote the columns
of \(R_{0}\) (and \(R_{f}\)) that multiply the symbol \(s_{i}\). This relation is used to quantify how a variation on \(s_{i}\) at the input s-zonotope \(\mathcal{X}_{|s}(0)\) affects the output s-zonotope.
#### Vi-B3 Symbol bisection
bisecting a unit interval symbol \(s_{i}\ (i\in I)\) is done by rewriting it as \(s_{i}\to 0.5+0.5s_{j}\) and \(s_{i}\to-0.5+0.5s_{k}\), where \(j=!(1)\) and \(k=!(1)\), thus generating two new s-zonotopes.
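The two ingredients of the splitting step admit a compact NumPy sketch (`fresh` is a hypothetical provider of one new identifier, and the columns of \(R_{0}\), \(R_{f}\) are assumed aligned with the identifiers in \(I\)):

```python
import numpy as np

def influence(R0, Rf, I):
    # VI-B2: ||R_f^[i]||_2 / ||R_0^[i]||_2 for each initial symbol s_i.
    return {i: np.linalg.norm(Rf[:, k]) / np.linalg.norm(R0[:, k])
            for k, i in enumerate(I)}

def bisect_symbol(c, R, I, i, fresh):
    # VI-B3: rewrite s_i as +0.5 + 0.5 s_j and -0.5 + 0.5 s_k, producing two
    # s-zonotopes whose interpretations together cover the original one.
    k = I.index(i)
    halves = []
    for offset in (+0.5, -0.5):
        c2 = c + offset * R[:, k]          # shift center by half the column
        R2 = R.copy()
        R2[:, k] = 0.5 * R[:, k]           # halve the generator of s_i
        I2 = list(I)
        I2[k] = fresh()                    # !(1): one new identifier
        halves.append((c2, R2, I2))
    return halves
```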
#### Vi-B4 Polyhedral RA sets
checking the empty intersection and/or the inclusion of a state bounding set with/within a polyhedron in half-space representation of the type \(\{h_{i}^{T}x\leq r_{i},\ i=1,...,m\}\) can be done by evaluating the infimum/supremum of the projections of the bounding set onto the directions \(h_{i}\in\mathbb{R}^{n_{x}}\)[35]. For a state bounding s-zonotope of the form (23), the supremum of the dot product with \(h\) is computed as
\[\sup_{x\in\mathcal{X}_{|s}(t)}h^{T}x=h^{T}c_{f}+\|h^{T}R_{f}\|_{1}+\|h^{T}G_{f} \|_{1}. \tag{24}\]
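Formula (24) and the resulting half-space tests can be sketched as follows (NumPy; the avoid/goal polyhedra are assumed given as pairs \((H,r)\) describing \(\{x\,|\,Hx\leq r\}\)):

```python
import numpy as np

def support(c, R, G, h):
    # (24): supremum of h^T x over the interpretation of c + R s_I + G s_J.
    return float(h @ c + np.abs(h @ R).sum() + np.abs(h @ G).sum())

def outside_avoid(c, R, G, H, r):
    # Empty intersection with {x | Hx <= r} is certified if, for some row i,
    # the infimum of h_i^T x exceeds r_i; note inf = -sup(-h^T x).
    return any(-support(c, R, G, -H[i]) > r[i] for i in range(len(r)))

def inside_goal(c, R, G, H, r):
    # Inclusion within {x | Hx <= r}: sup h_i^T x <= r_i for every row i.
    return all(support(c, R, G, H[i]) <= r[i] for i in range(len(r)))
```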
### _Algorithm implementation for finite-time RA_
Algorithm 2 reflects the pseudo-code of the proposed partitioning strategy to check the satisfaction of a RA problem over a time horizon \(N\). Algorithm 2 uses square brackets to label the different s-zonotopes that arise after splitting. As an example, \(\mathcal{X}_{|s}[0]\) reads as the initial s-zonotope (i.e. \(\mathcal{X}_{|s}[0]=\mathcal{X}_{|s}^{0}\)), which is split onto a second \(\mathcal{X}_{|s}[1]\) and a third \(\mathcal{X}_{|s}[2]\) s-zonotopes (with \(\mathcal{X}_{|s}[1]\cup\mathcal{X}_{|s}[2]=\mathcal{X}_{|s}[0]\)). Besides, \(L\) denotes a set of integer labels/indices (for the above example \(L=\{0,1,2\}\)), and \(\mathcal{X}_{|s}[L]\) is a shorthand for the set of s-zonotopes \(\{\mathcal{X}_{|s}[l]\ |\ l\in L\}\).
At each iteration, the routine reach runs a slightly modified version of Algorithm 1, that, in this case, returns the last time instant (and the corresponding s-zonotope) for which the RA problem is not satisfied. These times-to-last-error are managed by vector \(T\). The algorithm iteratively selects the label of the initial s-zonotope that yields the largest time-to-last-error (Step 14). The use of this backward management of the information that prioritizes to split until the RA constraints are satisfied at time \(t\), then at time \(t-1\), etc., will be further discussed in the next paragraph. Once the \(l\)-th (with \(l\in L\)) s-zonotope has been selected, sym-select returns the initial symbol identifier \(i\in I\) that has greater relative influence over the violated property. The symbol \(s_{i}\) of the \(l\)-th set is split by the routine sym-split that returns two new initial subsets (Step 6). The times-to-last-error for the new s-zonotopes are computed and the set \(L\) and vector \(T\) updated (Steps 7-11). In particular, Algorithm 2 runs either until the RA problem is satisfied for the whole set \(L\), or until a maximum number \(n_{max}\) of splits is reached.
```
0: same as Algorithm 1, \(n_{max}\)
0: isRAok, set of s-zonotopes \(\mathcal{X}_{|s}[L]\)
1:Initialize:\(l=n=0\); \(L=\{l\}\); \(\mathcal{X}_{|s}[0]=\mathcal{X}_{0}\)
2:\((t,\mathcal{X}_{|s}(t)[0])\leftarrow\texttt{reach}(\mathcal{X}_{|s}[0],N)\)
3:\(T\gets append(t)\)
4:while\((\max(T)>0)\ \wedge\ (n/2<n_{max})\)do
5:\(i\leftarrow\texttt{sym-select}(\mathcal{X}_{|s}[l],\mathcal{X}_{|s}(T(l))[l])\)
6:\((\mathcal{X}_{|s}[n+1],\mathcal{X}_{|s}[n+2])\leftarrow\texttt{sym-split}( \mathcal{X}_{|s}[l],i)\)
7:for\(j=1\) to \(2\)do
8:\((t,\mathcal{X}_{|s}(t)[n+j])\leftarrow\texttt{reach}(\mathcal{X}_{|s}[n+j], \max(T))\)
9:\(L\gets L\cup\{n+j\}\)
10:\(T\gets append(t)\)
11:endfor
12:\(L\gets L\setminus\{l\}\)
13:\(T(l)\gets 0\)
14:\(l\leftarrow\arg\max(T(L))\)
15:\(n\gets n+2\)
16:endwhile
17: isRAok = (\(\max(T)==0\))
```
**Algorithm 2** Input partitioning for finite-time RA
Algorithm 2 manages the information in a backward fashion, that is, it selects the s-zonotope with the highest time-to-last-error. This usually gives better results than working in a forward fashion (that is, selecting the s-zonotope with the lowest time-to-first-error), since it avoids getting stuck by exhaustively splitting until the satisfaction of a constraint at time \(t\), which may then have a small impact on the constraint satisfaction at \(t+1\). On this subject, the algorithm can be straightforwardly adapted to handle the forward case by directly using Algorithm 1 (instead of reach), using \(T(l)\gets N+1\) (instead of \(T(l)\gets 0\)) in Step 13, and selecting the minimum (instead of the maximum) of vector \(T\). Besides, the reduction operation used in Algorithm 1 must not truncate the initial symbols even if their relevance decreases with time, so that the input-output mapping of the symbols identified by \(I\) is preserved.
### _Other possible settings and applications_
Other choices for the proposed input partitioning strategy are as follows:
* The strategy in Algorithm 2 can be adapted to handle open-loop verification problems (e.g. elements like the NN in isolation). In this case, the reach routine will only compute the output s-zonotope for the isolated element for a number of forward steps \(N=1\).
* The maximum-number-of-splits stopping criterion in Algorithm 2 can be modified/complemented with a tolerance on the accuracy assessment (see Section VI-B). In other words, if the accuracy tolerance is fulfilled and a property is still violated, then the algorithm should stop to prevent further splitting, and the safety property is considered unsatisfied up to the accuracy tolerance.
* Another interesting application is to modify the s-zonotope split decision rule (Step 14 of Algorithm 2) to focus the split in those regions for which the accuracy of using an affine abstraction is low (i.e. high \(\|G_{f}\|_{F}\)). This tends to return a set of initial s-zonotopes such that each locally provides an accurate (affine) abstraction of the system behavior.
## VII Simulations
### _Benchmark description_
The numerical simulations consist of the discrete-time versions of some of the verification problems proposed in the ARCH-COMP 2021 [4]. Five dynamical systems are assessed,
namely: single pendulum (**S**), TORA (**T**), unicycle car (**C**), adaptive cruise control (**ACC**) and double pendulum (**D**). The above systems have been discretized using the forward Euler method with sampling period \(\Delta T\), and they are controlled by a NN controller with control period \(\Delta h\). The NN controllers are the ones provided in [4] for the continuous-time version of the models; to cope with this mismatch, the dynamical models have been analyzed under sampling times \(\Delta T(\leq\Delta h)\) chosen sufficiently small for the discretization to have a negligible impact on the model responses. In this context, the same safety constraints and initial conditions as the ones proposed in [4] have been re-used to set up the reported simulations. Note that, as discussed in Remark 3, the use of symbolic approaches supports the variation of \(\Delta T\) (for some \(\Delta h\)) without inducing conservatism due to a loss of dependencies between repeated control inputs. A detailed description of the system dynamics can be found in [4], whereas the main parameters that define each safety verification problem are shown in Table I.
### _Benchmark results using \(s\)-zonotopes_
All the results reported below were obtained on a standard laptop with Intel Core [email protected]\(\times\)4 processor and 16GB RAM running Windows 10. Table II shows the set of initial states, the safety constraints (with their time horizon), as well as the time required by an s-zonotope implementation to verify each problem. The reduction order is \(q=200\) in all the experiments. Some particularities are discussed below:
* _Single pendulum_ (**S**): in a discrete-time setting, the constraint \(x_{1}\in[0,\,1]\) is guaranteed to be satisfied for problem **S1** (with \(\Delta T=0.05\)s) for the time interval \(t\in[0.55,\,1]\) (that is, for samples \(\{11,...,20\}\)), whereas in **S2** (with \(\Delta T=0.001\)s) the constraint satisfaction is guaranteed for the time interval \(t\in[0.516,\,1]\).
* _TORA_ (**T**): in **T1**, the closed-loop system is not stable for the discrete-time model obtained for \(\Delta T=1\)s. In this case, an unambiguous constraint violation is achieved at \(t=3\) in \(0.036\)s. On the other hand, the closed-loop model obtained in **T2** and **T3** is stable, and the s-zonotope method verifies the satisfaction of the safety constraint in both problems without resorting to split the input set.
* _Unicycle car_ (**C**): the model under study considers the addition of an unknown-but-bounded disturbance \(w\in 10^{-4}[-1,\,+1]\) affecting the fourth state. The safety properties are verified for both **C1** and **C2**. In particular, Figure 2 shows the envelope computed for **C2** in the time interval \(t\in[0,\,10]\) and how the outer-approximation lies within the goal set at \(t=10\).
* _Adaptative cruise control_ (**ACC**): both problems **ACC1** and **ACC2** are verified for the given time horizon.
* _Double pendulum_ (**D**): the sets of constraints in problems **D1-2** are violated by the closed-loop system. An unambiguous constraint violation is achieved for **D1** at \(t=0.25\) and for **D2** at \(t=0.278\). On the other hand, problem **D3** cannot be verified from a simple affine abstraction: the accumulated error indeed grows along the reachability analysis of **D3**, making it impossible to guarantee either the satisfaction of the constraints or their unambiguous violation, and thus motivating further extensions.
The results presented above show how, despite their low computational complexity, s-zonotopes yield a high performance in NNCS verification, being able to verify almost all the benchmark problems without splitting the input set. The scalability of this approach is also remarkable. As an example, for problem **T3** with \(\Delta T=0.001\)s, \(\Delta h=1\)s and time horizon \(t\in[0,20]\), the proposed tool only requires \(1.515\)s to compute and assess \(N=20\mathrm{s}/\Delta T=20{,}000\) forward iterations.
### _Use of s-polynotopes_
The capability of s-polynotopes to capture the non-convex map of NNs is illustrated below. To that end, the set of randomly generated neural networks used in [17] is analyzed. All the NNs consist of 2 inputs and 2 outputs, and they differ in the number of hidden layers and neurons per layer. The
Fig. 2: Problem **C2**: framed zonotopes represent the computed bounds at each \(\Delta h=.2\)s; blurred lines represent the bounds update at \(\Delta T=.001\)s.
first four NNs present \(l=\{1,2,3,4\}\) hidden layers, each having \(n_{k}=100\) ReLU neurons per layer. The examined NN input set is \(\mathcal{X}_{0}=[0.9,\,1.1]\times[0.9,\,1.1]\). Figure 3 shows the set-valued interpretation of the output bounding s-polynotopes obtained by abstracting the activation functions of active neurons with second order polynomials, i.e. with \(n=2\) in (22). The computation times are \(\{0.178,0.240,2.021,3.329\}\)s for the NNs with \(l=1\) to \(l=4\) hidden layers, respectively. Similarly, another set of NNs with \(l=\{7,8,9,10\}\) hidden layers and \(n_{k}=10\) ReLU neurons per layer is evaluated for the same input set. Figure 4 represents the interpretation of the resulting s-polynotopes, which are computed in \(\{0.2389,0.108,0.786,0.155\}\)s, respectively. These examples (Figure 3 and Figure 4), taken from [17], illustrate the ability of s-polynotope composition to accurately generate inclusion preserving polynomial I/O mappings of NNs. As a byproduct, an efficient implicit description of possibly non-convex output sets is obtained.
### _Partitioning strategy_
Firstly, in order to show the performance of the partitioning algorithm, it is applied to the open-loop robotic arm example used in [12, 13]. In particular, the non-linear dynamics of a 2 DOF robot arm are modeled by a \((2,5,2)\) NN with \(tanh\) activations. The considered set of joint angles is extended to \((\theta_{1},\theta_{2})\in[\frac{\pi}{3},\frac{4\pi}{3}]^{2}\). An implementation of Algorithm 2, adapted to analyze the NN in isolation, is executed in order to iteratively minimize the \(F\)-radius1 of the zonotope spanned by the error symbols (\(\|G_{f}\|_{F}\)) for a fixed number of \(n_{max}=400\) splits. The computation time of the algorithm is \(0.097\)s. Figure 5(b) shows the resulting pattern of \(401\) input subsets, whereas Figure 5(a) represents the corresponding s-zonotope interpretations obtained in the output space, together with an exhaustive evaluation of the NN (blue dots). This latter figure shows how Algorithm 2 achieves an accurate description of the non-convex output set by focusing the splitting effort on those regions of the input space for which the affine abstraction granted by s-zonotopes is not accurate enough.
Footnote 1: The \(F\)-radius of a zonotope is the Frobenius norm of its generator matrix (see Definition 3 in [34]).
Furthermore, considering the initial set \((\theta_{1},\theta_{2})\in[\frac{\pi}{3},\frac{2\pi}{3}]^{2}\), Algorithm 2 is set to split up to the satisfaction of the safety constraint \(y_{1}\leq d\) (where \(y_{1}\) denotes the first output). Table III reflects the number of splits and the time required by Algorithm 2 to satisfy the above safety constraint for different values of \(d\). Besides, Table III also shows, for a fixed number of splits, the number of existing possible combinations of set selections and symbols bisections, as well as how many among them are able to satisfy the property. As an example, for \(d=1.2\), Algorithm 2 requires \(8\) splits. For the same
Fig. 4: NNs with 10 neurons per layer and \(l=\{7,8,9,10\}\) hidden layers. Set-valued interpretation of the over-approximating s-polynotope (red set); exhaustive evaluation of the NNs (blue dots).
Fig. 3: NNs with 100 neurons per layer and \(l=\{1,2,3,4\}\) hidden layers. Set-valued interpretation of the over-approximating s-polynotope (red set); exhaustive evaluation of the NNs (blue dots).
problem, there are no possible combinations of fewer than 7 splits for which the property can be proven; there exist 18 out of \(5.491\cdot 10^{4}\) possibilities that satisfy it with 7 splits (\(0.032\%\)); and 336 out of \(3.66\cdot 10^{5}\) possibilities that satisfy it with 8 splits (\(0.0918\%\)).
Regarding the closed-loop examination of Algorithm 2, it is applied to assess the satisfaction of problem **D3**, which cannot be verified by a simple s-zonotope abstraction. To that end, Algorithm 2 is set to split up to the satisfaction of the safety constraints in Table II. The algorithm requires a total of 19 splits (i.e. 20 subsets) computed in \(5.12\)s. Figure 6 shows the time evolution of the interval enclosure of the resulting 20 reachable sets (light blue background), together with 50 random simulations of the closed-loop system (blue dots).
## VIII Conclusions
A compositional approach focused on inclusion preserving long term symbolic dependency modeling is introduced in this work for the analysis of NNCSs, where such long term is to be understood both in time iterations (regarding the controlled system dynamics) and in layer iterations (regarding the sole NNs). This results in a generic method that has been developed in several ways. Firstly, the matrix structure of s-zonotopes enables the computation of (fast and simple) affine symbolic mappings to abstract the I/O mapping of the control loop components. Two further extensions are also proposed: the use of s-polynotopes to compute inclusion preserving polynomial mappings capable of accurately describing the non-convex map of NNs, and an input partitioning algorithm that benefits from the ability granted by s-zonotopes to preserve linear dependencies between the loop elements. Simulations show the comparative efficiency of the proposals and support the prevalence of dependency preserving methods for closed-loop analysis over the use of accurate, but dependency breaking, output bounding verification tools. Future works should address the integration with the analysis of continuous-time dynamical systems, as well as the study of optimized affine/polynomial abstractions for achieving better performance in verifying specific safety properties.
## Acknowledgments
The authors would like to thank Prof. Vicenc Puig (UPC, Barcelona, Spain) who initially prompted this collaboration and his support during the research stay of the first author at the University of Bordeaux. This work was in part supported by the Margarita Salas grant from the Spanish Ministry of Universities funded by the European Union NextGenerationEU.
Fig. 5: Robot arm example: input partitioning for \((\theta_{1},\theta_{2})\in[\frac{\pi}{3},\frac{4\pi}{3}]^{2}\); yellow-green colors characterize the corresponding I/O set pairs; exhaustive evaluation of the NN (blue dots).
Fig. 6: Input partitioning problem **D3**: interval enclosure of the resulting 20 reachable sets (light blue); random simulations (blue dots).
## Appendix A Proof of the Lemma 3
Define \(\alpha=\frac{h(u)-h(l)}{u-l}\) and form \(\xi(x)=h(x)-\alpha x\), which is continuous on \([l,\,u]\), differentiable on \((l,\,u)\) and satisfies \(\xi(l)=\xi(u)=0\). Then, the maximum \(\xi(\bar{x})\) (resp. minimum \(\xi(\underline{x})\)) of \(\xi(x)\) on \([l,\,u]\) must be attained for \(\bar{x}\) (resp. \(\underline{x}\)) at the boundary points \(\{l,u\}\) or at its stationary points, denoted as \(\{\delta_{1},...,\delta_{n}\}\), that is, the solutions of \(\xi^{\prime}(x)=0\Leftrightarrow h^{\prime}(x)=\alpha\).
Given \(\xi(\bar{x})\) and \(\xi(\underline{x})\), then for all \(x\in[l,\,u]\), \(\xi(\underline{x})\leq\xi(x)\leq\xi(\bar{x})\implies\underline{y}(x)\leq h(x)\leq\bar{y}(x)\), with \(\underline{y}(x)=\alpha(x-\underline{x})+h(\underline{x})\) and \(\bar{y}(x)=\alpha(x-\bar{x})+h(\bar{x})\). Thus, it follows that \(\forall x\in[l,\,u],\exists\epsilon\in[-1,+1]\) such that
\[h(x)=\frac{\bar{y}(x)+\underline{y}(x)}{2}+\frac{\bar{y}(x)-\underline{y}(x) }{2}\epsilon\]
or equivalently \(h(x)=\tilde{h}(x,\epsilon)=\alpha x+\beta+\gamma\epsilon\) with
\[\beta =\frac{\bar{y}(x)+\underline{y}(x)}{2}-\alpha x=\frac{h( \underline{x})+h(\bar{x})-\alpha(\underline{x}+\bar{x})}{2},\] \[\gamma =\frac{\bar{y}(x)-\underline{y}(x)}{2}=\frac{h(\bar{x})-h( \underline{x})+\alpha(\underline{x}-\bar{x})}{2}.\]
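The construction above is easy to exercise numerically. The sketch below, assuming a dense grid is acceptable for locating the extrema of \(\xi\) (the helper name `affine_abstraction` and the grid size are illustrative choices, not from the paper), computes the triplet \((\alpha,\beta,\gamma)\) of Lemma 3 for \(h=\tanh\) and checks the enclosure:

```python
import numpy as np

def affine_abstraction(h, l, u, n=100_001):
    """Numerically compute (alpha, beta, gamma) of Lemma 3 so that
    h(x) lies in alpha*x + beta + gamma*[-1, 1] on [l, u]."""
    alpha = (h(u) - h(l)) / (u - l)        # secant slope
    x = np.linspace(l, u, n)
    xi = h(x) - alpha * x                  # xi(l) = xi(u) = 0
    x_hi, x_lo = x[np.argmax(xi)], x[np.argmin(xi)]
    beta = (h(x_lo) + h(x_hi) - alpha * (x_lo + x_hi)) / 2
    gamma = (h(x_hi) - h(x_lo) + alpha * (x_lo - x_hi)) / 2
    return alpha, beta, gamma

alpha, beta, gamma = affine_abstraction(np.tanh, -1.0, 2.0)
xs = np.linspace(-1.0, 2.0, 1001)
assert np.all(np.abs(np.tanh(xs) - (alpha * xs + beta)) <= gamma + 1e-6)
```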
## Appendix B Proof of Proposition 2
_Inclusion preservation_: the sign criterion \(\gamma\geq 0\) is chosen below. Given \(\bar{y}(x)=\alpha_{2}x^{2}+\alpha_{1}x+\beta+\gamma\) and \(\underline{y}(x)=\alpha_{2}x^{2}+\alpha_{1}x+\beta-\gamma\), then parameters \((\alpha_{2},\alpha_{1},\beta,\gamma)\) ensure local coverage of \(\varphi(x)\) if \(\underline{y}(x)\leq\varphi(x)\leq\bar{y}(x),\forall x\in[l,\,u]\).
Consider firstly the scenario \(|l|\leq u\leq 2|l|\). In this case, \(\alpha_{2}=\frac{1}{2u}>0\) and thus \(\underline{y}(x),\bar{y}(x)\) are strictly convex.
On the one hand, \(\beta=\gamma\) and \(\alpha_{1}=1-\alpha_{2}u\) impose that \(\underline{y}(0)=0=\varphi(0)\) and \(\underline{y}(u)=u=\varphi(u)\), whereas \(\underline{y}(l)=\frac{1}{2}(\frac{l^{2}}{u}+l)\leq 0=\varphi(l)\) for \(u\geq|l|\) and \(l<0\). Therefore, from \(\underline{y}(l)\leq\varphi(l)\), \(\underline{y}(0)=\varphi(0)\), \(\underline{y}(u)=\varphi(u)\) and the convexity of \(\underline{y}(x)\) wrt \(x\), it follows that \(\underline{y}(x)\leq\varphi(x),\forall x\in[l,\,u]\).
On the other hand, from \(\alpha_{1}=1-\alpha_{2}u\) and \(\beta=\gamma=\frac{\alpha_{2}u^{2}}{8}\), then \(\bar{y}(x)\) is tangent to the positive region of \(\varphi(x)\) in \(\hat{x}=\frac{u}{2}\) (that is, \(\bar{y}(\hat{x})=\varphi(\hat{x})=\hat{x}\) and \(\bar{y}^{\prime}(\hat{x})=\varphi^{\prime}(\hat{x})=1\)), and thus, since \(\bar{y}(x)\) is convex, it follows that \(\bar{y}(x)\geq\varphi(x),\forall x\geq 0\). Additionally, for \(\alpha_{2}=\frac{1}{2u}\) the (global) minimum of \(\bar{y}(x)\) is \(\bar{y}(x^{*})=0\) for \(x^{*}=\frac{-u}{2}\) (that is, \(\bar{y}^{\prime}(x^{*})=0\)), and thus \(\bar{y}(x)\geq\varphi(x),\forall x\leq 0\).
A similar reasoning can be used to show the inclusion preservation for the scenario \(u<|l|\leq 2u\).
_Conservatism reduction:_ For the scenario \(|l|\leq u\leq 2|l|\), the parameter \(\gamma\) has the value \(\gamma=\frac{\alpha_{2}u^{2}}{8}=\frac{u}{16}\). On the other hand, for a ReLU function \(\varphi(x)=max(0,x)\) the triplet for an affine abstraction in Lemma 2 yields \(\gamma^{*}_{aff}=\frac{u|l|}{2(u+|l|)}\). Therefore, for \(u\leq 2|l|\) the following inequality is obtained
\[\gamma^{*}_{aff}=\frac{u|l|}{2(u+|l|)}\geq\frac{u|l|}{2(2|l|+|l|)}=\frac{u}{6} >\frac{u}{16}=\gamma\]
and thus \(\gamma\leq\frac{3}{8}\gamma^{*}_{aff}\sim|\gamma|\leq\frac{3}{8}|\gamma^{*}_{aff}|\) (since \(\gamma,\gamma^{*}_{aff}>0\)). A similar reasoning can be used to prove the case \(u<|l|\leq 2u\).
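A quick numerical check of the coverage and conservatism claims is given below for the scenario \(|l|\leq u\leq 2|l|\) (the helper name and the sample interval are illustrative assumptions):

```python
import numpy as np

def relu_quadratic_triplet(l, u):
    """Parameters (alpha2, alpha1, beta, gamma) of Proposition 2
    for a ReLU on [l, u] in the scenario |l| <= u <= 2|l|."""
    assert l < 0 < u and abs(l) <= u <= 2 * abs(l)
    a2 = 1.0 / (2.0 * u)
    a1 = 1.0 - a2 * u                      # = 1/2
    beta = gamma = a2 * u ** 2 / 8.0       # = u/16
    return a2, a1, beta, gamma

l, u = -3.0, 4.0
a2, a1, beta, gamma = relu_quadratic_triplet(l, u)
x = np.linspace(l, u, 2001)
relu = np.maximum(0.0, x)
lower = a2 * x ** 2 + a1 * x + beta - gamma
upper = a2 * x ** 2 + a1 * x + beta + gamma
assert np.all(lower <= relu + 1e-9) and np.all(relu <= upper + 1e-9)
# gamma = u/16 is well below the affine abstraction value
assert gamma <= 3.0 / 8.0 * u * abs(l) / (2 * (u + abs(l)))
```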
|
2306.15513 | PASNet: Polynomial Architecture Search Framework for Two-party
Computation-based Secure Neural Network Deployment | Two-party computation (2PC) is promising to enable privacy-preserving deep
learning (DL). However, the 2PC-based privacy-preserving DL implementation
comes with high comparison protocol overhead from the non-linear operators.
This work presents PASNet, a novel systematic framework that enables low
latency, high energy efficiency & accuracy, and security-guaranteed 2PC-DL by
integrating the hardware latency of the cryptographic building block into the
neural architecture search loss function. We develop a cryptographic hardware
scheduler and the corresponding performance model for Field Programmable Gate
Arrays (FPGA) as a case study. The experimental results demonstrate that our
light-weighted model PASNet-A and heavily-weighted model PASNet-B achieve 63 ms
and 228 ms latency on private inference on ImageNet, which are 147 and 40 times
faster than the SOTA CryptGPU system, and achieve 70.54% & 78.79% accuracy and
more than 1000 times higher energy efficiency. | Hongwu Peng, Shanglin Zhou, Yukui Luo, Nuo Xu, Shijin Duan, Ran Ran, Jiahui Zhao, Chenghong Wang, Tong Geng, Wujie Wen, Xiaolin Xu, Caiwen Ding | 2023-06-27T14:38:05Z | http://arxiv.org/abs/2306.15513v1 | PASNet: Polynomial Architecture Search Framework for Two-party Computation-based Secure Neural Network Deployment
###### Abstract
Two-party computation (2PC) is promising to enable privacy-preserving deep learning (DL). However, the 2PC-based privacy-preserving DL implementation comes with high comparison protocol overhead from the non-linear operators. This work presents PASNet, a novel systematic framework that enables low latency, high energy efficiency & accuracy, and security-guaranteed 2PC-DL by integrating the hardware latency of the cryptographic building block into the neural architecture search loss function. We develop a cryptographic hardware scheduler and the corresponding performance model for Field Programmable Gate Arrays (FPGA) as a case study. The experimental results demonstrate that our light-weighted model PASNet-A and heavily-weighted model PASNet-B achieve 63 ms and 228 ms latency on private inference on ImageNet, which are 147 and 40 times faster than the SOTA CryptGPU system, and achieve 70.54% & 78.79% accuracy and more than 1000 times higher energy efficiency. The pretrained PASNet models and test code can be found on Github1.
Footnote 1: [https://github.com/HarveyP123/PASNet-DAC2023](https://github.com/HarveyP123/PASNet-DAC2023)
Privacy-Preserving in Machine Learning, Multi Party Computation, Neural Architecture Search, Polynomial Activation Function, Software/Hardware Co-design, FPGA
## I Introduction
Machine-Learning-As-A-Service (MLaaS) has been an emerging solution nowadays, to provide accelerated inference for diverse applications. However, most MLaaS require clients to reveal the raw input to the service provider [1] for evaluation, which may leak the privacy of users. Privacy-preserving deep learning (PPDL) and private inference (PI) have emerged to protect sensitive data in deep learning (DL). The current popular techniques include multi-party computation (MPC) [2] and homomorphic encryption (HE) [3]. HE is mainly used to protect small to medium-scale DNN models without involving costly bootstrapping and large communication overhead. MPC protocols such as secret-sharing [2] and Yao's Garbled Circuits (GC) [4] can support large-scale networks by evaluating operator blocks. This work mainly focuses on secure two-party computation (2PC), which represents the minimized system for multi-party computing (MPC) and is easy to extend [5].
The primary challenge in 2PC-based PI is the comparison protocol overhead [6] for non-linear operators. As shown in Fig. 1, ReLU contributes over 99% of latency in a ciphertext setting for deep neural network (DNN), despite negligible overhead in plaintext. Replacing ReLU with second-order polynomial activation could yield 50\(\times\) speedup.
To achieve high performance, good scalability, and high energy efficiency for _secure_ deep learning systems, two orthogonal research directions have attracted enormous interest. The first one is nonlinear-operation overhead reduction algorithms. Existing works focus on _ReLU cost optimization_, e.g., minimizing ReLU counts (DeepReDuce [7], CryptoNAS [8]) or replacing ReLUs with polynomials (CryptoNets [9], Delphi [10], SAFENet [11]), and _extremely low-bit_ weights and activations (e.g., Binary Neural Network (BNN) [12]). However, these works neglect the accuracy impact. They often sacrifice the model's comprehension capability, resulting in severe accuracy loss on large networks and datasets such as ImageNet, and hence are not scalable. The second trend is _hardware acceleration for PI_ to speed up MPC-based DNNs through GPUs [2, 13]. Since no hardware characteristic is captured during DNN design, this top-down ("algorithm \(\rightarrow\) hardware") approach cannot effectively perform design space
Fig. 1: Latency of operators under 2PC PI setup. Network bandwidth: 1 GB/s. Device: ZCU104. Dataset: ImageNet.
exploration, resulting in sub-optimal solutions.
We focus on three observations: 1) preserving **prediction accuracy** for substantial benefits; 2) scalable **cryptographic overhead reduction** for various network sizes; 3) cohesive **algorithm/hardware optimizations** using closed loop "algorithm \(\leftrightarrow\) hardware" with design space exploration capturing hardware characteristics.
We introduce the **Polynomial Architecture Search (PASNet)** framework, which jointly optimizes DNN model structure and hardware architecture for high-performance MPC-based PI. Considering cryptographic DNN operators, data exchange, and factors like encoding format, network speed, hardware architecture, and DNN structure, PASNet effectively enhances the performance of MPC-based PI.
Our key design principle is to _enforce_ exactly what is assumed in the DNN design--training a DNN that is both hardware efficient and secure while maintaining high accuracy.
To evaluate the effectiveness of our framework, we use FPGA accelerator design as a demonstration due to its predictable performance, low latency, and high energy efficiency for MLaaS applications (e.g., Microsoft Azure [14]). We summarize our contributions as follows:
1. We propose a trainable _straight through polynomial activation initialization_ method for cryptographic hardware-friendly trainable polynomial activation function to replace the expensive ReLU operators.
2. Cryptographic hardware scheduler and the corresponding performance model are developed for the FPGA platform. The latency look-up table is constructed.
3. We propose a differentiable cryptographic hardware-aware NAS framework to selectively choose the proper polynomial or non-polynomial activation based on given constraint and latency of cryptographic operators.
## II **Basics of Cryptographic Operators**
### **Secret Sharing**
**2PC setup.** We consider a scheme similar to [5], involving two semi-honest servers in an MLaaS application, where the two servers receive confidential inputs from each other and invoke a two-party computation protocol for secure evaluation.
**Additive Secret Sharing.** In this work, we evaluate 2PC secret sharing. As a symbolic representation, for a secret value \(x\in\mathbb{Z}_{m}\), \(\llbracket x\rrbracket\leftarrow(x_{S_{0}},x_{S_{1}})\) denotes the two shares, where \(x_{S_{i}},i\in\{0,1\}\) belong to server \(S_{i}\). Other notations are as below:
* _Share Generation_\(\text{shr}(x)\): A random value \(r\) in \(\mathbb{Z}_{m}\) is sampled, and shares are generated as \(\llbracket x\rrbracket\leftarrow(r,x-r)\).
* _Share Recovering_\(\text{rec}(\llbracket x\rrbracket)\): Given shares \(\llbracket x\rrbracket\leftarrow(x_{S_{0}},x_{S_{1}})\), it computes \(x\gets x_{S_{0}}+x_{S_{1}}\) to recover \(x\).
An example of plaintext vs. secret-sharing-based ciphertext evaluation is given in Fig. 2, where the ring size is 4 bits and \(\mathbb{Z}_{m}=\{-8,-7,...,7\}\). The integer overflow mechanism naturally ensures the correctness of the ciphertext evaluation. The evaluation in the example involves secure multiplication, addition and comparison; details are given in the following sections.
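The share-generation and recovery primitives, together with the local addition used in the example of Fig. 2, can be sketched in a few lines (a toy, insecure illustration using Python's `random`; the 4-bit ring matches the figure):

```python
import random

M = 2 ** 4                      # ring size: Z_16, i.e. 4-bit two's complement

def to_signed(x):               # interpret a ring element as a signed 4-bit value
    return x - M if x >= M // 2 else x

def shr(x):                     # share generation shr(x)
    r = random.randrange(M)
    return r, (x - r) % M       # shares held by S0 and S1

def rec(s0, s1):                # share recovery rec([x])
    return to_signed((s0 + s1) % M)

x0, x1 = shr(5 % M)
y0, y1 = shr(-3 % M)
z0, z1 = (x0 + y0) % M, (x1 + y1) % M   # addition is purely local
assert rec(z0, z1) == 2
```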
### **Polynomial Operators Over Secret-Shared Data**
**Scaling and Addition.** We denote secret shared matrices as \(\llbracket X\rrbracket\) and \(\llbracket Y\rrbracket\). The encrypted evaluation is given in Eq. 1.
\[\llbracket aX+Y\rrbracket\leftarrow(aX_{S_{0}}+Y_{S_{0}},aX_{S_{1}}+Y_{S_{1}}) \tag{1}\]
**Multiplication.** We consider matrix multiplicative operations \(\llbracket R\rrbracket\leftarrow\llbracket X\rrbracket\otimes\llbracket Y\rrbracket\) in the secret-sharing pattern, where \(\otimes\) is a general multiplication, such as the Hadamard product, matrix multiplication, or convolution. We use an oblivious transfer (OT) [15] based approach. To make the multiplicative computation secure, an extra Beaver triple [16] should be generated as \(\llbracket Z\rrbracket=\llbracket A\rrbracket\otimes\llbracket B\rrbracket\), where \(A\) and \(B\) are randomly initialized. Specifically, their secret shares are denoted as \(\llbracket Z\rrbracket=(Z_{S_{0}},Z_{S_{1}})\), \(\llbracket A\rrbracket=(A_{S_{0}},A_{S_{1}})\), and \(\llbracket B\rrbracket=(B_{S_{0}},B_{S_{1}})\). Later, two matrices are derived from the given shares, \(E_{S_{i}}=X_{S_{i}}-A_{S_{i}}\) and \(F_{S_{i}}=Y_{S_{i}}-B_{S_{i}}\), at each party's end separately. The intermediate shares are jointly recovered as \(E\leftarrow\text{rec}(\llbracket E\rrbracket)\) and \(F\leftarrow\text{rec}(\llbracket F\rrbracket)\). Finally, each party, i.e., server \(S_{i}\), calculates the secret-shared \(R_{S_{i}}\) locally:
\[R_{S_{i}}=-i\cdot E\otimes F+X_{S_{i}}\otimes F+E\otimes Y_{S_{i}}+Z_{S_{i}} \tag{2}\]
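The online phase of Eq. 2 is straightforward to emulate. The sketch below uses element-wise products on a 32-bit ring, with a locally generated triple standing in for the offline phase (variable names and the dealer shortcut are illustrative; this is not a secure implementation):

```python
import numpy as np

M = 2 ** 32                               # 32-bit ring

def shr(x):
    r = np.random.randint(0, M, size=x.shape, dtype=np.uint64)
    return r, (x - r) % M

def rec(s0, s1):
    return (s0 + s1) % M

# offline phase (emulated dealer): Beaver triple Z = A * B
A = np.random.randint(0, M, size=4, dtype=np.uint64)
B = np.random.randint(0, M, size=4, dtype=np.uint64)
A0, A1 = shr(A); B0, B1 = shr(B); Z0, Z1 = shr(A * B % M)

# online phase for [R] <- [X] * [Y]
X = np.arange(4, dtype=np.uint64)
Y = np.arange(10, 14, dtype=np.uint64)
X0, X1 = shr(X); Y0, Y1 = shr(Y)
E = rec((X0 - A0) % M, (X1 - A1) % M)     # jointly recovered masks
F = rec((Y0 - B0) % M, (Y1 - B1) % M)
R0 = (X0 * F + E * Y0 + Z0) % M           # i = 0: the -i*E*F term vanishes
R1 = (X1 * F + E * Y1 + Z1 - E * F) % M   # i = 1
assert np.array_equal(rec(R0, R1), X * Y % M)
```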
**Square.** For the element-wise square operator shown \(\llbracket R\rrbracket\leftarrow\llbracket X\rrbracket\otimes \llbracket X\rrbracket\), we need to generate a Beaver pair \(\llbracket Z\rrbracket\) and \(\llbracket A\rrbracket\) where \(\llbracket Z\rrbracket=\llbracket A\rrbracket\otimes\llbracket A\rrbracket\), and \(\llbracket A\rrbracket\) is randomly initialized. Then parties evaluate \(\llbracket E\rrbracket=\llbracket X\rrbracket-\llbracket A\rrbracket\) and jointly recover \(E\leftarrow\text{rec}(\llbracket E\rrbracket)\). The result \(R\) can be obtained through Eq. 3.
\[R_{S_{i}}=Z_{S_{i}}+2E\otimes A_{S_{i}}+i\cdot E\otimes E \tag{3}\]
where the \(i\cdot E\otimes E\) term (with party index \(i\in\{0,1\}\)) follows the same convention as the \(-i\cdot E\otimes F\) term in Eq. 2, so that the recovered shares sum to \(X\otimes X\).
### **Non-Polynomial Operator Modules**
Non-polynomial operators such as ReLU and MaxPool are evaluated using secure comparison protocol.
**Secure 2PC Comparison.** The 2PC comparison, a.k.a. the millionaires' problem, determines whose held value is larger without disclosing the exact values to each other. We adopt the protocol of [6] for 2PC comparison. Detailed modeling is given in Section III-C.
Fig. 2: A example of 4 bit plaintext vs. ciphertext evaluation.
## III **The PASNet Framework**
The framework (Fig. 3) takes inputs like optimization target, hardware pool, network information, and 2PC operator candidates for cryptographic operator modeling, benchmarking, and automated design space optimization in PI using hardware-aware NAS. This section presents a new cryptographic-friendly activation function, its initialization method, DNN operator modeling under 2PC, and a hardware-aware NAS framework for optimizing DNN accuracy and latency. While evaluated on FPGA accelerators, the method can be easily adapted to other platforms like mobile and cloud.
### **Trainable \(X^{2}act\) Non-linear Function.**
We use a hardware-friendly trainable second-order polynomial activation function as a non-linear function candidate, shown in Eq. 4, where \(w_{1}\), \(w_{2}\) and \(b\) are all trainable parameters. We propose the _straight through polynomial activation initialization_ (**STPAI**) method, which initializes \(w_{1}\) and \(b\) to be small enough and \(w_{2}\) to be near 1 in Eq. 4.
\[\delta(x)=\frac{c}{\sqrt{N_{x}}}w_{1}x^{2}+w_{2}x+b \tag{4}\]
**Convergence.** Layer-wise second-order polynomial activation functions preserve the convexity of a single-layer neural network [17]. A higher-order polynomial activation function, or the channel-wise fine-grained polynomial replacement proposed in SAFENet [11], may destroy the neural network's convexity and lead to deteriorated performance.
**Learning rate.** The gradient of \(w_{1}\) must be balanced to match the update speed of the other model weights. As such, we add a scaling factor \(\frac{c}{\sqrt{N_{x}}}\) in front of the \(w_{1}\) parameter, where \(c\) is a constant and \(N_{x}\) is the number of elements in the feature map.
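A PyTorch rendering of Eq. 4 with the STPAI initialization could look as follows (the class name, the small \(\epsilon\) used to initialize \(w_{1}\) and \(b\), and the per-layer \(N_{x}\) handling are our own illustrative choices):

```python
import math
import torch
import torch.nn as nn

class X2Act(nn.Module):
    """Trainable polynomial activation delta(x) = (c/sqrt(N_x))*w1*x^2 + w2*x + b."""
    def __init__(self, num_elements, c=1.0, eps=1e-4):
        super().__init__()
        self.scale = c / math.sqrt(num_elements)   # balances the w1 gradient
        # STPAI: start near the identity map so finetuning is stable
        self.w1 = nn.Parameter(torch.tensor(eps))
        self.w2 = nn.Parameter(torch.tensor(1.0))
        self.b = nn.Parameter(torch.tensor(eps))

    def forward(self, x):
        return self.scale * self.w1 * x * x + self.w2 * x + self.b

act = X2Act(num_elements=64 * 32 * 32)     # e.g. a 64x32x32 feature map
y = act(torch.randn(1, 64, 32, 32))
```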
### **Search Space of Hardware-aware NAS.**
We focus on convolutional neural networks (CNNs) in our study. CNNs are mostly composed of Conv-Act-Pool and Conv-Act blocks. In this work, we use a regular backbone model as the search baseline, such as the VGG family, MobileNetV3, and the ResNet family. Each layer of the supernet is composed of the layer structure obtained from the baseline and its possible combinations with \(X^{2}act\) and \(Pool_{a}\) replacement. A toy example is shown in Fig. 3, where a two-layer supernet is constructed; the first layer is Conv-Act-Pool and the second layer is Conv-Act. The first layer has four combinations: Conv-ReLU-Pool\({}_{\text{m}}\), Conv-ReLU-Pool\({}_{\text{a}}\), Conv-\(X^{2}act\)-Pool\({}_{\text{m}}\), and Conv-\(X^{2}act\)-Pool\({}_{\text{a}}\). The second layer has two combinations: Conv-ReLU and Conv-\(X^{2}act\). The Conv block's parameters can be either shared among candidates or trained separately during the search.
### **Operator Modeling and Latency Analysis**
This section analyzes five different operators: 2PC-ReLU, 2PC-\(X^{2}act\), 2PC-MaxPool, 2PC-AvgPool, and 2PC-Conv. The non-polynomial operators require \((1,n)\)-OT (denoted as the **OT flow** block) to implement the 2PC comparison flows. Batch normalization can be fused into the convolution layer and is not listed separately.
#### Iii-C1 2PC-OT Processing Flow.
While the OT-based comparison protocol has been discussed in [15], we hereby provide further communication details as shown in Fig. 4. Assume both servers have a shared prime number \(m\), one generator (\(g\)) selected from the finite space \(\mathbb{Z}_{m}\), and an **index** list of length \(L\). As we split values into 2-bit parts, the length of the **index** list is \(L=2^{2}=4\).
**1 Server 0** (\(S_{0}\)) generates a random integer \(rd_{S_{0}}\), computes the mask value \(S=g^{rd_{S_{0}}}\ mod\ m\), and then shares
Fig. 4: Processing Steps of **2PC-OT** flow.
Fig. 3: Overview of PASNet framework for 2PC DNN based private inference setup.
\(S\) with Server 1 (\(S_{1}\)). We only need to consider the communication latency \(COMM_{1}=T_{bc}+\frac{32}{Rtb_{bw}}\), since the computation latency \(CMP_{1}\) is negligible.
**2 Server 1** (\(S_{1}\)) receives \(S\), generates the \(\mathbf{R}\) list based on \(S_{1}\)'s 32-bit dataset \(\mathbf{M_{1}}\), and then sends it to \(S_{0}\). Each element of \(\mathbf{M_{1}}\) is split into \(U=16\) parts, so each part has 2 bits. Assume the input feature map is square with side \(FI\), let \(IC\) denote the number of input channels, and let \(PP\) denote the computational parallelism. Then \(CMP_{2}\) is modeled as Eq. 5, and \(COMM_{2}\) as Eq. 6.
\[CMP_{2}=\frac{32\times 17\times FI^{2}\times IC}{PP\times freq} \tag{5}\]
\[COMM_{2}=T_{bc}+\frac{32\times 16\times FI^{2}\times IC}{Rtb_{bw}} \tag{6}\]
**3 Server 0** (\(S_{0}\)), having received \(\mathbf{R}\), first generates the encryption key \(\mathbf{key_{0}}(y,u)=\mathbf{R}(y,u)\oplus(S^{b2d(\mathbf{M_{1}}(y,u))+1}\ mod\ m)^{rd_{S_{0}}}\ mod\ m\). \(S_{0}\) also generates the comparison matrix for its \(\mathbf{M_{0}}\) with 32-bit datatype and \(U=16\) parts; thus the matrix size for each value (\(x\)) is \(4\times 16\). The encrypted \(Enc(\mathbf{M_{0}}(x,u))=\mathbf{M_{0}}(x,u)\oplus\mathbf{key_{0}}(y,u)\) is sent to \(S_{1}\). The \(COMM_{3}\) of this step is shown in Eq. 8, and \(CMP_{3}\) can be estimated as Eq. 7.
\[CMP_{3}=\frac{32\times(17+(4\times 16))\times FI^{2}\times IC}{PP\times freq} \tag{7}\]
\[COMM_{3}=T_{bc}+\frac{32\times 4\times 16\times FI^{2}\times IC}{Rtb_{bw}} \tag{8}\]
**4 Server 1** (\(S_{1}\)) decodes the encrypted message of interest by \(\mathbf{key_{1}}=S^{rd_{S_{0}}}\ mod\ m\) in the final step. The \(CMP_{4}\) and \(COMM_{4}\) are calculated as follows:
\[CMP_{4}=\frac{((32\times 4\times 16)+1)\times FI^{2}\times IC}{PP\times freq} \tag{9}\]
\[COMM_{4}=T_{bc}+\frac{FI^{2}\times IC}{Rtb_{bw}} \tag{10}\]
#### Iii-C2 2PC-ReLU Operator
2PC-ReLU requires 2PC-OT flow. 2PC-ReLU latency (\(Lat_{2PC-ReLU}\)) model is given in Eq. 11.
\[Lat_{2PC-ReLU}=\sum_{i=2}^{4}{CMP_{i}}+\sum_{j=1}^{4}{COMM_{j}} \tag{11}\]
#### Iii-C3 2PC-MaxPool Operator
Original MaxPool function is shown in Eq. 12. The 2PC-MaxPool uses OT flow comparison, and the latency model is shown in Eq. 13.
\[out=\max_{\begin{subarray}{c}k_{h}\in[0,K_{h}-1]\\ k_{w}\in[0,K_{w}-1]\end{subarray}}in(n,c,hS_{h}+k_{h},wS_{w}+k_{w}) \tag{12}\]
\[Lat_{2PC-MaxPool}=\sum_{i=2}^{4}{CMP_{i}}+\sum_{j=1}^{4}{COMM_{j}}+3T_{bc} \tag{13}\]
#### Iii-C4 2PC-\(X^{2}\)act Operator
The original \(X^{2}act\) has been shown in Eq. 4. The \(X^{2}act\) needs one ciphertext square operation and 2 ciphertext-plaintext multiplication operations. The basic protocol is demonstrated in Sec. II-B. The latency of computation and communication can be modeled as \(CMP_{x^{2}}=\frac{2\times FI^{2}\times IC}{PP\times freq}\) and \(COMM_{x^{2}}=T_{bc}+\frac{32\times FI^{2}\times IC}{Rt_{bw}}\). The latency model of 2PC-\(X^{2}\)act (\(Lat_{2PC-X^{2}act}\)) is shown in Eq. 14.
\[Lat_{2PC-X^{2}act}=CMP_{x^{2}}+2\times COMM_{x^{2}} \tag{14}\]
#### Iii-C5 2PC-AvgPool Operator
The 2PC-AvgPool operator only involves addition and scaling, the latency is
\[Lat_{2PC-AvgPool}=\frac{2\times FI^{2}\times IC}{PP\times freq} \tag{15}\]
#### Iii-C6 2PC-Conv Operator
The 2PC-Conv operator involves multiplication between ciphertext, and the basic computation and communication pattern are given Sec. II-B. The computation part follows tiled architecture implementation [18]. Assuming we can meet the computation roof by adjusting tiling parameters, the latency of the 2PC-Conv computation part can be estimated as \(CMP_{Conv}=\frac{3\times K\times K\times FC^{2}\times IC\times OC}{PP\times freq}\), where \(K\) is the convolution kernel size. The communication latency is modeled as \(COMM_{Conv}=T_{bc}+\frac{32\times FI^{2}\times IC}{Rt_{bw}}\). Thus, the latency of 2PC-Conv is given in Eq. 16.
\[Lat_{2PC-Conv}=CMP_{Conv}+2\times COMM_{Conv} \tag{16}\]
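Putting the pieces together, a minimal latency look-up in the spirit of Eqs. 5-16 can be scripted as below; the parallelism \(PP=4\) and \(freq=200\,\)MHz mirror the evaluation setup, while the \(T_{bc}\) and bandwidth defaults are placeholders rather than measured values:

```python
T_BC, RT_BW = 1e-4, 8e9        # per-message setup time (s), bandwidth (bit/s); assumed
PP, FREQ = 4, 200e6            # parallelism and clock of the FPGA design

def comm(bits):                # generic communication term T_bc + bits / Rt_bw
    return T_BC + bits / RT_BW

def cmp_lat(ops):              # generic computation term ops / (PP * freq)
    return ops / (PP * FREQ)

def lat_2pc_relu(FI, IC):      # Eq. 11: OT-flow CMP (Eqs. 5,7,9) + COMM (Eqs. 6,8,10)
    n = FI * FI * IC
    cmp_terms = cmp_lat(32 * 17 * n) + cmp_lat(32 * (17 + 64) * n) \
        + cmp_lat((32 * 64 + 1) * n)
    comm_terms = comm(32) + comm(32 * 16 * n) + comm(32 * 64 * n) + comm(n)
    return cmp_terms + comm_terms

def lat_2pc_x2act(FI, IC):     # Eq. 14
    n = FI * FI * IC
    return cmp_lat(2 * n) + 2 * comm(32 * n)

print(lat_2pc_relu(56, 64), lat_2pc_x2act(56, 64))  # ReLU costs far more
```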
### **Differentiable Harware Aware NAS Algorithm**
```
0:\(M_{b}\): backbone model; \(D\): a specific dataset; \(Lat(OP)\): latency look-up table; \(H\): hardware resource
0: Searched polynomial model \(M_{p}\)
1:while not converged do
2: Sample minibatch \(x_{trn}\) and \(x_{val}\) from trn. and val. dataset
3: // Update architecture parameter \(\alpha\):
4: Forward path to compute \(\zeta_{trn}(\omega,\alpha)\) based on \(x_{trn}\)
5: Backward path to compute \(\delta\omega=\frac{\delta\zeta_{trn}(\omega,\alpha)}{\delta\omega}\)
6: Virtual step to compute \(\omega^{\prime}=\omega-\xi\delta\omega\)
7: Forward path to compute \(\zeta_{val}(\omega^{\prime},\alpha)\) based on \(x_{val}\)
8: Backward path to compute \(\delta\alpha^{\prime}=\frac{\partial\zeta_{val}(\omega^{\prime},\alpha)}{\partial\alpha}\)
9: Backward path to compute \(\delta\omega^{\prime}=\frac{\partial\zeta_{val}(\omega^{\prime},\alpha)}{\partial\omega^{\prime}}\)
10: Virtual steps to compute \(\omega^{\pm}=\omega\pm\varepsilon\delta\omega^{\prime}\)
11: Two forward paths to compute \(\zeta_{trn}(\omega^{\pm},\alpha)\)
12: Two backward paths to compute \(\delta\alpha^{\pm}=\frac{\partial\zeta_{trn}(\omega^{\pm},\alpha)}{\partial\alpha}\)
13: Compute the Hessian term \(\delta\alpha^{\prime\prime}=\frac{\delta\alpha^{+}-\delta\alpha^{-}}{2\varepsilon}\)
14: Compute final architecture parameter gradient \(\delta\alpha=\delta\alpha^{\prime}-\xi\delta\alpha^{\prime\prime}\)
15: Update architecture parameter using \(\delta\alpha\) with Adam optimizer
16: // Update weight parameter \(\omega\):
17: Forward path to compute \(\zeta_{trn}(\omega,\alpha)\) based on \(x_{trn}\)
18: Backward path to compute \(\delta\omega=\frac{\partial\zeta_{trn}(\omega,\alpha)}{\partial\omega}\)
19: Update weight parameter using \(\delta\omega\) with SGD optimizer
20: **end while**
Obtain architecture by \(OP_{l}(x)=OP_{l,k^{*}}(x)\), \(s.t.\ k^{*}=\text{argmax}_{k}\ \theta_{l,k}\)
```
**Algorithm 1** Differentiable Polynomial Architecture Search.
Early work [19] focuses on using RL for NAS. The RL-based method effectively explores the search space but still requires significant search overhead such as GPU hours and energy. Hardware-aware NAS has also been investigated [20]. In this work, we incorporate a latency constraint into the target loss function of the DARTS framework [21], and develop a differentiable cryptographic hardware-aware micro-architecture search framework. We first determine a supernet model for NAS, and introduce gated operators \(OP_{l}(x)\) which parametrize the selection among candidate operators \(OP_{l,j}(x)\) with trainable weights \(\alpha_{l,k}\) (Eq. 17). For example, a gated pooling operator consists of MaxPool and AvgPool operators and 2 trainable parameters for pooling selection. The latency of the operators can be determined based on Sec. III-C. A parameterized latency constraint is given as \(Lat(\alpha)=\sum_{l=1}^{n}\sum_{j=1}^{m}\theta_{l,j}Lat(OP_{l,j})\), where the latency of gated operators is weighted by \(\theta_{l,j}\). We incorporate the latency constraint into the loss function as \(\zeta(\omega,\alpha)=\zeta_{CE}(\omega,\alpha)+\lambda Lat(\alpha)\), penalizing the latency \(Lat(\alpha)\) by \(\lambda\).
\[\theta_{l,j}=\frac{\exp(\alpha_{l,j})}{\sum_{k=1}^{m}\exp(\alpha_{l,k})},\;OP _{l}(x)=\sum_{k=1}^{m}\theta_{l,k}OP_{l,k}(x) \tag{17}\]
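Eq. 17 corresponds directly to a module that mixes the candidate operators with softmax-normalized architecture weights; a PyTorch sketch (the class name and the small random initialization of \(\alpha\) are our own choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedOp(nn.Module):
    """Gated operator of Eq. 17: OP_l(x) = sum_k theta_{l,k} * OP_{l,k}(x)."""
    def __init__(self, candidates):
        super().__init__()
        self.candidates = nn.ModuleList(candidates)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidates)))

    def forward(self, x):
        theta = F.softmax(self.alpha, dim=0)         # Eq. 17, left part
        return sum(t * op(x) for t, op in zip(theta, self.candidates))

# e.g. a gated pooling operator selecting between MaxPool and AvgPool
pool = GatedOp([nn.MaxPool2d(2), nn.AvgPool2d(2)])
out = pool(torch.randn(1, 64, 32, 32))
```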
The optimization objective of our design is shown in Eq. 18: we aim to minimize the validation loss \(\zeta_{val}(\omega^{*},\alpha)\) with regard to the architecture parameter \(\alpha\), where the optimal weight \(\omega^{*}\) is obtained by minimizing the training loss. A second-order approximation of the optimal weight is given as \(\omega^{*}\approx\omega^{\prime}=\omega-\xi\;\delta\zeta_{trn}(\omega,\alpha)/\delta\omega\); the approximation is based on the current weight parameter and its gradient. The virtual learning rate \(\xi\) can be set equal to that of the weight optimizer.
\[\text{argmin}_{\alpha}\;\zeta_{val}(\omega^{*},\alpha),\;s.t.\;\omega^{*}= \text{argmin}_{\omega}\;\zeta_{trn}(\omega,\alpha) \tag{18}\]
Eq. 19 gives the approximate \(\alpha\) gradient using the chain rule; its second term can be further approximated using a small turbulence \(\varepsilon\), where the perturbed weights are \(\omega^{\pm}=\omega\pm\varepsilon\,\delta\zeta_{val}(\omega^{\prime},\alpha)/\delta\omega^{\prime}\), and Eq. 20 is used for the final \(\alpha\) gradient.
\[\frac{\delta\zeta_{val}(\omega^{\prime},\alpha)}{\delta\alpha}-\xi\,\frac{\delta\zeta_{val}(\omega^{\prime},\alpha)}{\delta\omega^{\prime}}\,\frac{\delta^{2}\zeta_{trn}(\omega,\alpha)}{\delta\omega\,\delta\alpha} \tag{19}\]
\[\frac{\delta^{2}\zeta_{trn}(\omega,\alpha)}{\delta\omega\,\delta\alpha}\approx\frac{\delta\left(\zeta_{trn}(\omega^{+},\alpha)-\zeta_{trn}(\omega^{-},\alpha)\right)}{2\varepsilon\,\delta\alpha} \tag{20}\]
With the help of the analytical modeling of the optimization objective, we are able to derive the differentiable polynomial architecture search framework in Algo. 1. The inputs of the search framework include the backbone model \(M_{b}\), dataset \(D\), latency look-up table \(Lat(OP)\), and hardware resource \(H\). The algorithm returns a searched polynomial model \(M_{p}\). The algorithm iteratively trains the architecture parameter \(\alpha\) and the weight parameter \(\omega\) until convergence. Each \(\alpha\) update requires 4 forward paths and 5 backward paths according to Eq. 18 to Eq. 20, and each \(\omega\) update needs 1 forward path and 1 backward path. After the training loop converges, the algorithm returns a deterministic model architecture by applying \(OP_{l}(x)=OP_{l,k^{*}}(x)\), \(s.t.\;k^{*}=\text{argmax}_{k}\;\alpha_{l,k}\). The returned architecture is then used for 2PC-based PI evaluation.
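For concreteness, the \(\alpha\)-gradient evaluation of Algorithm 1 (Eqs. 18-20) can be sketched with `torch.autograd`; here `w` and `alpha` are single tensors and the toy quadratic losses are placeholders, whereas a real supernet holds parameter lists:

```python
import torch

def alpha_grad(loss_trn, loss_val, w, alpha, xi=0.01, eps=1e-2):
    """One evaluation of the architecture gradient following Eqs. 18-20."""
    dw = torch.autograd.grad(loss_trn(w, alpha), w)[0]
    w_virt = (w - xi * dw).detach().requires_grad_(True)         # omega'
    da_prime, dw_prime = torch.autograd.grad(
        loss_val(w_virt, alpha), [alpha, w_virt])
    w_plus = (w + eps * dw_prime).detach()                       # omega+
    w_minus = (w - eps * dw_prime).detach()                      # omega-
    da_plus = torch.autograd.grad(loss_trn(w_plus, alpha), alpha)[0]
    da_minus = torch.autograd.grad(loss_trn(w_minus, alpha), alpha)[0]
    hessian_term = (da_plus - da_minus) / (2 * eps)              # Eq. 20
    return da_prime - xi * hessian_term                          # Eq. 19

w = torch.tensor([1.0, -0.5], requires_grad=True)
alpha = torch.tensor([0.3], requires_grad=True)
trn = lambda w, a: ((w - a) ** 2).sum()       # toy training loss
val = lambda w, a: ((w + a) ** 2).sum()       # toy validation loss
print(alpha_grad(trn, val, w, alpha))
```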
## IV **Evaluation**
**Hardware setup.** Our platform uses two ZCU104 MPSoCs connected via a 1 GB/s LAN router. With a 128-bit load/store bus and 32-bit data, we process four data simultaneously at 200MHz. The fixed point ring size is set to 32 bits for PI.
**Datasets and Backbone Models.** PASNet is evaluated on CIFAR-10 and ImageNet for image classification tasks. CIFAR-10 [22] has colored \(32\times 32\) images, with \(10\) classes, \(50,000\) training, and \(10,000\) validation images. ImageNet [22] has RGB \(224\times 224\) images, with \(1000\) categories, \(1.2\) million training, and \(50,000\) validation images.
**Systems Setup.** Polynomial architecture search experiments are conducted using Ubuntu 18.04, Nvidia Quadro RTX 6000 GPU, PyTorch v1.8.1, and Python 3.9.7. Pretrained weights for CIFAR-10 and ImageNet are from [23] and Pytorch Hub [24], respectively. Cryptographic DNN inference is performed on FPGA-based accelerators using two ZCU104 boards, connected via Ethernet LAN. The FPGA accelerators are optimized with coarse-grained and fine-grained pipeline structures, as discussed in Sec. III-C.
### **Hardware-aware NAS Evaluation**
Our hardware-aware PASNet evaluation experiment (algorithm described in Sec. III-D) was conducted on the CIFAR-10 training dataset. New training & validation datasets are randomly sampled from the CIFAR-10 training dataset with a 50%-50% split ratio. The new training dataset is used to update the weight parameters of the PASNet models, and the new validation dataset is used to update the architecture parameters.
Fig. 5: PASNet framework evaluation on CIFAR-10 dataset under 2PC PI setup. Network bandwidth: 1 GB/s. Device: ZCU104.
The hardware latency is modeled through Sec. III-C, and the \(\lambda\) for the latency constraint in the loss function is tuned to generate architectures with different latency-accuracy trade-offs. Before the search starts, the major model parameters are randomly initialized and the polynomial activation function is initialized through the **STPAI** method. We use VGG-16 [25], ResNet-18, ResNet-34, ResNet-50 [26], and MobileNetV2 [27] as backbone model structures to evaluate our PASNet framework.
With an increasing latency penalty, the searched structure's accuracy decreases since the DNN structure has more polynomial operators. After a proper model structure is found during the architecture search process, transfer learning with **STPAI** is conducted to evaluate the finetuned model accuracy.
The finetuned model accuracy under the 2PC setting with respect to the \(\lambda\) setting can be found in Fig. 5(a). The baseline model with the all-ReLU setting and the all-polynomial model are also included in the figure for comparison. Generally, a higher polynomial replacement ratio leads to lower accuracy. The VGG-16 model is the most vulnerable model in the study, where complete polynomial replacement leads to a 3.2% accuracy degradation (baseline 93.5%). On the other side, the ResNet family is very robust to full polynomial replacement, with only a \(0.26\%\) to \(0.34\%\) accuracy drop for ResNet-18 (baseline 93.7%), ResNet-34 (baseline 93.8%) and ResNet-50 (baseline 95.6%). MobileNetV2's performance lies between that of VGG and ResNet: a full polynomial replacement leads to a \(1.27\%\) degradation (baseline 94.09%).
On the other hand, Fig. 5(b) presents the latency profiling results of the searched models on the CIFAR-10 dataset under the 2PC setting. Full polynomial replacement leads to a 20-times speedup on VGG-16 (baseline 382 ms), a 15-times speedup on MobileNetV2 (baseline 1543 ms), a 26-times speedup on ResNet-18 (baseline 324 ms), a 19-times speedup on ResNet-34 (baseline 435 ms), and a 25-times speedup on ResNet-50 (baseline 922 ms). With a stricter constraint \(\lambda\), the searched model latency is lower.
### **Cross-work ReLU Reduction Performance Comparison**
A further accuracy-ReLU count analysis is conducted and compared with SOTA ReLU-reduction works: DeepReDuce [7], DELPHI [10], CryptoNAS [8], and SNI [28]. As shown in Fig. 6, we generate the Pareto frontier with the best accuracy-ReLU count trade-off from our architecture search results. We name the selected models **PASNet** and compare them with other works. The accuracy-ReLU count comparison is shown in Fig. 7. Our work achieves a much better accuracy vs. ReLU trade-off than existing works, especially in the regime with extremely few ReLU counts.
### **Cross-work PI System Performance Comparison**
We pick four searched PASNet model variants for CIFAR-10 & ImageNet accuracy & latency evaluation and name them **PASNet-A**, **PASNet-B**, **PASNet-C**, and **PASNet-D**. PASNet-A is a light-weighted model that shares the same backbone as ResNet-18 but has only polynomial operators. PASNet-B and PASNet-C are heavily-weighted models that share the same backbone as ResNet-50; PASNet-B has only polynomial operators and PASNet-C has 4 2PC-ReLU operators. PASNet-D is a medium-weighted model derived from MobileNetV2 with all polynomial layers. Note that the baseline top-1 accuracies of ResNet-18 on CIFAR-10 and ImageNet are 93.7% and 69.76%, the baseline top-1 accuracies of ResNet-50 are 95.65% and 78.8%, and the baseline top-1 accuracies of MobileNetV2 are 94.09% and 71.88%.
The PASNet variant evaluation results and the ImageNet cross-work comparison with the SOTA CryptGPU [13] and CryptFLOW [1] implementations can be found in Tab. I. We observe a 0.78% top-1 accuracy increase for our light-weighted
| Model | CIFAR-10 Top-1 (%) | Lat. (ms) | Comm. (MB) | Eff. (1/(ms·kW)) | ImageNet Top-1 (%) | Top-5 (%) | Lat. (s) | Comm. (GB) | Eff. (1/(s·kW)) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PASNet-A | 93.37 | 12.2 | 2.86 | 5.12 | 70.54 | 89.59 | 0.063 | 0.035 | 999 |
| PASNet-B | 95.31 | 36.74 | 13.18 | 1.70 | 78.79 | 93.99 | 0.228 | 0.162 | 274 |
| PASNet-C | 95.33 | 62.91 | 30.03 | 0.99 | 79.25 | 94.38 | 0.539 | 0.368 | 115 |
| PASNet-D | 92.82 | 104.09 | 25.01 | 0.60 | 71.36 | 90.15 | 0.184 | 0.103 | 339 |
| CryptGPU [13] | n/a | n/a | n/a | n/a | 78 | 92 | 9.31 | 3.08 | 0.15 |
| CryptFLOW [1] | n/a | n/a | n/a | n/a | 76.45 | 93.23 | 25.9 | 6.9 | 0.096 |

TABLE I: PASNet evaluation & cross-work comparison with CryptGPU [13] and CryptFLOW [1]. Batch size = 1. (n/a: not reported.)
PASNet-A compared to the baseline ResNet-18 performance on ImageNet. The heavily-weighted models PASNet-B and PASNet-C achieve comparable (-0.01%) or even higher accuracy (+0.45%) than the ResNet-50 baseline. We achieve only a 0.13% accuracy drop for our medium-weighted PASNet-D compared to the baseline MobileNetV2 performance on ImageNet. Even with the ZCU104 edge-device setting, we achieve much faster secure inference latency than the SOTA works implemented on large-scale server systems. Our light-weighted PASNet-A achieves a 147 times latency reduction and an 88 times communication volume reduction compared to CryptGPU [13]. Our heavily-weighted model PASNet-B achieves a 40 times latency reduction and a 19 times communication volume reduction over CryptGPU [13] while maintaining an even higher accuracy. Our highest-accuracy model PASNet-C achieves 79.25% top-1 accuracy on the ImageNet dataset with a 17 times latency reduction and an 8.3 times communication volume reduction over CryptGPU [13]. Note that our system is built upon the ZCU104 edge platform, so our energy efficiency is much higher (more than 1000 times) than the SOTA CryptGPU [13] and CryptFLOW [1] systems.
## V **Discussion**
Existing MLaaS acceleration works have focused on plaintext inference acceleration [29, 30, 31, 32]. Others target plaintext training acceleration [53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63], federated learning [64, 65, 66] to protect the privacy of training data, and the privacy protection of model vendors [67, 68].
In this work, we propose PASNet to reduce high comparison protocol overhead in 2PC-based privacy-preserving DL, enabling low latency, high energy efficiency, and accurate 2PC-DL. We employ hardware-aware NAS with latency modeling. Experiments demonstrate PASNet-A and PASNet-B achieve 147x and 40x speedup over SOTA CryptGPU on ImageNet PI test, with 70.54% and 78.79% accuracy.
## Acknowledgement
This work was in part supported by the NSF CNS-2247891, 2247892, 2247893, CNS-2153690, DGE-2043183, and the Heterogeneous Accelerated Compute Clusters (HACC) program at UIUC. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.
|
2304.02616 | Quantum Pontryagin Neural Networks in Gamkrelidze Form Subjected to the
Purity of Quantum Channels | We investigate a time and energy minimization optimal control problem for
open quantum systems, whose dynamics is governed through the Lindblad (or
Gorini-Kossakowski-Sudarshan-Lindblad) master equation. The dissipation is
Markovian time-independent, and the control is governed by the Hamiltonian of a
quantum-mechanical system. We are specifically interested to study the purity
in a dissipative system constrained by state and control inputs. The idea for
solving this problem is by the combination of two following techniques. We deal
with the state constraints through Gamkrelidze revisited method, while handling
control constraints through the idea of saturation functions and system
extensions. This is the first time that quantum purity conservation is
formulated in such framework. We obtain the necessary conditions of optimality
through the Pontryagin Minimum Principle. Finally, the resulted boundary value
problem is solved by a Physics-Informed Neural Network (PINN) approach. The
exploited Pontryagin PINN technique is also new in quantum control context. We
show that these PINNs play an effective role in learning optimal control
actions. | Nahid Binandeh Dehaghani, A. Pedro Aguiar, Rafal Wisniewski | 2023-03-17T23:21:54Z | http://arxiv.org/abs/2304.02616v3 | # Quantum Pontryagin Neural Networks in Gamkrelidze Form Subjected to the Purity of Quantum Channels
###### Abstract
We investigate a time and energy minimization optimal control problem for open quantum systems, whose dynamics is governed through the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad) master equation. The dissipation is Markovian time-independent, and the control is governed by the Hamiltonian of a quantum-mechanical system. We are specifically interested to study the purity in a dissipative system constrained by state and control inputs. The idea for solving this problem is by the combination of two following techniques. We deal with the state constraints through Gamkrelidze revisited method, while handling control constraints through the idea of saturation functions and system extensions. This is the first time that quantum purity conservation is formulated in such framework. We obtain the necessary conditions of optimality through the Pontryagin Minimum Principle. Finally, the resulted boundary value problem is solved by a Physics-Informed Neural Network (PINN) approach. The exploited Pontryagin PINN technique is also new in quantum control context. We show that these PINNs play an effective role in learning optimal control actions.
## I Introduction
During the last few decades, control of quantum systems has been extensively investigated. Recently, dissipative quantum systems have attracted much more attention in comparison with the quantum conservative systems, which are normally described by the time-dependent Schrodinger equation, for pure states captured by a wave function, or Liouville-von Neumann equation (LVNE), for both pure and mixed states described by the density operator rather than the wavefunction. The dynamics of open quantum systems is governed through the extension of LVNE by considering dissipative processes, known as the so-called Lindblad master equation. Due to the vast application of dissipative systems in various areas of knowledge such as quantum physics, chemistry and also quantum computation and information processing, the control of such systems is of high interest. Control of open quantum systems ranges from the control of dissipation resulted from molecular collisions to the fabrication of quantum computers.
Amongst quantum control tools, optimal control theory (OCT) has been applied to an increasingly extensive number of problems. Although OCT has been very well exploited for closed quantum systems, the control of dissipative quantum systems is still open to new ideas. Important examples of the optimal control of dissipative quantum systems are the analytic solution for cooling three-level \(\Lambda\) systems [3], optimal control of quantum purity for dissipative two-level open quantum systems [4], and the quantum optimal control problem with state constraints preserving coherence [5]. In optimal control theory, and particularly for the state-to-state population transfer problem, it is possible to find the best achievable control to take a dynamical system from an initial state to a predefined target state, especially in the presence of state or control constraints, by exploiting Pontryagin's Maximum Principle. In [5], we addressed the problem of maximizing fidelity in a quantum state transition process such that the coherence is preserved within some bounds. We formulated the quantum optimal control problem in the presence of state constraints and obtained the optimality conditions of Pontryagin's Maximum Principle in Gamkrelidze's form for the first time in the quantum control context.
In this work, we consider an optimal control problem of quantum dissipative dynamics, in which the dissipation is Markovian and time-independent. This means that memory effects are neglected and the dynamics of the quantum system under study depends only on the current state, not the past history. Therefore, we can describe the density operator evolution by the Lindblad master equation. In such dynamics, there is an interaction between the controllable part of the system, the Hamiltonian (or conservative) term, and the uncontrollable part, the non-Hamiltonian (or dissipative) term. Under most conditions, dissipation leads to an increase in entropy (or a decrease in purity) of the system. However, proposing a strategy to control the Hamiltonian term of the system evolution such that the non-Hamiltonian term causes an increase in purity rather than a decrease is a matter of debate. In this regard, we consider two types of constraint, namely state and control constraints, and preserve the quantum purity in a state transition problem. We then exploit the newly developed Pontryagin Physics-Informed Neural Networks (PINN) method derived from the Theory of Functional Connection (TFC), [6, 9], to learn the optimal control actions and solve the Boundary Value Problem (BVP) associated with the necessary optimality conditions resulting
from the indirect application of Pontryagin Minimum Principle (PMP) in Gamkrelidze form.
### _Main Contributions:_
We investigate the optimal control problem of minimum-time and minimum-energy for dissipative quantum systems interacting with the environment, whose dynamics is governed by the Lindblad equation. Our contributions are itemized as follows
* We consider the purity of quantum channels as state constraints, such that the purity is preserved within some bounds. The technique to deal with this problem leans upon Gamkrelidze revisited method, which is novel in quantum control context. (We have shown the feasibility of this method for coherence preservation in a different framework in one of our recent works, [5].)
* We also consider control constraints, such that the value of control function remains in a defined interval. We handle control constraints through the idea of saturation functions and system extensions. This technique is also new in quantum control problems.
* We derive the necessary optimality conditions of the PMP, show under which conditions they are also sufficient for (local) optimality, and finally solve the boundary value problem resulting from the PMP. To do so, we use and adapt a recently developed neural network approach known as Pontryagin neural networks. The application of this method is new in quantum control problems, where we solve the state-constrained PMP in Gamkrelidze form by means of Pontryagin Neural Networks (PoNNs).
Overall, the combination of these two techniques is new for an optimal control problem. Moreover, this is for the first time that purity preservation is formulated and solved in such framework. We also study the fidelity criteria in terms of purity, and see the effects of state constraints of purity preservation on fidelity.
The structure of the work is as follows: In Section II, we introduce the Lindblad master equation and provide a linear isomorphism such that the transformed system is linear in the state and real. Section III formulates the optimal control problem of minimum time and energy, and in the next section we present the Pontryagin's Minimum Principle in Gamkrelidze form with saturation functions. We solve the resulted BVP through a physics-informed neural network approach, explained in section V. The simulation results have been shown for the quantum state transfer problem in a two-level system in section VI. The paper ends with conclusion and an overview on prospective research challenges.
### _Notation._
For a general continuous-time trajectory \(x\), the term \(x(t)\) indicates the trajectory assessed at a specific time \(t\). For writing the transpose of a matrix (or a vector) we use the superscript \(T\), and we use \(\dagger\) to show the conjugate transpose of a matrix (or vector). To denote the wave functions as vectors, we use the Dirac notation such that \(\left|\psi\right\rangle=\sum\limits_{k=1}^{n}\alpha_{k}\left|\hat{\psi}_{k}\right\rangle\), where \(\left|\psi\right\rangle\) indicates a state vector, \(\alpha_{k}\) are the complex-valued expansion coefficients, and \(\left|\hat{\psi}_{k}\right\rangle\) are basis vectors that are fixed. The notation bra is defined such that \(\left\langle\psi\right|=\left|\psi\right\rangle^{\dagger}\). In addition, the notation \(\left|\rho\right.\left\rangle\right\rangle\) indicates the vectorized form of the density operator in the Fock-Liouville space, and the vectorization operator is shown by _vec_. For writing partial differential equations, we denote partial derivatives using subscripts. In the general situation where \(f\) denotes a function of \(n\) variables including \(x\), then \(f_{x}\) denotes the partial derivative relative to the \(x\) input. The sign \(\otimes\) indicates the tensor product. Finally, the notation \(\left[\cdot,\cdot\right]\) represent a commutator and \(\left\{\cdot,\cdot\right\}\) is a Poisson bracket. Throughout the paper, the imaginary unit is \(i=\sqrt{-1}\).
## II The Lindbladian dynamics equation
The general mathematical tool to describe our knowledge of the state of an n-level quantum system is through the density operator \(\rho\), which is a Hermitian positive semi-definite operator of trace one acting on the Hilbert space \(\mathbb{H}\) of the system. In quantum mechanics, the evolution along time \(t\) of density operator through the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad) master equation represents the most extensive generator of Markovian dynamics which takes the form, [11],
\[\dot{\rho}\left(t\right)=-i\left[H\left(u(t)\right),\rho\left(t\right)\right]+\sum_{k}\gamma_{k}\left[L_{k}\rho L_{k}^{\dagger}-\frac{1}{2}\left\{L_{k}^{\dagger}L_{k},\rho\right\}\right] \tag{1}\]
in which the first term represents the unitary evolution of the quantum system, where \(H\left(u(t)\right)\) is a Hermitian operator called the quantum-mechanical Hamiltonian defined as
\[H\left(u(t)\right)=\underbrace{diag(E_{1},E_{2},\cdots,E_{n})}_{H_{d}}+ \underbrace{\sum\limits_{l=1}^{m}u_{l}(t)H_{l}}_{H_{C}}\]
where \(H_{d}\) is diagonal, implying that the basis corresponds to the eigenvectors, and \(E_{i}\) represents a real number concerning the energy level. The control \(u_{l}(t)\in\mathbb{R}\) demonstrates a set of external functions coupled to the quantum system via time-independent interaction Hamiltonians \(H_{l}\). The second term of (1) represents the dissipative part of the state evolution, where \(L_{k}\) indicates the sequence of arbitrary Lindblad operators, and \(\gamma_{k}\geq 0\) is the damping rate. Master equations can be bothersome due to the commutator term and the Poisson bracket. Since convex combinations preserve the trace and positive semi-definiteness, it is possible to create a Hilbert space of density operators by defining a scalar product. The next result provides a linear space of density operators, called the Fock-Liouville space, through which we present a solution of the master equation via vectorization, such that the resulting Liouvillian superoperator is governed by a linear system.
**Proposition 1**: _Consider the Lindbladian dynamical system (1). There exists a linear isomorphism, coordinate transformation, such that the transformed system is linear in the states and real._
Proof:: The result is proved by providing a coordinate transformation given by the following composition of 3 linear isomorphisms (whose notation is defined in the sequel): i) \(\mathcal{L}:\ \rho\mapsto\mathcal{L}\rho\); ii) \(\left|\cdot\right\rangle\rangle:\ \rho\mapsto\left|\rho\right\rangle\rangle\); iii) \(\tilde{\tau}:\ v\mapsto\tilde{v}:=\left[\text{Re}(v)^{T}\ \ \text{Im}(v)^{T}\right]^{T}\); and finally \(\tilde{\mathcal{L}}=\tilde{\tau}\circ\left|\cdot\right\rangle\rangle\circ\mathcal{L}\circ\left|\cdot\right\rangle\rangle^{-1}\circ\tilde{\tau}^{-1}\). To obtain the matrix form of the superoperator, we apply the Choi-Jamiolkowski isomorphism for vectorization through the mapping \(\left|\cdot\right\rangle\rangle:\ \left|i\right\rangle\left\langle j\right|\mapsto\left|j\right\rangle\otimes\left|i\right\rangle\) to (1), obtaining the Liouville superoperator acting on the Hilbert space of density operators as \(\left|\dot{\rho}\right\rangle\rangle=\mathcal{L}\left|\rho\right\rangle\rangle\), where
\[\mathcal{L}= -i\left(I\otimes H-H^{T}\otimes I\right)\] \[+\sum_{k}\gamma_{k}\left[L_{k}^{*}\otimes L_{k}-\frac{1}{2}I \otimes L_{k}^{\dagger}L_{k}-\frac{1}{2}\left(L_{k}^{\dagger}L_{k}\right)^{T }\otimes I\right] \tag{2}\]
in which \(I\) is the identity matrix. To show \(\mathrm{ii}\), note that, for an arbitrary density operator \(\rho=\sum\limits_{i,j}\rho_{i,j}\left|i\right\rangle\left\langle j\right|\), its vectorized form is given by \(\left|\rho\ \right\rangle=\sum\limits_{i,j}\rho_{i,j}\left|j\right\rangle \otimes\left|i\right\rangle\). Finally to get \(\mathrm{iii}\), we implement the system state and superoperator as
\[\left|\tilde{\rho}\right\rangle\rangle=\begin{bmatrix}\text{Re}\left(\left|\rho\right\rangle\rangle\right)\\ \text{Im}\left(\left|\rho\right\rangle\rangle\right)\end{bmatrix},\quad\tilde{\mathcal{L}}:=\begin{pmatrix}\text{Re}\left(\mathcal{L}\right)&-\text{Im}\left(\mathcal{L}\right)\\ \text{Im}\left(\mathcal{L}\right)&\text{Re}\left(\mathcal{L}\right)\end{pmatrix}\]
so that we obtain
\[\left|\dot{\tilde{\rho}}\right\rangle\rangle=\tilde{\mathcal{L}}\left|\tilde{\rho}\right\rangle\rangle \tag{3}\]
We consider (3) as the system dynamics in the rest of the paper.
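The construction of Proposition 1 is mechanical; below is a numpy sketch (column-stacking vectorization; function names are ours) that builds \(\mathcal{L}\) from Eq. 2 and its real form \(\tilde{\mathcal{L}}\):

```python
import numpy as np

def liouvillian(H, Ls, gammas):
    """Matrix form of Eq. (2) acting on vec(rho), with column stacking."""
    n = H.shape[0]
    I = np.eye(n)
    L_op = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for g, L in zip(gammas, Ls):
        LdL = L.conj().T @ L
        L_op += g * (np.kron(L.conj(), L)
                     - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I))
    return L_op

def realify(L_op):
    """Real form of Proposition 1 acting on [Re(vec rho); Im(vec rho)]."""
    return np.block([[L_op.real, -L_op.imag],
                     [L_op.imag,  L_op.real]])

# two-level example: sigma_z Hamiltonian with a sigma_minus decay channel
H = np.diag([1.0, -1.0]).astype(complex)
L_minus = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
L_tilde = realify(liouvillian(H, [L_minus], [0.1]))   # 8x8 real generator
```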
## III Optimal Control Formulation
This section describes the problem formulation. To this end, we have first to describe the state constraint that arises from the concept of quantum purity.
### _Quantum purity_
Quantum purity is a fundamental property of a quantum state. For pure states, the purity \(P=1\), while \(P<1\) shows that the quantum state is mixed. In dissipative quantum systems, a state may be initialized as pure, i.e., \(\rho=\left|\psi\right\rangle\left\langle\psi\right|\), and then due to the interaction with the environment and through a channel \(\chi\) is decohered and mapped to a mixed state. A quantum channel \(\chi\) over the space \(\mathbb{H}\) represents a completely positive trace-preserving quantum map, i.e., \(\chi\in CPTP(\mathbb{H})\). Hence, the purity of the channel \(\chi\) is considered as just the purity of the state \(\rho\). Therefore, one can write
\[P(\rho)=tr\left(\chi(\rho)^{2}\right)=tr\left(\rho^{2}\right)=\big{(}vec\left( \rho^{\dagger}\right)\big{)}^{\dagger}vec\left(\rho\right)\]
which leads to \(\tilde{P}(\rho)=\left\langle\left\langle\hat{\rho}^{\dagger}\left|\tilde{ \rho}\ \right\rangle\right\rangle\). Preservation or maximization of the purity of a state transmitted through a quantum channel, i.e., a dissipative quantum system, is an important objective in quantum information processing. To do so, several decoherence-reduction techniques, such as quantum error correcting codes, and decoherence-free subspaces have been developed. In this work, we keep quantum purity above some predefined level by imposing the constraint as
\[\alpha P_{0}\leq\tilde{P}(\rho)\leq P_{0},\quad 0<\alpha<1\]
where \(P_{0}=P(\rho_{0})\) is the purity of the initial state.
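In the vectorized coordinates the purity and the constraint check reduce to an inner product (the state values in the example below are arbitrary):

```python
import numpy as np

def purity(rho):
    """P(rho) = tr(rho^2) = vec(rho)^dagger vec(rho) for Hermitian rho."""
    v = rho.flatten(order='F')            # column-stacked vec(rho)
    return float(np.real(v.conj() @ v))

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # pure, P = 1
rho = np.array([[0.96, 0.1], [0.1, 0.04]], dtype=complex)  # slightly mixed
P0, a = purity(rho0), 0.9
assert a * P0 <= purity(rho) <= P0 + 1e-12   # the state-constraint window
```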
### _Problem Formulation_
Quantum operations such as quantum state transition need to be done in the shortest possible time. However, due to the inverse relation between control time and amplitude, a fast operation may cause a very large control amplitude, which is practically impossible. The methods to design the quantum optimal controller vary according to the choice of the cost functional, the construction of the Pontryagin-Hamiltonian function, and the computation scheme to solve the PMP optimality conditions. Here, we deal with a time- and energy minimization state constrained optimal control problem (\(P\)) with bounded control aiming to transfer the initial state \(\left|\tilde{\rho}(t_{0})\right\rangle\rangle=\rho_{0}\) to a desired target state \(\left|\tilde{\rho}(t_{f})\right\rangle\right\rangle=\rho_{f}\). The problem casts as the following:
\[(P)\left\{\begin{aligned}&\min_{u,t_{f}}\left\{J=\Gamma t_{f}+\eta\int_{t_{0}}^{t_{f}}u^{2}\left(t\right)dt\right\}\\ &\text{subject to}\\ &\left|\dot{\tilde{\rho}}(t)\right\rangle\rangle=\tilde{\mathcal{L}}\left|\tilde{\rho}(t)\right\rangle\rangle\ \text{a.e. }t\in\left[t_{0},t_{f}\right]\\ &\left|\tilde{\rho}(t_{0})\right\rangle\rangle=\rho_{0}\in\mathbb{R}^{4n}\\ &u\left(t\right)\in\mathcal{U}:=\left\{u\in L_{\infty}:u\left(t\right)\in\Omega\subset\mathbb{R}\right\}\\ &\Omega=\left[u_{\min},u_{\max}\right]\ \text{a.e. }t\in\left[t_{0},t_{f}\right]\\ &h\left(\left|\tilde{\rho}(t)\right\rangle\rangle\right)\leq 0\ \text{for all }t\in\left[t_{0},t_{f}\right]\end{aligned}\right.\]
where \(\Gamma\) and \(\eta\) in the performance index \(J\) are positive coefficients, and \(t_{f}\) shows the free final time to be optimized. The second term of \(J\) is a common choice for the cost functional in molecular control, which measures the energy of the control field in the interval \(\left[t_{0},t_{f}\right]\). The control is represented as a measurable bounded function. The inequality \(h\left(\left|\tilde{\rho}(t)\right\rangle\right)\leq 0\) defines the state constraints for the density operator - see the specific example in (6). In this setup, we assume that all the sets are Lebesgue measurable and the functions are Lebesgue measurable and Lebesgue integrable. The goal is to obtain a pair \(\left(\rho^{*},u^{*}\right)\) which is optimal in the sense that the value of cost functional is the minimum over the set of all feasible solutions.
**Remark 1**: _It is important to assert the existence of a solution to Problem (P) in the class of measurable controls. Following the Filippov's theorem, [12], since the right-hand side of the dynamical system is linear with respect to the control, and the set of control values is convex and compact, then one can conclude that there exists a feasible control process to this problem._
## IV Pontryagin's Minimum Principle in Gamkrelidze's form with saturation functions
To deal with the indicated problem, one can identify two Lagrangian multipliers: \(\mu\), and \(\lambda\), where
* \(\mu\) is bounded variation, non-increasing \(\mu:\left[t_{0},t_{f}\right]\rightarrow\mathbb{R}^{2}\), such that \(\mu(t)\) is constant on the time interval in which the state constraint is inactive.
* \(\left|\lambda\right\rangle\rangle:\left[t_{0},t_{f}\right]\rightarrow\mathbb{R}^{4n}\) is the time-varying Lagrange multiplier vector, whose elements are called the costates of the system.
We handle control constraints with saturation functions, [1], such that the indicated inequality-constraint for control is transformed into a new equality-constraint. To do so, we define a new unconstrained control variable \(\nu(t)\), and substitute the control constraint with a smooth and monotonically increasing saturation function \(\phi:\mathbb{R}\rightarrow(u_{\min},u_{\max})\) such that
\[\phi\left(\nu\right)=u_{max}-\frac{u_{max}-u_{min}}{1+\exp\left(sv\right)}\quad \text{with}\quad s=\frac{c}{u_{\max}-u_{\min}}\]
in which \(c>0\) is a constant parameter, useful for modifying the slope of \(\phi\) at \(\nu=0\). The advantage of using a saturation function is that it is defined within the range of \(\Omega\) and asymptotically approaches the saturation limits for \(\nu\rightarrow\pm\infty\); a short numerical sketch of \(\phi\) is given after the list below. The next steps are the following:
* We add a regularization term to the cost functional \(J\) via a regularization parameter \(\alpha\), define the new cost functional as \(\bar{J}=J+\alpha\int_{t_{0}}^{t_{f}}\nu^{2}(t)\,dt\), and solve the optimal control problem successively for decreasing \(\alpha_{k}\). We use the result that if \(u_{k+1}\) and \(u_{k}\) are the optimal control inputs for \(\alpha_{k+1}<\alpha_{k}\), then with \(\lim\limits_{k\rightarrow\infty}\alpha_{k}=0\), \(\bar{J}(u_{k},\alpha_{k})\) converges to a non-increasing optimal cost, [1]. In other words, by bringing \(\alpha\) closer to \(0\), we approach the original problem (see the sketch after this list).
* We consider an additional optimality condition for the new control variable by minimizing the Pontryagin-Hamilton function with respect to \(\nu\). Moreover, we need to consider the constraint equation \[u(t)-\phi(\nu)=0\] (4) for the boundary value problem.
* We introduce a multiplier \(\beta:\left[t_{0},t_{f}\right]\rightarrow\mathbb{R}\) to take the equality constraint into account.
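A minimal sketch of the \(\alpha\)-continuation strategy from the first item above; `solve_pmp_bvp` is a hypothetical placeholder for any solver of the PMP boundary value problem (e.g., the Pontryagin-neural-network solver of Section V), not an existing API:

```python
def continuation(solve_pmp_bvp, alpha0=1.0, shrink=0.5, tol=1e-6, max_iter=20):
    """Successively solve the regularized problem for decreasing alpha_k."""
    alpha, guess, history = alpha0, None, []
    for _ in range(max_iter):
        sol = solve_pmp_bvp(alpha, init=guess)   # placeholder: returns the BVP solution
        history.append((alpha, sol))
        guess = sol                  # warm-start the next, less regularized solve
        if alpha < tol:              # alpha_k -> 0 recovers the original problem
            break
        alpha *= shrink              # alpha_{k+1} < alpha_k
    return history
```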
In this new setup, we will now obtain the optimality conditions. To this end, we first construct the Pontryagin Hamiltonian \(\mathcal{H}\) defined for all \(t\in\left[t_{0},t_{f}\right]\) by
\[\begin{split}\mathcal{H}(\rho,\lambda,u,\nu,\delta,\alpha,\beta,t)&=\left(|\lambda(t)\rangle\rangle-2\delta(t)\,|\tilde{\rho}(t)\rangle\rangle\right)^{T}\tilde{\mathcal{L}}\,|\tilde{\rho}(t)\rangle\rangle\\&\quad+\eta u^{2}\left(t\right)+\alpha\nu^{2}(t)+\beta(t)\left(u(t)-\phi\left(\nu\right)\right)\end{split} \tag{5}\]
where \(\delta(t)=\begin{bmatrix}1&-1\end{bmatrix}\mu(t)\).
**Proposition 2**: _Consider the optimal control problem (P1), which is similar to (P) but with cost functional \(\bar{J}\) and the additional constraint (4). Let \(u^{\star}(t)\) be an optimal control and \(\tilde{\rho}^{\star}(t)\) the corresponding state trajectory. Then there exists a multiplier \(\lambda^{\star}(t)\) that, together with \(\delta,\beta:\left[t_{0},t_{f}\right]\rightarrow\mathbb{R}\), satisfies the PMP necessary conditions. More precisely,_
\[\mathcal{H}(\rho^{\star},\lambda^{\star},u^{\star},\nu,\mu,\alpha,\beta,t) \leq\mathcal{H}(\rho^{\star},\lambda^{\star},u,\nu,\mu,\alpha,\beta,t)\]
_for all \(t\in\left[t_{0},t_{f}\right]\), and all feasible controls \(u\in\Omega\). Moreover,_
\[\begin{split}\frac{\partial\mathcal{H}}{\partial u}&=\left(|\lambda^{\star}\rangle\rangle-2\delta\,|\tilde{\rho}^{\star}\rangle\rangle\right)^{T}\tilde{\mathcal{L}}_{u}\,|\tilde{\rho}^{\star}\rangle\rangle+2\eta u^{\star}+\beta=0\\ \frac{\partial\mathcal{H}}{\partial\nu}&=2\alpha\nu-\beta\frac{\partial\phi\left(\nu\right)}{\partial\nu}=0\end{split}\]
_The remaining first-order necessary conditions for the state and costate variables are given as_
\[\begin{split}|\dot{\tilde{\rho}}^{\star}\rangle\rangle&=\frac{\partial\mathcal{H}}{\partial|\lambda\rangle\rangle}=\tilde{\mathcal{L}}\,|\tilde{\rho}^{\star}\rangle\rangle\,,\quad|\tilde{\rho}^{\star}(t_{0})\rangle\rangle=\rho_{0}\\ |\dot{\lambda}^{\star}\rangle\rangle&=-\frac{\partial\mathcal{H}}{\partial|\tilde{\rho}\rangle\rangle}=\tilde{\mathcal{L}}^{T}|\lambda^{\star}\rangle\rangle-4\delta\tilde{\mathcal{L}}\,|\tilde{\rho}^{\star}\rangle\rangle\,,\quad|\lambda^{\star}(t_{f})\rangle\rangle=\lambda_{f}=\mathbf{0}\end{split}\]
In addition, the transversality condition imposes that
\[\mathcal{H}\left(t_{f}\right)+\Gamma=0\]
For the indicated set of conditions, the extended Hamilton-Pontryagin function is
\[\begin{split}\mathcal{H}(\rho,\lambda,u,\nu,\mu,\alpha,\beta,t)&=\left(|\lambda(t)\rangle\rangle-\mu^{T}(t)\nabla_{\rho}h\left(|\tilde{\rho}(t)\rangle\rangle\right)\right)^{T}\tilde{\mathcal{L}}\,|\tilde{\rho}(t)\rangle\rangle\\&\quad+\eta u^{2}\left(t\right)+\alpha\nu^{2}(t)+\beta(t)\left(u(t)-\phi\left(\nu\right)\right)\end{split}\]
in which the state constraint \(h\left(|\tilde{\rho}(t)\rangle\rangle\right)\) and its multiplier \(\mu(t)\) are expressed as
\[h\left(|\tilde{\rho}(t)\rangle\rangle\right)=\begin{pmatrix}\mathcal{P}\left[\chi(\rho)\right]-P_{0}\\ \alpha P_{0}-\mathcal{P}\left[\chi(\rho)\right]\end{pmatrix}\leq 0,\quad\mu(t)=\begin{pmatrix}\mu_{1}(t)\\ \mu_{2}(t)\end{pmatrix} \tag{6}\]
Therefore, \(\nabla_{\rho}h\left(|\tilde{\rho}(t)\rangle\rangle\right)=\left(\nabla_{\rho}\mathcal{P}[\chi(\rho)]\quad-\nabla_{\rho}\mathcal{P}[\chi(\rho)]\right)^{T}\), where \(\nabla_{\rho}\mathcal{P}[\chi(\rho)]=2\,|\tilde{\rho}(t)\rangle\rangle\). The Pontryagin Hamiltonian then takes the form written in (5), from which the necessary conditions according to the PMP have been stated.
**Proposition 3**: _Let \(\rho^{\star}(t)\), \(u^{\star}(t)\), \(\lambda^{\star}(t)\) be an optimal trajectory of (P1). Then, \(\rho^{\star}(t)\), \(u^{\star}(t)\), \(\lambda^{\star}(t)\) is a local minimizer of the Pontryagin Hamiltonian (5) as long as \(\alpha\) satisfies the condition_
\[\alpha>\frac{\beta}{2}\varepsilon, \tag{7}\]
_where \(\varepsilon=\max_{\nu\in\mathbb{R}}\left|\frac{\partial^{2}\phi(\nu)}{\partial\nu^{2}}\right|\) is a finite bound._
Consider the control input \(\tilde{u}=(u,\nu)\). According to the second-order sufficient condition for local optimality, if the Pontryagin Hamiltonian has a positive definite Hessian with respect to \(\tilde{u}\), then \(u^{\star}(t)\) is guaranteed to be a local minimizer of the Pontryagin Hamiltonian; i.e., the generalized Legendre-Clebsch (L-C) condition guarantees that, over a singular arc, the Pontryagin Hamiltonian is minimized. In this case, the Hessian is given by
\[\mathcal{H}_{\tilde{u}\tilde{u}}=\begin{pmatrix}2\eta&0\\ 0&2\alpha-\beta\frac{\partial^{2}\phi}{\partial\nu^{2}}\end{pmatrix}>0.\]
Clearly, it is positive definite since \(\eta\) is positive, \(\frac{\partial^{2}\phi}{\partial\nu^{2}}\) is bounded, and \(\alpha\) satisfies (7).
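A small numerical check of condition (7), assuming the logistic saturation above with illustrative bounds; \(\varepsilon\) is estimated by finite differences:

```python
import numpy as np

# phi as defined earlier (illustrative bounds u_min=-1, u_max=1 and slope c=4)
phi = lambda nu, lo=-1.0, hi=1.0, c=4.0: hi - (hi - lo) / (1.0 + np.exp(c / (hi - lo) * nu))

nu = np.linspace(-20.0, 20.0, 200001)
eps = np.max(np.abs(np.gradient(np.gradient(phi(nu), nu), nu)))  # max |phi''|

eta, beta = 1.0, 1.0                  # illustrative positive values
alpha = 0.51 * beta * eps             # any alpha > beta * eps / 2 satisfies (7)
H_uu = np.array([[2 * eta, 0.0],
                 [0.0, 2 * alpha - beta * eps]])
print(eps, np.all(np.linalg.eigvalsh(H_uu) > 0))   # Hessian positive definite: True
```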
## V Physics-Informed Neural Networks Based on the Theory of Functional Connections
In this section, we exploit the newly developed Pontryagin PINN method derived from the Theory of Functional Connections (TFC), [6], [9]. We give a short overview of how physics-informed neural networks derived from TFC can be used to solve BVPs. Consider a generic differential equation that needs to be solved to obtain the dynamic vector \(y(t)\in\mathbb{R}^{n}\), the solution of the system:
\[F\left(t,y\left(t\right),\dot{y}\left(t\right)\right)=0,\quad y\left(t_{k}\right)=y_{k},\quad k\in\left\{0,1,2,\ldots\right\}.\]
Note that the set in \(k\) can be empty. Table I describes (an adaptation of) the main steps of the application of TFC method developed in [6], [9]. In (8), the function \(g\left(\tau\right):\mathbb{R}\rightarrow\mathbb{R}^{n}\) indicates a user-specified function, and \(\Omega\) is the so-called
switching function. The free function \(g\left(\tau\right)\), as developed in [7] based on the theory of the extreme learning machine (ELM), can be modeled by a single-hidden-layer feedforward neural network, where the summation is over all \(L\) hidden neurons and \(\sigma(\cdot)\) is the activation function. The output weight \(\alpha_{l}\) and the input weights \(\omega_{l}=\left[\omega_{1},\omega_{2},\cdots,\omega_{L}\right]\) connect the \(l\)th hidden node to the output node and input nodes, respectively. The bias of the \(l\)th hidden node is denoted by \(b_{l}\). Now, let us proceed with the solution of the two-point BVP resulting from the necessary optimality conditions for \((P_{1})\), which must be satisfied simultaneously. Taking into account the procedure indicated in Table I, we approximate the state and costate as
\[\begin{split}|\tilde{\rho}\rangle\rangle\,(\tau,\xi_{\rho})&=\left(\sigma_{\rho}\left(\tau\right)-\Omega_{1}\left(\tau\right)\sigma_{\rho}\left(\tau_{0}\right)-\Omega_{2}\left(\tau\right)\sigma_{\rho}\left(\tau_{f}\right)\right)^{T}\xi_{\rho}\\&\quad+\Omega_{1}\left(\tau\right)\rho_{0}+\Omega_{2}\left(\tau\right)\rho_{f}\\ |\lambda\rangle\rangle\,(\tau,\xi_{\lambda})&=\left(\sigma_{\lambda}\left(\tau\right)-\Omega_{2}\left(\tau\right)\sigma_{\lambda}\left(\tau_{f}\right)\right)^{T}\xi_{\lambda}+\Omega_{2}\left(\tau\right)\lambda_{f}\end{split}\]
where \(\Omega_{1}\) and \(\Omega_{2}\) are switching functions expressed by, [8],
\[\Omega_{1}\left(\tau\right)=1+\frac{2\left(\tau-\tau_{0}\right)^{ 3}}{\left(\tau_{f}-\tau_{0}\right)^{3}}-\frac{3\left(\tau-\tau_{0}\right)^{2}} {\left(\tau_{f}-\tau_{0}\right)^{2}}\] \[\Omega_{2}\left(\tau\right)=-\frac{2\left(\tau-\tau_{0}\right)^{ 3}}{\left(\tau_{f}-\tau_{0}\right)^{3}}+\frac{3\left(\tau-\tau_{0}\right)^{2}} {\left(\tau_{f}-\tau_{0}\right)^{2}}\]
By taking the derivatives of the approximated expressions of the state and costate we obtain

\[\begin{split}|\dot{\tilde{\rho}}\rangle\rangle\,(\tau,\xi_{\rho})&=c\left(\sigma_{\rho}^{\prime}\left(\tau\right)-\Omega_{1}^{\prime}\left(\tau\right)\sigma_{\rho}\left(\tau_{0}\right)-\Omega_{2}^{\prime}\left(\tau\right)\sigma_{\rho}\left(\tau_{f}\right)\right)^{T}\xi_{\rho}\\&\quad+\Omega_{1}^{\prime}\left(\tau\right)\rho_{0}+\Omega_{2}^{\prime}\left(\tau\right)\rho_{f}\\ |\dot{\lambda}\rangle\rangle\,(\tau,\xi_{\lambda})&=c\left(\sigma_{\lambda}^{\prime}\left(\tau\right)-\Omega_{2}^{\prime}\left(\tau\right)\sigma_{\lambda}\left(\tau_{f}\right)\right)^{T}\xi_{\lambda}+\Omega_{2}^{\prime}\left(\tau\right)\lambda_{f}\end{split}\]
Regarding the control variables, the following functional approximations are introduced
\[u\left(\tau,\xi\right)=\sigma_{u}^{T}\left(\tau\right)\xi_{u},\quad\nu\left( \tau,\xi\right)=\sigma_{\nu}^{T}\left(\tau\right)\xi_{\nu}\]
We also expand the equality constraint multiplier \(\beta\left(\tau,\xi\right)=\sigma_{\beta}^{T}\left(\tau\right)\xi_{\beta}\). In addition, the state constraint multipliers are
\[\mu_{1}\left(\tau,\xi\right)=\sigma_{\mu_{1}}^{T}\left(\tau\right)\xi_{\mu_{1} },\quad\mu_{2}\left(\tau,\xi\right)=\sigma_{\mu_{2}}^{T}\left(\tau\right)\xi_ {\mu_{2}}\]
In line with the ELM algorithm, [7], for free-final-time problems the vector of PoNNs' parameters to be learned is constructed as \(\xi=\left\{\xi_{\tilde{\rho}},\,\xi_{\lambda},\,\xi_{u},\,\xi_{\nu},\,\xi_{\beta},\,\xi_{\mu_{1}},\,\xi_{\mu_{2}},\,t_{f}\right\}\).
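To make the TFC construction concrete, here is a minimal sketch (a scalar stand-in for one component of \(|\tilde{\rho}\rangle\rangle\), random fixed ELM input weights, and illustrative sizes) showing that the constrained expression satisfies the boundary conditions for any choice of \(\xi\):

```python
import numpy as np

rng = np.random.default_rng(0)
t0, tf, L = 0.0, 1.0, 40
W, b = rng.normal(size=(L, 1)), rng.normal(size=(L, 1))   # fixed ELM input weights
xi = rng.normal(size=(L, 1))                              # trainable output weights

def sigma(tau):                      # hidden-layer responses sigma(w*tau + b)
    return np.tanh(W * tau + b)

def Omega1(tau):
    return 1 + 2*(tau - t0)**3/(tf - t0)**3 - 3*(tau - t0)**2/(tf - t0)**2

def Omega2(tau):
    return -2*(tau - t0)**3/(tf - t0)**3 + 3*(tau - t0)**2/(tf - t0)**2

def rho_hat(tau, rho0=1.0, rhof=0.0):  # TFC constrained expression
    free = (sigma(tau) - Omega1(tau)*sigma(t0) - Omega2(tau)*sigma(tf)).T @ xi
    return float(free) + Omega1(tau)*rho0 + Omega2(tau)*rhof

# The boundary conditions hold for any xi:
print(rho_hat(t0), rho_hat(tf))      # 1.0, 0.0 (up to floating-point error)
```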
## VI Numerical Results

We consider a two-level system whose Lindblad dynamics is mapped to the extended superoperator \(\hat{\mathcal{Z}}\). We present the results of (3) to show the element-wise time behaviour of the density operator. For all results, the state is initiated as \(\rho_{11}=1\), with no coherence between different states. First, we neglect the dissipation part, so the solution reduces to the resolution of the Liouville-von Neumann equation; see Fig. 1(a). As the next case, we consider dissipation with no coherent driving; the population of the excited state experiences an exponential decay, see Fig. 1(b). Afterward, it becomes feasible to compute the behavior of a dissipative quantum system that undergoes both coherent driving and decay. In such cases, oscillations and decay co-occur, as both behaviors exist simultaneously; see Fig. 1(c). In the following, we plot the state evolution under the action of the optimal control. We initiate from the pure state \(\rho_{11}=1\) and target \(\rho_{00}=1\), while obeying the constraints on purity preservation. The population evolution is shown in Fig. 1(d). According to the data depicted in the graph, there is a clear trend of exponential decay in the population of the initial state, with a corresponding upward trend in the population of the target as time passes. In order to assess how reliably the target is reached, we have to calculate the transition probability, known as the quantum fidelity. The term fidelity refers to the degree of similarity between two quantum states, typically the system state \(\rho(t)\) and the target \(\sigma\). We study fidelity in terms of state purity, computed by \(\mathcal{F}\left(\rho,\sigma\right)=\frac{\left(\mathrm{tr}(\rho\sigma)\right)^{2}}{P(\rho)P(\sigma)}\), [10]. Since our target \(\sigma\) is a pure state, i.e., \(\sigma=\left|\psi\right\rangle\left\langle\psi\right|\), the fidelity can simply be assessed as \(\mathcal{F}\left(\rho,\sigma\right)=\frac{\left\langle\psi\right|\rho\left|\psi\right\rangle}{P(\rho)}\). Figure 2 shows the effects of purity preservation on fidelity.
## VII Conclusions
In this study, we proposed a framework for preserving quantum purity during a quantum state transition problem. Specifically, we aim to minimize both the time and the energy required for the transition, while adhering to the Lindblad master equation, which governs the system dynamics. To achieve this goal, we employed a combination of two techniques, namely the Gamkrelidze revisited method and the concept of saturation functions and system extensions. The resulting boundary value problem is then solved using Pontryagin neural networks, which are well-suited for this type of problem formulation. Our approach allows us to preserve the purity of the quantum state during the transition while minimizing the resources required. Furthermore, we analyze the effects of state constraints on the evolution of quantum fidelity, providing a numerical example for a two-level system. As future work, we intend to extend our approach to higher-dimensional systems, which could yield valuable insights and applications in various fields, including quantum computing and quantum communication. Our results have important implications for the practical implementation of quantum state transitions.
|
2305.08012 | Quantization in Spiking Neural Networks | In spiking neural networks (SNN), at each node, an incoming sequence of
weighted Dirac pulses is converted into an output sequence of weighted Dirac
pulses by a leaky-integrate-and-fire (LIF) neuron model based on spike
aggregation and thresholding. We show that this mapping can be understood as a
quantization operator and state a corresponding formula for the quantization
error by means of the Alexiewicz norm. This analysis has implications for
rethinking re-initialization in the LIF model, leading to the proposal of
'reset-to-mod' as a modulo-based reset variant. | Bernhard A. Moser, Michael Lunglmayr | 2023-05-13T21:43:09Z | http://arxiv.org/abs/2305.08012v2 | # Quantization in Spiking Neural Networks
###### Abstract
In spiking neural networks (SNN), at each node, an incoming sequence of weighted Dirac pulses is converted into an output sequence of weighted Dirac pulses by a leaky-integrate-and-fire (LIF) neuron model based on spike aggregation and thresholding. We show that this mapping can be understood as a quantization operator and state a corresponding formula for the quantization error by means of the Alexiewicz norm. This analysis has implications for rethinking reinitialization in the LIF model, leading to the proposal of _reset-to-mod_ as a modulo-based reset variant.
Leaky-Integrate-and-Fire (LIF) Neuron Spiking Neural Networks (SNN) Re-Initialization Quantization Error Propagation Alexiewicz Norm
## 1 Introduction
Despite its simplicity, the leaky integrate-and-fire (LIF) neuron model is widely used in neuromorphic computing and computational neuroscience Gerstner et al. (2014); Nunes et al. (2022). In contrast to more biophysically realistic models, such as the Hodgkin-Huxley model, LIF is a middle ground: it captures essential features of bio-inspired time-based information processing while remaining simple enough to be applicable from the point of view of neuromorphic engineering. In particular, in biology spikes show varying shapes, and their generation and re-initialization follow a complex dynamics, as revealed by the Hodgkin-Huxley differential equations. In contrast, LIF is based on the following major idealizations: (1) it neglects shape information and idealizes spikes as shapeless impulses; (2) it assumes that the triggering process is realized by a comparison with a given threshold level; and (3) the processes of triggering and re-initialization act instantaneously.
In this paper we take these assumptions as axioms for a mathematical operator that maps a sequence of weighted Dirac impulses to another one. We ask about the properties of this mapping and find that it can be understood as a quantization operation in the space of spike trains. The standard model of quantization of a single number is integer truncation. That is, a given real number \(x=n+r\), \(n\in\mathbb{Z}\), \(r\in(-1,1)\), is mapped to the integer \(n\), and the quantization error is given by \(r\). The resulting mapping \(q:\mathbb{R}\to\mathbb{Z}\) can also be considered from a geometric point of view. Consider the grid of vertices of a tessellation of polytopes induced by the unit ball of the maximum norm, \(\|.\|_{\infty}\). This way, a given point \(x\) is an element of such a polytope \(P\) and is mapped to that vertex of \(P\) that is closest to the origin.
In an abstract sense, we will show that LIF can be understood in the same way. Instead of the maximum norm, \(\|.\|_{\infty}\), the space \(\mathbb{S}\) of spike trains is tessellated by means of the unit balls of the Alexiewicz norm \(\|.\|_{A,\alpha}\), which will be outlined next. Then we get the formula \(\|\text{LIF}_{\alpha,\vartheta}(\eta)-\eta\|_{A,\alpha}<\vartheta\), where \(\text{LIF}_{\alpha,\vartheta}\) denotes the LIF mapping based on the leaky parameter \(\alpha\), the threshold \(\vartheta>0\) and the input spike train \(\eta\in\mathbb{S}\).
The paper is outlined as follows. In Section 2 we reformulate the LIF model, taking different re-initialization variants into account. In Section 3 we recall the Alexiewicz norm \(\|.\|_{A}\), generalize it to leaky variants \(\|.\|_{A,\alpha}\) and state the
main theorem. Section 4 presents evaluations considering the discussed reinitialization variants, based on Python code available at [https://github.com/LunglmayrMoser/AlexSNN](https://github.com/LunglmayrMoser/AlexSNN).
## 2 LIF Model and Re-Initialization Variants
The flow of information in a LIF model is outlined in Fig. 1. Arriving from several input channels, spike events drive the dynamics of the membrane potential through a weighted linear superposition across channels and an integration with a leaky term in time. The LIF model idealizes this superposition (and also that of re-initialization) by an instantaneous aggregation. Due to this building process, the incoming spike events can equivalently be represented by a single channel of spikes whose amplitudes result from a weighted sum of simultaneous spikes across the input channels. This way, we have to take spike amplitudes of virtually arbitrary magnitude into account. In the literature, this aspect is pointed out as a limitation of the LIF model, since it deviates from biology in this respect Gerstner et al. (2014). As an idealized mathematical model, however, it has its justification, but then this effect must also be taken into account in the re-initialization step. While in the context of LIF basically two modes of re-initialization (synonymously, _reset_) are discussed, namely _reset-to-zero_ and _reset-by-subtraction_, in Eqn. (2) we propose a third one, _reset-to-mod_, to be mathematically consistent regarding the outlined issue of instantaneous superposition. According to Eshraghian et al. (2021), _reset-to-zero_ means that the potential is reinitialized to zero after firing, while _reset-by-subtraction_ subtracts the \(\vartheta\)-potential \(u_{\vartheta}\) from the membrane potential that triggers the firing event. As a third variant we introduce _reset-to-mod_, which can be understood as an instantaneously cascaded application of _reset-by-subtraction_ according to the factor by which the membrane's potential exceeds the threshold, which results in a modulo computation.
For an input spike train \(\eta_{in}(t)=\sum_{i}a_{i}\delta_{t_{i}}(t)\) the mapping \(\sum_{i}b_{i}\,\delta_{s_{i}}=\text{LIF}_{\vartheta,\alpha}(\eta_{in})\) is recursively given by \(s_{i+1}=\inf\left\{s\geq s_{i}:\,|u_{\vartheta,\alpha}(s_{i},s)|\geq\vartheta \right\},\) where
\[u_{\vartheta,\alpha}(t_{\text{reset}},t):=\int_{t_{\text{reset}}}^{t}e^{- \alpha(\tau-t_{\text{reset}})}\left(\eta_{in}(\tau)-\text{discharge}(t_{\text{reset}},\tau)\right)d\tau \tag{1}\]
models the dynamic change of the neuron membrane's potential after an input spike event at time \(t_{\text{reset}}\). The process of triggering an output spike is actually a charge-discharge event
\[\text{discharge}(t_{i},\tau):=\left\{\begin{array}{lcl}a_{i}\delta_{t_{i}}( \tau)&&\ldots&\text{for _reset-to-zero},\\ \text{sgn}(a_{i})\,\vartheta\,\delta_{t_{i}}(\tau)&&\ldots&\text{for _reset-by-subtraction},\\ q(a_{i}/\vartheta)\,\vartheta\,\delta_{t_{i}}(\tau)&&\ldots&\text{for _reset-to-mod}\end{array}\right. \tag{2}\]
Figure 1: Illustration of steps of information processing by a LIF model. The weighted superposition of incoming spike trains can result in spike amplitudes that can exceed the threshold many times.
where \(\text{sgn}(x)\in\{-1,0,1\}\) is the signum and \(q(x):=\text{sgn}(x)\max\{k\in\mathbb{Z}:k\leq|x|\}\) is the integer truncation quantization function.
The _reset-by-subtraction_ mode can be understood as compensation event so that the net voltage balance of the spiking event equals zero, i.e., in case of an output spike with amplitude \(\vartheta>0\) the membrane is actually discharged by this amount. Accordingly, though not always made clear in the literature, see for example Eshraghian et al. (2021), this assumption has the subtle consequence that an increase of the membrane potential \(u\) by multiples \(q(u/\vartheta)\) of the threshold level \(\vartheta\) results in a discharge of the membrane's potential by the same amount, that is \(q(u/\vartheta)\vartheta\). See Fig. 2 for an illustration and example.
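The following is a minimal event-driven sketch of Eqns. (1)-(2) in Python (spike times, amplitudes, and parameters are illustrative); it makes the behavioural difference between the three reset variants easy to reproduce:

```python
import numpy as np

def lif(times, amps, theta=1.0, alpha=0.1, mode="reset-to-mod"):
    """Leaky integrate-and-fire on a weighted Dirac spike train."""
    u, t_prev, out = 0.0, times[0], []
    for t, a in zip(times, amps):
        u = u * np.exp(-alpha * (t - t_prev)) + a      # leaky aggregation
        t_prev = t
        if abs(u) >= theta:                            # threshold comparison
            if mode == "reset-to-mod":
                k = np.sign(u) * np.floor(abs(u) / theta)   # q(u / theta)
                out.append((t, k * theta)); u -= k * theta
            elif mode == "reset-by-subtraction":
                out.append((t, np.sign(u) * theta)); u -= np.sign(u) * theta
            else:                                      # reset-to-zero
                out.append((t, np.sign(u) * theta)); u = 0.0
    return out

# An above-threshold input spike (amplitude 2.5) is where the variants differ:
for m in ("reset-to-mod", "reset-by-subtraction", "reset-to-zero"):
    print(m, lif([0.0, 0.5, 0.6], [0.8, 0.9, 2.5], mode=m))
```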
## 3 Alexiewicz Norm and Spike Train Quantization
For \(\eta=\sum_{i}a_{i}\delta_{t_{i}}\in\mathbb{S}\) we introduce the measure \(\|\eta\|_{A,\alpha}:=\max_{n}\left|\sum_{j=1}^{n}a_{j}e^{-\alpha(t_{n}-t_{j})}\right|\), which satisfies the axioms of a norm on the vector space \(\mathbb{S}\). For \(\alpha=0\) we obtain the _Alexiewicz_ norm Alexiewicz (1948), which is topologically equivalent to the discrepancy norm Moser (2012). The Alexiewicz norm reveals the quantization character of the LIF model.
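A direct transcription of this measure (a minimal sketch; quadratic complexity kept for clarity):

```python
import numpy as np

def alexiewicz_norm(times, amps, alpha=0.0):
    """||eta||_{A,alpha} = max_n | sum_{j<=n} a_j exp(-alpha (t_n - t_j)) |."""
    vals = [abs(sum(a * np.exp(-alpha * (tn - tj))
                    for tj, a in zip(times[:n + 1], amps[:n + 1])))
            for n, tn in enumerate(times)]
    return max(vals)

# alpha = 0 recovers the Alexiewicz norm; partial sums here are 1, -1, 0.5:
print(alexiewicz_norm([0.0, 1.0, 2.0], [1.0, -2.0, 1.5]))   # 1.0
```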
**Theorem 1** (reset-to-mod LIF as \(\|.\|_{A,\alpha}\)-Quantization): _Given a LIF neuron model with reset-to-mod, the threshold \(\vartheta>0\), the leaky parameter \(\alpha\in[0,\infty]\) and the spike train \(\eta\in\mathbb{S}\) with amplitudes \(a_{i}\in\mathbb{R}\). Then, \(\text{LIF}_{\vartheta,\alpha}(\eta)\) is a \(\vartheta\)-quantization of \(\eta\), i.e., the resulting spike amplitudes are multiples of \(\vartheta\), where the quantization error is bounded by_
\[\|\text{LIF}_{\vartheta,\alpha}(\eta)-\eta\|_{A,\alpha}<\vartheta. \tag{3}\]
**Proof.** Without loss of generality we may assume that \(\vartheta=1\). We have to show that \(\|\text{LIF}_{1,\alpha}(\eta)-\eta\|_{A,\alpha}<1\), which is equivalent to the discrete condition \(\max_{n}\left|\sum_{i=1}^{n}\hat{a}_{i}e^{-\alpha(t_{n}-t_{i})}\right|<1\), where \(\text{LIF}_{1,\alpha}(\eta)-\eta=\sum_{i}\hat{a}_{i}\delta_{t_{i}}\). The proof is based on induction and traces the problem back to standard quantization by truncation.
Suppose that at time \(t_{i_{k-1}}\) after re-initialization by _reset-to-mod_ we get the residuum \(\Delta_{i_{k-1}}\) as membrane potential that is the starting point for the integration after \(t_{i_{k-1}}\). Then, as illustrated in Fig. 3, the residuum \(\Delta_{i_{k}}\) at the next triggering event \(t_{i_{k}}\) is obtained by the equation
\[\Delta_{i_{k}}=\Delta_{i_{k-1}}\oplus a_{i_{k-1}+1}\oplus\ldots\oplus a_{i_{ k}}-[\Delta_{i_{k-1}}\oplus\ldots\oplus a_{i_{k}}]. \tag{4}\]
On the other hand, note that for the differences \(\hat{a}_{i}\) we have
\[\hat{a}_{i_{k+1}}=\hat{a}_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}-1}\oplus\left(a_{i_{k+1}}-[\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}}]\right). \tag{5}\]
Note that \(\Delta_{i_{0}}=0\) and, for induction, let us assume that up to index \(k\) we have \(\hat{a}_{i_{k}}=\Delta_{i_{k}}\). This, together with Eqn. (5) gives
\[\begin{split}\hat{a}_{i_{k+1}}&=\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}-1}\oplus\left(a_{i_{k+1}}-[\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}}]\right)\\&=\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}-1}\oplus a_{i_{k+1}}-[\Delta_{i_{k}}\oplus a_{i_{k}+1}\oplus\ldots\oplus a_{i_{k+1}}],\end{split} \tag{6}\]
which proves \(\hat{a}_{i_{k}}=\Delta_{i_{k}}\) for all \(k\), showing that the deltas \(\Delta_{i_{k}}\) can be expressed as differences of the standard quantization by truncation, hence proving the claim of Theorem 1 for all \(\alpha\in[0,\infty]\).
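The bound (3) can also be checked empirically by combining the two sketches above (the `lif` and `alexiewicz_norm` helpers are assumed to be in scope; amplitudes are drawn so that some exceed the threshold):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, alpha = 1.0, 0.1
times = np.sort(rng.uniform(0.0, 10.0, 50))
amps = rng.uniform(-1.5 * theta, 1.5 * theta, 50)

fired = dict(lif(list(times), list(amps), theta=theta, alpha=alpha))  # reset-to-mod
residual = [fired.get(t, 0.0) - a for t, a in zip(times, amps)]       # LIF(eta) - eta
print(alexiewicz_norm(list(times), residual, alpha) < theta)          # expected: True
```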
Figure 3: Illustration of Eqn. (4). The red arrows indicate reset by _reset-to-mod_.
Figure 2: Example of input spike train with different LIF outputs depending on the re-initialization variant.
## 4 Evaluations
We evaluate the distribution of the quantization error (3) by means of box-whisker diagrams for different re-initialization variants, depending on the number of spikes in the spike train and different distributions of amplitudes. Fig. 4 shows the result for incoming spike amplitudes that are below threshold. In this case, (3) holds for _reset-to-mod_ and _reset-by-subtraction_. As expected, both variants behave similarly. For _reset-to-zero_, the bound (3) only holds approximately, for larger leaky parameter \(\alpha\); see the first row of Fig. 4. For _reset-to-mod_ and _reset-by-subtraction_, with an increasing number of spikes one can observe a concentration-of-measure effect (see, e.g., Vershynin).
However, as shown in Fig. 5, if the incoming spike amplitudes are no longer below threshold, Eqn. (3) only holds for _reset-to-mod_. In this case, the concentration-of-measure effect becomes more apparent for smaller leaky parameters. For the other variants, the quantization error increases on average with the number of spikes, and the theoretical bound (3), proven for _reset-to-mod_, no longer holds.
Figure 4: Evaluation of (3) for _reset-to-mod_, _reset-by-subtraction_ and _reset-to-zero_ (1st/2nd/3rd column), based on spike trains with spike amplitudes in \([-\vartheta,\vartheta]\) and \(100\) runs. The 1st row refers to \(\alpha=1\) and the 2nd row to \(\alpha=0.1\).
Figure 5: The same as in Fig. 4 but with spike amplitudes in \([-3/2\vartheta,3/2\vartheta]\).
## 5 Conclusion
In this paper we provide a novel view of the leaky integrate-and-fire model as a quantization operator in the Alexiewicz norm. This analysis gives rise to rethinking the re-initialization modes _reset-to-zero_ and _reset-by-subtraction_ that are commonly used in the context of spiking neural networks. These re-initialization modes satisfy the derived quantization bound only under restricted conditions, while our proposed variant _reset-to-mod_ satisfies it under general conditions. This general quantization error formula leads to new error bounds for LIF and SNNs, such as a quasi-isometry relation in analogy to threshold-based sampling Moser (2017), Moser and Lunglmayr (2019). Examples can be found in the github repository [https://github.com/LunglmayrMoser/AlexSNN](https://github.com/LunglmayrMoser/AlexSNN) and Moser and Lunglmayr (2023).
|
2303.05860 | Variational Quantum Neural Networks (VQNNS) in Image Classification | Quantum machine learning has established as an interdisciplinary field to
overcome limitations of classical machine learning and neural networks. This is
a field of research which can prove that quantum computers are able to solve
problems with complex correlations between inputs that can be hard for
classical computers. This suggests that learning models made on quantum
computers may be more powerful for applications, potentially faster computation
and better generalization on less data. The objective of this paper is to
investigate how training of quantum neural network (QNNs) can be done using
quantum optimization algorithms for improving the performance and time
complexity of QNNs. A classical neural network can be partially quantized to
create a hybrid quantum-classical neural network which is used mainly in
classification and image recognition. In this paper, a QNN structure is made
where a variational parameterized circuit is incorporated as an input layer
named as Variational Quantum Neural Network (VQNNs). We encode the cost
function of QNNs onto relative phases of a superposition state in the Hilbert
space of the network parameters. The parameters are tuned with an iterative
quantum approximate optimisation (QAOA) mixer and problem hamiltonians. VQNNs
is experimented with MNIST digit recognition (less complex) and crack image
classification datasets (more complex) which converges the computation in
lesser time than QNN with decent training accuracy. | Meghashrita Das, Tirupati Bolisetti | 2023-03-10T11:24:32Z | http://arxiv.org/abs/2303.05860v1 | # Variational Quantum Neural Networks (VQNNs) in Image Classification
###### Abstract
Quantum machine learning has been established as an interdisciplinary field to overcome the limitations of classical machine learning and neural networks. It is a field of research which can prove that quantum computers are able to solve problems with complex correlations between inputs that can be hard for classical computers. This suggests that learning models made on quantum computers may be more powerful for applications, with potentially faster computation and better generalization on less data. The objective of this paper is to investigate how the training of quantum neural networks (QNNs) can be done using quantum optimization algorithms to improve the performance and time complexity of QNNs. A classical neural network can be partially quantized to create a hybrid quantum-classical neural network, which is used mainly in classification and image recognition. In this paper, a QNN structure is made where a variational parameterized circuit is incorporated as an input layer, named the Variational Quantum Neural Network (VQNN). We encode the cost function of the QNN onto the relative phases of a superposition state in the Hilbert space of the network parameters. The parameters are tuned with an iterative quantum approximate optimization (QAOA) mixer and problem Hamiltonians. VQNNs are experimented with on the MNIST digit recognition (less complex) and crack image classification (more complex) datasets; the computation converges in less time than for the QNN, with decent training accuracy.
Keywords:Quantum Neural Network Parameterized circuit Image classification Hilbert space Quantum Approximate Optimization
## 1 Introduction
Quantum computation offers a potential avenue to increase the power of machine learning and neural network models, which is discussed in this paper with an algorithm. The objective of this research is to find a way to improve the performance of QNNs by variational approaches[1]. We have used QAOA to train QNNs. Rather than using a one-qubit parameterized single circuit in the input layer of the QNN, we have used a 2-qubit parameterized circuit having cost and mixer Hamiltonians to calculate the cost of the classical QNN. QAOA uses a parameterized quantum circuit \(\big{|}\Psi(\theta)\big{\rangle}\) to generate trial wave functions \(\big{|}\Psi(\gamma,\beta)\big{\rangle}\) to
compute or estimate the expectation value \(\left\langle\Psi(\gamma,\beta)|H|\Psi(\gamma,\beta)\right\rangle\) with respect to the problem Hamiltonian \(H\). This variational QNN is then applied to image-processing applications (classification and recognition), which shows that VQNNs can improve time complexity and training accuracy[6].
## 2 Hybrid Quantum Classical Neural Networks
Quantum machine learning (QML) proposes new types of models that leverage quantum computers' unique capabilities to, for example, work in exponentially higher-dimensional feature spaces to improve the accuracy of models. A neural network is an elaborate function that is built by composing smaller building blocks called neurons[8]. A neuron is a nonlinear function that maps one or more inputs to a single real number. Graphically, we represent neurons as nodes in a graph, and each edge in the graph is associated with a scalar value called a weight. The idea is that each input of a neuron is multiplied by a different scalar before being collected and processed into a single value. The objective when training a neural network consists primarily of choosing the weights, which are the building blocks of hybrid QNNs[12].
## 3 Variational Quantum Neural Networks (VQNNs)
### Parameterized quantum circuit
A variational quantum circuit is comprised of three components. First, a feature map \(F\), which maps a classical data point \(x\) into an \(m\)-qubit quantum state \(\left|\Psi\right\rangle\):
\[\left|\Psi(x)\right\rangle=F(x)\big{|}0\big{\rangle}^{\otimes m} \tag{1}\]
Second, an ansatz, which builds the quantum state with entangling and rotation gates. The rotation angles of the ansatz are parameterized by a vector \(\theta\)[9].
\[\big{|}\phi(x,\theta)\big{\rangle}=A(\theta)\big{|}\Psi(x)\big{\rangle} \tag{2}\]
Finally, an observable \(O\) is measured, and the eigenvalue corresponding to the resultant quantum state is recorded[5]. A variational quantum circuit is run repeatedly with an input \(x\) and parameter vector \(\theta\). The circuit's expectation value \(f\) is
\[f(x,\theta)=\big{\langle}\phi\big{(}x,\theta\big{)}|O|\phi\big{(}x,\theta \big{)}\big{\rangle} \tag{3}\]
When a variational quantum circuit is used for machine learning, this approximated expectation value is typically treated as the output of the model.
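As a minimal single-qubit statevector sketch of Eqs. (1)-(3) (the gate choices, \(R_{y}\) for both the feature map and the ansatz, and the observable \(Z\), are illustrative assumptions, not the paper's circuit):

```python
import numpy as np

def ry(a):                              # single-qubit Y-rotation gate
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

Z = np.diag([1.0, -1.0])                # observable O

def f(x, theta):                        # f(x, theta) = <phi|O|phi>, Eq. (3)
    psi = ry(x) @ np.array([1.0, 0.0])  # feature map F(x)|0>, Eq. (1)
    phi = ry(theta) @ psi               # ansatz A(theta), Eq. (2)
    return float(phi.conj() @ Z @ phi)

print(f(0.3, 0.7))                      # model output, here cos(0.3 + 0.7)
```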
### Qaoa
QAOA is a hybrid quantum-classical variational algorithm. Unlike the gate-based quantum computation model, it is based on the adiabatic theorem from quantum
mechanics. In this model, to perform any computation we need two Hamiltonians, \(\mathrm{H_{M}}\) and \(\mathrm{H_{C}}\). Among them, the ground state of \(\mathrm{H_{M}}\) should be a preparable state, and \(\mathrm{H_{C}}\) encodes the solution to our problem. QAOA prepares a trial parameterized state \(|\Psi(\gamma,\beta)\rangle\) (for some values of the parameters \(\gamma\), \(\beta\)) on a quantum computer. This state gives an expectation value \(\left\langle\Psi(\gamma,\beta)|H|\Psi(\gamma,\beta)\right\rangle\) with respect to the problem Hamiltonian \(H\)[10].
### VQNN Algorithm
In this paper, a VQNN is proposed where the quantum training process can be described as state evolution within the Hilbert space of the parameter register and the QNN register[11]. The quantum training protocol acts on both the parameter register (\(\gamma\), \(\beta\)) and the QNN register to encode the cost function of the QNN, a variant of the original QAOA mixers[2]. These operations can be mathematically expressed as \(e^{-i\gamma C(\theta)}\) and \(e^{-i\beta H_{\mathrm{M}}}\), where \(\theta\) are the parameter vectors of the QNN, \(C(\theta)\) is the cost function of the QNN, and \(\gamma_{i}\) and \(\beta_{i}\) are tunable hyperparameters. By heuristically tuning the hyperparameters, the quantum training can home in on the optimal parameters of the QNN after iterations of the QAOA ansatz operations[3].
#### 3.3.1 QAOA ansatz
The QAOA ansatz has two parts, the mixer and the cost Hamiltonian. The mixer Hamiltonian (\(\mathrm{H_{M}}\)) is chosen as in Eq. 4,
\[H_{\mathrm{M}}=\sum_{j=1}^{n}X_{\mathrm{j}} \tag{4}\]
where \(\mathrm{X_{j}}\) is the Pauli X operator acting on the \(j\)th qubit. The cost Hamiltonian (\(\mathrm{H_{C}}\)) is then formed with rotational Pauli-Z gates and CX gates with a rotation angle \(\theta\). Finally, a measurement in the computational basis is performed on the state[7]. Repeating the above state preparation and measurement, the expected value of the cost function is
\[\left\langle C\right\rangle=\left\langle\Psi(\gamma,\beta)|H_{\mathrm{C}}|\Psi(\gamma,\beta)\right\rangle \tag{5}\]
Here, the expectation of the QAOA ansatz for a rotation angle of \(\pi/4\) is \(-0.124\). Fig. 1 shows the quantum circuit.
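A minimal two-qubit statevector sketch of Eqs. (4)-(5); the cost Hamiltonian \(Z\otimes Z\) here is an illustrative stand-in for the RZ/CX construction described above, and the angles are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
HM = np.kron(X, I) + np.kron(I, X)      # mixer Hamiltonian, Eq. (4), n = 2
HC = np.kron(Z, Z)                      # illustrative 2-qubit cost Hamiltonian

def expectation(gamma, beta):
    plus = np.ones(4) / 2.0             # |+>|+> starting state
    psi = expm(-1j * beta * HM) @ expm(-1j * gamma * HC) @ plus
    return float(np.real(psi.conj() @ HC @ psi))   # <C>, Eq. (5)

print(expectation(np.pi / 4, np.pi / 8))
```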
#### 3.3.2 Training with QNN
A neural network with two convolutional layers and a fully connected layer at the end is used, and the value of the last neuron of the fully connected layer is fed as the parameter \(\theta\) into the above QAOA ansatz circuit[4].
## 4 Experimental Settings
The VQNN algorithm is evaluated on the MNIST digits dataset and a crack-detection dataset. The MNIST image dataset is taken as a simple, relatively less complex
benchmark on which the performance of VQNN is measured; it is later tested on a somewhat more complex image-recognition dataset to measure its applicability. The MNIST subset contains 200 training images of handwritten 0 and 1 digits. The crack-detection Kaggle dataset consists of various concrete surfaces, labelled negative if the surface presents no cracks and positive otherwise. Each class has 1,000 images, for a total of 2,000 images.
## 5 Results
The MNIST and crack-detection datasets are trained with QNN and VQNNs. The results, shown in Table 1 below, indicate that VQNN takes less time than QNN on the surface-crack experiment, with improved accuracy. Figs. 2, 3, 4, and 5 show the training convergence for the corresponding experiments with QNNs and VQNNs. Fig. 6 shows the MNIST digits predicted by VQNNs, and Fig. 7 shows the cracks predicted as positive or negative by VQNNs. |
2301.13853 | LAGAN: Deep Semi-Supervised Linguistic-Anthropology Classification with
Conditional Generative Adversarial Neural Network | Education is a right of all, however, every individual is different than
others. Teachers in post-communism era discover inherent individualism to
equally train all towards job market of fourth industrial revolution. We can
consider scenario of ethnic minority education in academic practices. Ethnic
minority group has grown in their own culture and would prefer to be taught in
their native way. We have formulated such linguistic anthropology(how people
learn)based engagement as semi-supervised problem. Then, we have developed an
conditional deep generative adversarial network algorithm namely LA-GAN to
classify linguistic ethnographic features in student engagement. Theoretical
justification proves the objective, regularization and loss function of our
semi-supervised adversarial model. Survey questions are prepared to reach some
form of assumptions about z-generation and ethnic minority group, whose
learning style, learning approach and preference are our main area of interest. | Rossi Kamal, Zuzana Kubincova | 2023-01-26T05:25:34Z | http://arxiv.org/abs/2301.13853v2 | # LAGAN: Deep Semi-Supervised
###### Abstract
Education is a right of all; however, every individual is different from others. Teachers in the post-communism era discover inherent individualism to equally train all towards the job market of the fourth industrial revolution. We can consider the scenario of ethnic-minority education in academic practices. An ethnic minority group has grown in its own culture and would prefer to be taught in its native way. We have formulated such linguistic-anthropology (how people learn) based engagement as a semi-supervised problem. Then, we have developed a conditional deep generative adversarial network algorithm, namely LA-GAN, to classify linguistic ethnographic features in student engagement. Theoretical justification proves the objective, regularization, and loss function of our semi-supervised adversarial model. Survey questions are prepared to reach some form of assumptions about the z-generation and ethnic minority groups, whose learning style, learning approach, and preferences are our main area of interest.
Linguistic Anthropology,Semi-Supervised Learning, Ethnicity, Adversarial Training
## I Introduction
Deep neural networks are a promising technology, as they use deep, layered abstract representations to conceptualize the meaning of raw data in an automated manner[1]-[8]. They are often driven by supervised learning, with the assumption that a huge amount of quality labeled data is available. However, in practice, getting access to labeled data at massive scale is expensive, time-consuming, and often requires very costly involvement of human experts. Therefore, it is always difficult to prepare a deep neural system when only a very limited amount of labeled data is available. In practice, however, unlabeled data is plentiful. Thus, the relatively new concept of semi-supervised learning emerges with the promise of exploiting huge unlabeled data to achieve better learning accuracy, given access to limited labeled knowledge. Deep semi-supervised learning thus combines classic neural network architectures with classic semi-supervised learning techniques to infer data patterns from both labeled and unlabeled samples. Generative adversarial networks[4]-[8] are recognized for creating synthetic data in the absence of huge labeled datasets. A GAN comprises a generator and a discriminator: the generator's role is to create real-like data, while the discriminator detects real from fake data. However, in a conventional GAN, there is seldom control over the data mode being created. A conditional GAN, on the other hand, eases data generation by imposing some contextual condition, such as a class label or data modality.
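A minimal conditional-GAN sketch in PyTorch (layer sizes, label count, and feature dimension are illustrative assumptions, not the paper's LA-GAN architecture), showing how the class-label condition enters both networks:

```python
import torch
import torch.nn as nn

NOISE, N_CLASSES, FEAT = 64, 4, 32      # illustrative dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE + N_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, FEAT), nn.Tanh())
    def forward(self, z, y):            # y: one-hot label condition
        return self.net(torch.cat([z, y], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT + N_CLASSES, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))          # real/fake logit
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

z = torch.randn(8, NOISE)
y = torch.eye(N_CLASSES)[torch.randint(0, N_CLASSES, (8,))]
fake = Generator()(z, y)                # label-conditioned synthetic samples
score = Discriminator()(fake, y)
print(fake.shape, score.shape)          # torch.Size([8, 32]) torch.Size([8, 1])
```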
Linguistic anthropology is a division of social science which deals with language as social action[9]-[11]. This interdisciplinary domain intersects topics from linguistics, sociology, philosophy, and anthropology. Knowledge includes verbal and written language as both the origin and the medium of social culture. It is thus an outward expression of language usage, rather than a syntactic or grammatical matter. In other words, linguistic anthropology denotes the way social interactions are expressed in verbal and written media, rather than the underlying phonology or syntax. Inevitably, linguistic anthropology is considered a dominant matter in educational institutes, as language is at the center of spoken and written communication, because the way a speaker interacts with the surroundings plays a key role in mediating knowledge exchange in groups. This also defines the way information exchange occurs in institutional groups and thus helps pupils interact in presentations, meetings, and in person towards building a successful career. Our primary research[12]-[20] is focused on education reform in the post-colonial era, where there is much concern over the presence of education inequality and the absence of education fluidity. Homogeneous societies which have lived together for centuries are now watching different children in class as a consequence of the arrival of foreign citizens or ethnic groups. Especially since the communism era ended, ethnic communities have been accepted into mainstream schooling. Unfortunately, ethnic communities were isolated from mainstream schools for many decades and are not ready for the job market in this fourth industrial revolution. We consider that the linguistic anthropology features of ethnic communities should be taken into consideration for providing them quality education. Ethnic education statistics are collected from classroom observation, in-person or group interview or |
2310.18921 | QWID: Quantized Weed Identification Deep neural network | In this paper, we present an efficient solution for weed classification in
agriculture. We focus on optimizing model performance at inference while
respecting the constraints of the agricultural domain. We propose a Quantized
Deep Neural Network model that classifies a dataset of 9 weed classes using
8-bit integer (int8) quantization, a departure from standard 32-bit floating
point (fp32) models. Recognizing the hardware resource limitations in
agriculture, our model balances model size, inference time, and accuracy,
aligning with practical requirements. We evaluate the approach on ResNet-50 and
InceptionV3 architectures, comparing their performance against their int8
quantized versions. Transfer learning and fine-tuning are applied using the
DeepWeeds dataset. The results show staggering model size and inference time
reductions while maintaining accuracy in real-world production scenarios like
Desktop, Mobile and Raspberry Pi. Our work sheds light on a promising direction
for efficient AI in agriculture, holding potential for broader applications.
Code: https://github.com/parikshit14/QNN-for-weed | Parikshit Singh Rathore | 2023-10-29T06:43:01Z | http://arxiv.org/abs/2310.18921v1 | # QWID: Quantized Weed Identification Deep neural network
###### Abstract
In this paper, we present an efficient solution for weed classification in agriculture. We focus on optimizing model performance at inference while respecting the constraints of the agricultural domain. We propose a Quantized Deep Neural Network model that classifies a dataset of 9 weed classes using 8-bit integer (int8) quantization, a departure from standard 32-bit floating point (fp32) models. Recognizing the hardware resource limitations in agriculture, our model balances model size, inference time, and accuracy, aligning with practical requirements. We evaluate the approach on ResNet-50 and InceptionV3 architectures, comparing their performance against their int8 quantized versions. Transfer learning and fine-tuning are applied using the DeepWeeds dataset. The results show staggering model size and inference time reductions while maintaining accuracy in real-world production scenarios like Desktop, Mobile and Raspberry Pi. Our work sheds light on a promising direction for efficient AI in agriculture, holding potential for broader applications. 1
Footnote 1: GitHub: [https://github.com/parikshit14/QNN-for-weed](https://github.com/parikshit14/QNN-for-weed)
## I Introduction
Weeds are undesirable plants that compete with agricultural crop plants for resources like soil nutrients, direct sunlight, water, and, to some extent, space to grow. The weeding process plays a significant role in agriculture because weeds cause a major loss in crop yield: yield reductions as high as 50-71% have been observed in soybean and around 40-71% in groundnut [1]. Weeds accounted for a total of 12.65 billion dollars in losses in 2007 in India alone [2].
Weed identification plays an important role in weeding because, once the weed type is known, the appropriate hoeing depth, hoeing positions, and particular herbicides can be used [3]. Automating these processes minimizes human intervention, which is costly and labor-intensive.
The models are trained on the DeepWeeds dataset [4], consisting of 9 classes, namely chinese apple, lantana, parkinsonia, parthenium, prickly acacia, rubber vine, siam weed, snake weed, and negatives (other non-target plant life). The dataset was prepared under real-world conditions such as dark shadows, canopy cover, high contrast, and variable distance between the camera and the plant. A sample of each class is shown in Figure 1.
The models originally used for this dataset were ResNet-50 [5] and InceptionV3 [6], both transfer-learned on the DeepWeeds dataset. We put forward quantized versions of both ResNet-50 and InceptionV3, transfer-learned and fine-tuned to achieve almost the same accuracy but with significantly better inference time and model size. The int8 version is also better suited than fp32 for production, as it requires low computational power, which is not readily available in farmlands.
The state-of-the-art (SOTA) object classification models aim to increase model accuracy by building denser neural networks with precise calculations in terms of weights, biases, activation functions, matrix multiplications, etc., which leads to high accuracy. As a result, the number of calculations increases, which in turn is time-consuming even for inference (the forward pass). These computationally heavy models require high-performance servers or workstations equipped with GPUs enabled for parallel computing. On embedded devices, where computational resources are very limited, an offline execution of the model takes place, i.e., inference is performed on powerful servers instead of the target system. Another approach is to use lightweight deep neural network models like MobileNet [7], which uses depth-wise separable convolutions to reduce model complexity; the resulting architectural changes have limited capacity
Fig. 1: Dataset samples [4]
for complex patterns. In this paper, we propose quantized ResNet and quantized Inception weed classification models to address the limited computational resources on edge devices. Although training a quantized model leads to an accuracy drop from the SOTA models, the drop is very marginal, a mere 1-3%, while achieving more than a 10-fold gain in performance in terms of inference time compared to a non-quantized model.
The remainder of the paper is organized as follows. In Section II, we discuss the related research. In Section III, we present our approach, which includes the architecture, training methodology, and the associated issues with quantization. In Section IV, we present our findings on the DeepWeeds dataset and share model results based on relative accuracy, inference time, and complexity, followed by a conclusion in Section V.
## II Related Work
### _Quantization_
Although similar concepts had first appeared in the literature as early as 1898, the history of the theory and practice of quantization dates back to 1948. The early development of pulse code modulation systems led to the initial recognition of quantization in modulation and analog-to-digital conversion.
In neural networks, quantization is used to reduce the memory consumption of weights, biases, and activations by using low-precision datatypes like int8 instead of fp32. This reduces the model size by a factor of four. For context, the number of multiplication and addition operations performed when running a neural network on hardware can quickly reach many millions. High precision is typically not required during inference, and it can impede the use of AI in real time or on resource-limited devices. Large computational gains and improved performance are obtained by combining lower-bit mathematical operations with quantized parameters for the intermediate calculations in a neural network.
Quantized neural networks improve power economy in addition to performance, for two reasons: decreased memory access costs and improved compute efficiency. By utilizing lower-bit quantized data, less data must be moved both on and off chip, reducing memory bandwidth and significantly reducing energy consumption. Lower-precision mathematical operations, such as an int8 multiplication as opposed to an fp32 multiplication, use less energy and have higher compute efficiency, which results in less power being used. A sample comparison of power consumption is shown in Figure 2.
### _CNN-Based Image classification_
Several papers propose CNN-based models to improve accuracy on the DeepWeeds dataset. The original DeepWeeds paper [4] proposes ResNet-50 and InceptionV3 models trained using transfer learning with fine-tuning on ImageNet weights; [9] evaluates 27 SOTA deep learning models through transfer learning, reporting the accuracy and inference time of each; [10] proposes combining the predictions of a CNN with a secondary classifier on statistical features of weed images; [11] proposes a diffusion probabilistic model to generate high-quality synthetic weed images, overcoming the cost of data capture, along with transfer learning; and [12] proposes a combination of convolutional neural network and transformer structures for classification and feature extraction. However, none of these papers propose an improvement in terms of inference time under the low computational power available in actual agricultural environments.
### _CNN-Based Image classification for Embedded systems_
The SOTA CNN models require high-end GPUs not only for training but also for inference; otherwise, inference time increases drastically. As a result, the best-performing CNN models fail to perform optimally on embedded systems-on-chip (SoC) [13]. For instance, a provider in the security camera market discovered that, even after switching the YOLO back-end from GoogleNet to a more straightforward CNN like AlexNet, their embedded implementation only operates at a maximum frame rate of 5 frames per second on embedded GPUs [14].
In our proposed algorithm, a quantized convolutional neural network (Q-CNN), a form of compressed and accelerated CNN model, is implemented for the DeepWeeds dataset without significant accuracy loss. The proposed models use PyTorch [15], an open-source framework for model training and inference that is heavily used in research and development in industry and academia. The PyTorch quantization module currently provides support for x86 CPUs, ARM CPUs (typically found in mobile/embedded devices), and early support for Nvidia GPUs via TensorRT. On these systems, the suggested PyTorch-based model can benefit from similar architectural advantages and can be used for real-time object classification applications.
## III Model Training
We used a combination of transfer learning and fine-tuning to train the ResNet-50 and InceptionV3 models. Transfer learning alone on a pre-quantized (int8) trained model with a custom-trained classifier head fails to give appropriate
Fig. 2: Energy consumption on 45nm processor [8]
accuracy. Also, transfer learning is not possible on an already-quantized model, as it has no trainable parameters. To overcome these issues while retaining the advantages of a pre-trained model, we use the standard SOTA (fp32) model for transfer learning. This gives us the advantage of pretrained weights instead of random weight initialization, which would otherwise require much more training. We replaced the SOTA 1,000-class classifier with a custom classifier head for 9 classes. Model parameters are kept unfrozen, with a low learning rate of \(1\mathrm{e}{-4}\), for 30 epochs. The entire dataset is divided in a 60:20:20 ratio for training, validation, and testing, similar to the split proposed in the original DeepWeeds paper. The models are trained on images of shape (224, 224, 3). Most of the other training parameters are kept the same as those in the original paper. The Adam optimizer [16] and the cross-entropy loss function [17] are used in training.
### _Network Architecture_
The overall architecture of the feature extractor remains largely the same as that of a standard ImageNet model, except that, in order to imitate the effects of int8, fake-quantization modules are inserted to model the effects of quantization via zero-point shifting and scaling. To calculate the zero point \(z\) and scale \(s\):
Consider \(x_{q}\in[\alpha_{q},\beta_{q}]\) and \(x\in[\alpha,\beta]\), where \(\alpha\) and \(\beta\) are minimum and maximum values in their range.
\[\begin{split} s=\frac{\beta-\alpha}{\beta_{q}-\alpha_{q}}\\ z=\text{round}\left(\frac{\beta\alpha_{q}-\alpha\beta_{q}}{\beta-\alpha}\right)\end{split} \tag{1}\]
where \(x_{q}\) = quantized value (int8), \(x\) = value (fp32), \(s\) = scale (fp32) and \(z\) = zero point (int8).
After training completes, model conversion takes place: the activations and weights are quantized from fp32 to int8, and activations are fused into the preceding layer wherever possible. Since the transition from float to a lower precision is a lossy process, we would typically observe a large decline in accuracy; quantization-aware training (QAT) [18] is used to help reduce this loss.
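A hedged sketch of this fp32-train-then-convert pipeline using PyTorch's eager-mode quantization API (module names `conv1`/`bn1`/`relu` follow torchvision's ResNet; exact calls vary slightly across PyTorch versions, a full model also needs QuantStub/DeQuantStub around its forward pass, and torchvision ships ready-made quantizable ResNet/Inception variants that handle this):

```python
import torch
from torch.ao import quantization  # exposed as torch.quantization in older releases
from torchvision import models

model = models.resnet50(weights=None)  # stand-in; our fp32-trained model in practice
model.eval()
# Fuse Conv-BN-ReLU triples first (only the stem is shown; each residual block
# needs its own fusion list in a full treatment).
fused = quantization.fuse_modules(model, [['conv1', 'bn1', 'relu']])
fused.train()
fused.qconfig = quantization.get_default_qat_qconfig('fbgemm')  # x86; 'qnnpack' for ARM
prepared = quantization.prepare_qat(fused)   # inserts fake-quantization observers
# ... fine-tune `prepared` for a few epochs as described in Section III ...
prepared.eval()
int8_model = quantization.convert(prepared)  # fold observers into int8 kernels
```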
### _Training Methodology_
In the proposed models, training is done using single-precision floating-point computation. There is no requirement to train in fixed point, because training is done offline on a workstation. Fake-quant modules are inserted to replicate the effects of int8; this technique is termed quantization-aware training.
\[\begin{split} v_{qc}=clip(v_{q},\alpha_{q},\beta_{q})\\ \text{clip}(x,A,B)=\begin{cases}A&\text{if }x<A\\ x&\text{if }A\leq x\leq B\\ B&\text{if }x>B\end{cases}\end{split} \tag{2}\]
QAT is frequently employed when training Q-CNNs and yields greater accuracy than static quantization. Before the deep neural network is deployed to the target, a conversion from a floating-point to a fixed-point representation must be made. This requires quantizing the network weights, because the ranges of representable values for fixed-point and floating-point representations differ. Non-quantized values continue to be used in backpropagation. The DNN can be pre-trained using a floating-point representation in order to initialize the parameters with reasonable values; this stabilizes the learning phase of the quantized version and yields better results. Training directly in fixed point is technically possible, but it introduces extra difficulties in the gradient calculation, as discussed in Section III-B6.
#### III-B1 Quantization Mapping
The mathematical representation of mapping fp32 values to int8 values through quantization:
\[x_{q}=round(x/s+z) \tag{3}\]
and dequantization:
\[x=s(x_{q}-z) \tag{4}\]
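To make Equations (1)-(4) concrete, here is a small numerical sketch (assuming NumPy and a signed int8 range; the clipping of Equation (2) is folded into the quantize step):

```python
import numpy as np

def qparams(alpha, beta, alpha_q=-128, beta_q=127):
    # Equation (1): scale and zero point of an affine int8 mapping.
    s = (beta - alpha) / (beta_q - alpha_q)
    z = int(round((beta * alpha_q - alpha * beta_q) / (beta - alpha)))
    return s, z

def quantize(x, s, z, alpha_q=-128, beta_q=127):
    # Equation (3), with clipping to the int8 range as in Equation (2).
    return np.clip(np.round(x / s + z), alpha_q, beta_q).astype(np.int8)

def dequantize(x_q, s, z):
    return s * (x_q.astype(np.float32) - z)  # Equation (4)

s, z = qparams(-1.0, 3.0)
x = np.array([-1.0, 0.0, 1.5, 3.0], dtype=np.float32)
print(dequantize(quantize(x, s, z), s, z))  # close to x, up to rounding error
```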
#### III-B2 Weight Quantization
CNN-based models are typically formed from convolutional layers and fully connected layers, whose parameters require quantization-aware training. The weights of a convolutional layer can be represented as a tensor \((f_{h},f_{w},c_{in},c_{out})\), and those of a fully connected layer as \((c_{in},c_{out})\).\({}^{2}\) Quantization bounds are calculated per output channel, so each output channel may have distinct and independent quantization boundaries. This ensures smaller scaling factors and finer quantization ranges than sharing the bounds of the channel with the widest weight range. Both Inceptionv3 and ResNet-50 have a large number of weight channels with notable magnitude fluctuations.
Footnote 2: c\({}_{in}\)= in channels, \(c_{out}\)= out channels, \(f_{h}\)= filter height, \(f_{w}\)= filter width
#### III-B3 Activation Quantization
The activation functions are quantized by mapping their continuous output values to a discrete range of quantization levels. The numerical precision of activations is decreased during this process, enabling the use of low-precision hardware and memory-efficient deployment. The range of values is calculated in the same way as for convolutional and fully connected layers.
Standard ReLU [19]:
\[ReLU(x,0,0,1)=\begin{cases}0&\text{if }x<0\\ x&\text{if }x\geq 0\end{cases} \tag{5}\]
Quantized ReLU [20]:
\[ReLU_{q}(x_{q},z_{x},z_{y},\frac{s_{x}}{s_{y}})=\begin{cases}z_{y}&\text{if }x_{q}<z_{x}\\ z_{y}+\frac{s_{x}}{s_{y}}(x_{q}-z_{x})&\text{if }x_{q}\geq z_{x}\end{cases} \tag{6}\]
#### III-B4 Layer Fusion
For some combinations of neural network layers, such as Conv2D-ReLU and Conv2D-BatchNorm-ReLU, layer fusion is frequently used [21]: several layers are abstracted into a single layer. Without layer fusion, with two separate layers (a Conv2D layer followed by a ReLU activation layer), we must keep track of scales and zero points at multiple stages within the neural network: the scales and zero points for the inputs \(x_{0}\) to the Conv2D layer; those for the outputs \(x_{1}\) of the Conv2D layer, which also serve as inputs to the subsequent ReLU activation layer; and those for the outputs \(x_{2}\) of the ReLU activation layer. With layer fusion, such as combining Conv2D and ReLU into a single Conv2D-ReLU layer, we only need to track the scales and zero points for the inputs \(x_{0}\) to, and the outputs \(x_{1}\) of, the fused Conv2D-ReLU layer. Layer fusion thus streamlines the management of scales and zero points and reduces overall complexity. Figure 3 shows the fake-quantization modules and fused activations; a small fusion sketch follows below.
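As a concrete illustration, a minimal sketch of fusion in PyTorch's quantization toolkit (the exact fused module type printed depends on the PyTorch version):

```python
import torch.nn as nn
from torch.ao import quantization

m = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
m.eval()                                   # BN folding here requires eval mode
fused = quantization.fuse_modules(m, [['0', '1', '2']])
print(type(fused[0]).__name__)             # a fused ConvReLU2d-style module
print(type(fused[1]).__name__, type(fused[2]).__name__)  # Identity Identity
```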
#### III-B5 Additional Layers
The max-pooling layer applies only an element-wise maximum, so no additional quantization is needed: the inputs from the preceding layer are already quantized, and the operation cannot increase the dynamic range. The element-wise addition layer, on the other hand, does need quantization, since adding two large parameter values can exceed the dynamic range of the output. Its output scale factor is therefore calculated using the same quantization method.
#### III-B6 Issues in QAT
The inference accuracy of quantized integer models is invariably worse than that of floating-point models due to information loss. This loss arises because floating-point values are not perfectly recoverable after quantization and dequantization.
\[x\neq f_{dq}\left(f_{q}\left(x,s_{x},z_{x}\right),s_{x},z_{x}\right) \tag{7}\]
where \(f_{q}\) is quantization function 3 and \(f_{dq}\) is dequantization function 4.
An error term \(\Delta_{x}\) is introduced to consider the impact of such information loss during training:
\[x=f_{dq}\left(f_{q}\left(x,s_{x},z_{x}\right),s_{x},z_{x}\right)+\Delta_{x} \tag{8}\]
Accounting for this error term during training keeps the inference accuracy loss of the final model low.
Another issue with QAT is that the quantization and dequantization layers are not differentiable: the quantization/dequantization operation maps a continuous input to a discrete output, resulting in a step-like piecewise-constant function that is non-differentiable at the quantization points. However, there are strategies that approximate gradients and make it possible to train quantized networks, such as straight-through estimation [22] and Gumbel-softmax relaxation [23]. These strategies approximate the gradient through the quantization layer so that gradient-based optimization is possible while still reaping the benefits of quantization. Figure 4 shows the close agreement between the SOTA models and their quantized counterparts during training.
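As an illustration of the straight-through idea, a minimal PyTorch sketch (a generic STE for rounding, not necessarily the exact estimator of [22]):

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Round in the forward pass; pass gradients through unchanged in the
    backward pass (straight-through estimator)."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # identity gradient approximates d round(x)/dx

x = torch.tensor([0.3, 1.7], requires_grad=True)
y = RoundSTE.apply(x).sum()
y.backward()
print(x.grad)  # tensor([1., 1.]) although round() has zero gradient a.e.
```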
## IV Results
The training of the proposed models was conducted on an Nvidia Tesla P100 GPU and an Intel Xeon 2.20 GHz CPU, using Python 3.9 and PyTorch 2.0.1 as the deep learning framework. Inference was run on three different hardware platforms for broad architectural coverage: a PC with an Intel Core i5-8250U (x86), a mobile device with a Tensor G2 (ARM), and a Raspberry Pi with a Cortex-A72 (ARM). The hardware specifications are presented in Table III. The input image was run for 100 iterations (forward pass only) to obtain the average iterations per second shown in Table I. The inference time includes neither the pre-processing of the input-image tensor nor the model loading time. In Table II, we compare the respective total operations; the memory footprint represents the maximum RAM usage of any layer for a single image during inference.

Fig. 3: Comparison between full-precision inference (left), model for QAT with simulated quantization (middle), and quantized model for integer-only inference (right)
Accuracy alone cannot evaluate the model because of the presence of unbalanced classes, so a confusion matrix provides a much clearer per-class picture of the trained model. In Figure 6, we can see that both SOTA models and their quantized counterparts perform equally well. Due to the uneven sample distribution, there are more instances in which the eighth category or other categories are incorrectly identified. This inaccuracy rate can be tolerated because the negative category has approximately eight times as many images as each of the other categories. In Table IV, we further report model throughput on different hardware devices.
## V Conclusion
In this paper, we proposed quantized ResNet-50 and Inceptionv3 models: low-complexity convolutional neural networks for non-GPU and embedded devices. These models perform comparably to their SOTA counterparts in terms of accuracy while taking almost 4\(\times\) less storage, and they achieve a 4\(\times\) speedup on CPU, a 6\(\times\) speedup on Raspberry Pi, and a 2\(\times\) speedup on a mobile processor. We believe that this strategy and the findings from our experimental study will make it easier to conduct future quantization research and to develop industrial vision applications tailored to agriculture for devices with limited resources, enabling them to be more AI-enabled while using fewer resources.
TABLE II: Performance Analysis

| Model Name | GFLOPs/GOPs | Memory footprint (MB) |
| --- | --- | --- |
| ResNet-50 | 4.13 | 53.60 |
| Quantized ResNet-50 | 4.13 | 6.70 |
| Inceptionv3 | 2.85 | 34.64 |
| Quantized Inceptionv3 | 2.85 | 4.58 |

Note: The complexity of the non-quantized models is reported in floating-point operations (GFLOPs), while the quantized models use integer operations (GOPs).
Fig. 4: Training Accuracy vs Epochs
Fig. 5: Inference Time Comparison
TABLE I: Model Comparison

| Model Name | Top1 Acc. (%) | Top3 Acc. (%) | Model Size (MB) | Core-i5 (ms) | Tensor-G2 (ms) | Cortex-A72 (ms) |
| --- | --- | --- | --- | --- | --- | --- |
| ResNet-50 | 95.95 | 99.43 | 94.45 | 112.71 | 173.62 | 1906.73 |
| Quantized ResNet-50 | 94.77 | 99.49 | 23.72 | 42.61 | 90.99 | 218.42 |
| Inceptionv3 | 95.09 | 99.63 | 87.59 | 90.17 | 120.92 | 838.79 |
| Quantized Inceptionv3 | 94.57 | 99.40 | 22.04 | 34.46 | 71.15 | 187.56 |

Note: Accuracy values are Top1 and Top3 metrics; model size is in megabytes (MB); inference time is measured in milliseconds (ms) on each hardware platform.
2310.09838 | Explaining How a Neural Network Play the Go Game and Let People Learn | The AI model has surpassed human players in the game of Go, and it is widely
believed that the AI model has encoded new knowledge about the Go game beyond
human players. In this way, explaining the knowledge encoded by the AI model
and using it to teach human players represent a promising-yet-challenging issue
in explainable AI. To this end, mathematical supports are required to ensure
that human players can learn accurate and verifiable knowledge, rather than
specious intuitive analysis. Thus, in this paper, we extract interaction
primitives between stones encoded by the value network for the Go game, so as
to enable people to learn from the value network. Experiments show the
effectiveness of our method. | Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, Quanshi Zhang | 2023-10-15T13:57:50Z | http://arxiv.org/abs/2310.09838v1 | # Explaining How a Neural Network Play the Go Game and Let People Learn
###### Abstract
The AI model has surpassed human players in the game of Go (Fang et al., 2018, Granter et al., 2017, Intelligence, 2016), and it is widely believed that the AI model has encoded new knowledge about the Go game beyond human players. In this way, explaining the knowledge encoded by the AI model and using it to teach human players represent a promising-yet-challenging issue in explainable AI. To this end, mathematical supports are required to ensure that human players can learn accurate and verifiable knowledge, rather than specious intuitive analysis. Thus, in this paper, we extract interaction primitives between stones encoded by the value network for the Go game, so as to enable people to learn from the value network. Experiments show the effectiveness of our method.
## 1 Introduction
The explanation of AI models has gained increasing attention in recent years. In this paper, however, we consider a new problem: if an AI model has achieved performance superior to human beings in a task, how can we use the explanation of this AI model to provide new insights and teach people to conduct the task better? In this study, we focus on AI models designed for the game of Go, because AI models for the Go game are regarded as having surpassed human players and may have learned inference logic beyond the current human understanding of the game of Go (Fang et al., 2018; Granter et al., 2017; Intelligence, 2016). Therefore, we aim to explain the complex inference logic encoded by these AI models to teach human players to play the Go game.
Current AI models usually jointly use the value network, the policy network, and Monte Carlo tree search to play the Go game. To simplify the explanation, in this study we only explain shape patterns\({}^{1}\) encoded by the value network. However, the elaborate strategies of the Go game impose high requirements on the trustworthiness of the explanation. In particular, the Go game is widely considered much more complex than most other games (Shin et al., 2020, 2021): even minor alterations to 1-2 stones on the board can fundamentally change the result of the game. Therefore, the explained insights into the value network are supposed to be proven by theory, verified by experiments, and accountable for errors, instead of specious intuitive analysis.
Footnote 1: Shape patterns refer to the various shapes formed by the arrangement of stones on the board.
Specifically, the explanation method needs to address the following two new challenges to provide a provable and verifiable explanation for the inference logic of the value network. (1) For models in most other applications, we can explain the model by simply visualizing implicit appearance patterns encoded by the model (Dosovitskiy and Brox, 2016, Simonyan et al., 2014, Yosinski et al., 2015, Zeiler and Fergus, 2014), or estimating attributions of different input variables (Lundberg and Lee, 2017, Selvaraju et al., 2017, Zhou et al., 2016, Zintgraf et al., 2017). However, due to the high complexity of the Go game, we need to explain explicit primitive shape patterns\({}^{2}\), which are used by the neural network as primitive logic to play the game. (2) The rigor of the explanation of shape patterns must be guaranteed mathematically, because the superior complexity of the Go game can easily lead to specious or groundless explanations that would misguide human players.
Footnote 2: Shape patterns refer to the various shapes formed by the arrangement of stones on the board.
To this end, (Li and Zhang, 2023, Ren et al., 2023b) have attempted to define and extract interactions encoded by a DNN. Let us consider the Go game shown in Figure 1. Given a game state \(\mathbf{x}\) with \(n\) stones on the board, \(N=\{1,2,...,n\}\), we use interactions to explain the advantage score of white stones estimated by the value network. The value network may encode the
interaction between stones in \(S_{1}=\{1,3,4,5,8\}\). Each interaction \(S\subseteq N\) represents a specific shape corresponding to an AND relationship between stones in \(S\). When all stones in \(S\) are present on the board, the interaction \(S\) is activated and make an effect \(I(S)\) on the output of the value network. The removal of any stone in \(S\) will deactivate the interaction effect from the network output.
However, we find that the previous interaction-based explanation cannot be directly used to discover novel shapes from the value network. We overcome the following three major challenges.
\(\bullet\) Besides explaining AND relationships between stones, we need to extend the original definition of interactions to further explain OR relationships between stones encoded by the value network, _i.e._, the presence of any stone in a set of positions \(S\) makes a certain effect \(I_{\text{or}}(S)\).
\(\bullet\) We find that the advantage score estimated by the value network is usually shifted/biased when white stones are far fewer or far more numerous than black stones on the board. To this end, we develop a method to alleviate this shifting problem and simplify the explanation.
\(\bullet\) We need to show that given a certain state \(\mathbf{x}\) on the Go board, the outputs of the value network can always be mimicked by a small number of AND interactions and OR interactions, no matter how we randomly remove stones from the board.
Furthermore, we notice that shape patterns for the Go game are usually quite complex, _i.e._, each interaction often contains a large number of stones. Overly complex shape patterns are usually considered specific shapes memorized by the value network for a specific state, instead of common shape patterns shared by different states, and this complexity boosts the difficulty of understanding the Go game. We therefore further identify some common combinations of stones that are shared by different interactions/shapes, namely _coalitions_. For example, in Figure 1, the interactions \(S_{1}=\{1,3,4,5,8\},S_{2}=\{1,3,8\},S_{3}=\{1,3,8,9\}\) all contain the coalition \(T=\{1,3,8\}\). We apply [16] to estimate the attribution of each coalition to help human players understand the DNN's logic. We collaborate with professional human Go players to compare the interactions or coalitions encoded by the value network with the human understanding of the Go game, so as to discover advanced shapes beyond human understanding.
We conducted experiments to evaluate attributions of some manually-annotated coalitions, and cooperated with professional human Go players to further explain these attributions. We found many cases that fitted to human understandings of shape patterns, as well as a few cases that conflicted with normal understandings of shape patterns, which provided new insights into the Go game.
## 2 Related work
Many methods have been proposed to visualize the feature/patterns encoded by the DNN [1, 11, 12, 13], or to estimate the attribution/importance of each input variable [10, 11, 12, 14].
Figure 1: Interactions encoded by the value network. Each interaction \(S\) represents a specific shape pattern corresponding to an AND relationship among a set of stones in \(S\). The stones \(x_{6},x_{9},x_{10}\) are removed in the masked board state \(\mathbf{x}_{\text{masked}}\). Because the stone \(x_{9}\) is removed from the board, the interactions \(S_{3}\) and \(S_{4}\) are deactivated in the masked board state \(\mathbf{x}_{\text{masked}}\), _i.e._, \(I(S_{3}|\mathbf{x}_{\text{masked}})=0\), and \(I(S_{4}|\mathbf{x}_{\text{masked}})=0\). The coalition \(T=\{1,3,8\}\) participates in different interactions \(S_{1},S_{2},S_{3}\).
However, the demand of teaching human players imposes higher requirements on the explanation method. We need to clarify the explicit logic used by the DNN, which should be theoretically guaranteed and experimentally verified, rather than a specious understanding. To this end, (1) Ren et al. (2023a) and Ren et al. (2023b) have proven that a well-trained DNN usually encodes a small number of interactions, and the output score of the DNN on a certain input sample can always be well mimicked by the numerical effects of a few salient interactions, no matter how the input sample is randomly masked. (2) Li and Zhang (2023) have further found considerable transferability of interactions over different samples and over different DNNs. (3) Interaction primitives (the Harsanyi interaction) can explain the elementary mechanism of previous explanation metrics, _e.g._, the Shapley value (Shapley, 2016), the Shapley interaction index (Grabisch and Roubens, 1999), and the Shapley Taylor interaction index (Sundararajan et al., 2020a).
Despite the above findings, explaining the DNN for the Go game still poses new challenges. To this end, we extend the AND interaction to the OR interaction, solve the saturation problem of the advantage score, and compute attributions of common coalitions shared by different interactions, thereby obtaining a concise and accurate explanation of the shape patterns encoded by the value network.
## 3 Explaining the inference logic of the value network
### Preliminaries: interactions encoded by the DNN
\(\bullet\)**Definitions of the interaction.** In this paper, we use the value network \(v\) for the game of Go as an example to introduce interactions between different stones encoded by the value network. The value network uses the current state \(\mathbf{x}\) on the board to estimate the probability \(p_{\text{white}}(\mathbf{x})\) of white stones winning. To simplify the notation, let us use \(\mathbf{x}=\{x_{1},x_{2},...,x_{n}\}\) to denote both positions and colors of \(n\) stones in the current state. We consider these \(n\) stones, including both white and black stones, as input variables\({}^{2}\) of the value network, which are indexed by \(N=\{1,2,...,n\}\). We set a scalar \(v(\mathbf{x})=\log(\frac{p_{\text{white}}(\mathbf{x})}{1-p_{\text{white}}(\mathbf{x})})\in\mathbb{R}\) as the advantage of white stones in the game.
Footnote 2: Although the actual input of the value network is a tensor (Silver et al., 2016), in this paper, we use \(\mathbf{x}=\{x_{1},x_{2},...,x_{n}\}\) to denote the input of the value network for simplicity.
In this way, Harsanyi (1963) has proposed a metric \(I(S)\), namely the _Harsanyi dividend_ or the _Harsanyi interaction_, to measure the interaction between each specific set \(S\subseteq N\) of input variables (stones) encoded by the model \(v\). Each interaction \(S\), _e.g._, \(S_{2}=\{1,3,8\}\) in Figure 1, denotes a certain shape of stones. If all stones in \(S\) are placed on the board, then the interaction will make a numerical effect on the advantage score \(v(x)\). Thus, we can consider the interaction as an AND relationship \(I(S_{2})=w_{S_{2}}\cdot[\textit{exist}(x_{1})\&\textit{exist}(x_{3})\&\textit{exist}(x_{8})]\) encoded by \(v\), where the Boolean function \(\textit{exist}(\cdot)=1\) when the stone is placed on the board; \(\textit{exist}(\cdot)=0\) when the stone is removed. Otherwise, the removal of any stones in \(S\) will deactivate the effect, _i.e._, making \(I(S|\mathbf{x}_{\text{masked}})=0\). Such an interaction effect can be measured from the value network based on the following definition.
\[I(S)\stackrel{{\text{def}}}{{=}}\sum\nolimits_{T\subseteq S}(-1) ^{|S|-|T|}\cdot v(\mathbf{x}_{T}) \tag{1}\]
where \(\mathbf{x}_{T}\) denotes the state when we keep the stones in the set \(T\) on the board and remove all other stones in \(N\setminus T\). Thus, \(v(\mathbf{x}_{T})\) measures the advantage score of the masked board state \(\mathbf{x}_{T}\).
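To make Equation (1) concrete, here is a brute-force sketch of the Harsanyi dividend for a toy advantage function (assuming Python; the real \(v\) would query the value network on masked boards):

```python
from itertools import chain, combinations

def subsets(S):
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def harsanyi(v, S):
    """Equation (1): I(S) = sum over T subseteq S of (-1)^{|S|-|T|} v(x_T)."""
    return sum((-1) ** (len(S) - len(T)) * v(frozenset(T)) for T in subsets(S))

# Toy stand-in for v: the AND pattern {1, 3} contributes 2.0 to the advantage.
v = lambda kept: 2.0 if {1, 3} <= kept else 0.0
print(harsanyi(v, (1, 3)))  # 2.0 -- the salient interaction
print(harsanyi(v, (1,)))    # 0.0
# Universal matching (Theorem 1 below): v(x_T) equals the sum of I(S) over S subseteq T.
```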
\(\bullet\)**Sparsity of interactions and interaction primitives.** Although we can sample \(2^{n}\) different subsets of variables from \(N\), _i.e._, \(S_{1},S_{2},S_{3},...,S_{2^{n}}\subseteq N\), Li and Zhang (2023) and Ren et al. (2023b) have discovered and proved that a well-trained DNN usually only encodes a small number of interactions under some common conditions\({}^{3}\). In other words, most interactions \(S\subseteq N\) defined in Equation (1) usually have almost zero effect, \(I(S)\approx 0\). Only a few interactions \(S\in\Omega_{\text{salient}}\) have considerable effects, _s.t._\(|\Omega_{\text{salient}}|\ll 2^{n}\). In this paper, we set a threshold \(\xi=0.15\cdot\max_{S}|I(S)|\) to select salient interactions, \(\Omega_{\text{salient}}=\{S:|I(S)|>\xi\}\).
Footnote 3: Please see Appendix B for more detailed introductions of common conditions.
We can consider the small number of salient interactions \(S\in\Omega_{\text{salient}}\) as primitive inference patterns encoded by the value network, namely **interaction primitives**, because Theorem 1 shows that these interaction primitives can always well mimic the network outputs no matter how we randomly mask the input sample \(\mathbf{x}\).
**Theorem 1** (proved by Ren et al. (2023a)).: _Let us randomly mask a given input sample \(\mathbf{x}\) to obtain a masked sample \(\mathbf{x}_{T}\). The output score of the DNN on all \(2^{n}\) randomly masked samples \(\mathbf{x}_{T}\) w.r.t. \(\forall T\subseteq N\) can all be approximated by the sum of effects of a small number of salient interactions._
\[v(\mathbf{x}_{T})=\sum\nolimits_{S\subseteq T}I(S)\approx\sum\nolimits_{S\in\Omega_ {\text{salient}},S\subseteq T}I(S) \tag{2}\]
Theorem 1 shows that when we remove stones in a random set \(N\setminus T\) from the board state \(\mathbf{x}\) and obtain a masked state \(\mathbf{x}_{T}\), the output score \(v(\mathbf{x}_{T})\) of the value network on \(\mathbf{x}_{T}\) can be explained by a small number of salient shape patterns encoded by the value network.
\(\bullet\)**Complexity of the interaction primitive.** The complexity of an interaction primitive \(S\) is defined as the order of the interaction, _i.e._, the number of stones in \(S\), \(\text{order}(S)=|S|\). An interaction primitive of a higher order represents a more complex interaction with more stones.
### Extracting sparse and simple interaction primitives
To teach people about new patterns to play the game of Go, we first extract interaction primitives encoded by the value network. We consider them as shape patterns used by the value network to estimate the winning probability. To this end, we need to address the following three challenges.
\(\bullet\)**Challenge 1. Extending AND interactions to OR interactions.** The original Harsanyi interaction just represents the AND relationship between a set of stones encoded by the network. However, compared to most other applications, the game of Go usually applies much more complex logic (Shin et al., 2020, 2021), so we extend AND interactions in Equation (1) to OR interactions. We simultaneously use these two types of interactions to explain the Go game.
Logically, an OR relationship can be represented as a combination of the binary logical operations "AND" and "NOT." For example, we represent the effect of an AND interaction \(S=\{1,2,3\}\) as \(I_{\text{and}}(S)=w_{S}^{\text{and}}\cdot[\textit{exist}(x_{1})\&\textit{exist}(x_{2})\&\textit{exist}(x_{3})]\), where \(\&\) represents the binary logical operation "AND." In comparison, the effect of an OR interaction \(S=\{1,2,3\}\) is represented, by De Morgan's law, as \(I_{\text{or}}(S)=w_{S}^{\text{or}}\cdot[\textit{exist}(x_{1})\,|\,\textit{exist}(x_{2})\,|\,\textit{exist}(x_{3})]=w_{S}^{\text{or}}\cdot\{\neg[(\neg\textit{exist}(x_{1}))\&(\neg\textit{exist}(x_{2}))\&(\neg\textit{exist}(x_{3}))]\}\), where \(|\) represents the binary logical operation "OR," and the Boolean function \(\neg\textit{exist}(\cdot)=1\) when the stone is removed and \(\neg\textit{exist}(\cdot)=0\) when the stone is placed on the board.
Therefore, we consider the advantage score of the value network \(v(\mathbf{x}_{T})\)_w.r.t._ any masked sample, \(\forall T\subseteq N\), intrinsically contains the following two terms.
\[\forall T\subseteq N,v(\mathbf{x}_{T})=v_{\text{and}}(\mathbf{x}_{T})+v_{\text{or}}( \mathbf{x}_{T}) \tag{3}\]
where the advantage term \(v_{\text{and}}(\mathbf{x}_{T})\) is exclusively determined by AND interactions, and the advantage term \(v_{\text{or}}(\mathbf{x}_{T})\) is exclusively determined by OR interactions. Later, in Equation (6), we will introduce how to automatically learn/disentangle \(v_{\text{and}}(\mathbf{x}_{T})\) and \(v_{\text{or}}(\mathbf{x}_{T})\) from \(v(\mathbf{x}_{T})\).
Just like in Equation (1), the AND interaction is redefined on the advantage term \(v_{\text{and}}(\mathbf{x}_{T})\) and is given as \(I_{\text{and}}(S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}\cdot v_{\text{and}}(\mathbf{x}_{T})\). Similarly, an OR interaction \(S\) is measured to reflect the strength of an OR relationship between the stones in the set \(S\) encoded by the model \(v_{\text{or}}\). If any stone in \(S\) appears on the board, the OR interaction \(S\) makes an effect \(I_{\text{or}}(S)\) on the score \(v_{\text{or}}(\mathbf{x})\); only when all stones in \(S\) are removed from the board is the effect \(I_{\text{or}}(S)\) removed. Thus, we can define the effect \(I_{\text{or}}(S)\) of an OR interaction \(S\) on the model output \(v_{\text{or}}(\mathbf{x})\) as follows.\({}^{4}\)
Footnote 4: Please see Appendix D for the proof.
\[I_{\text{or}}(S)=-\sum\nolimits_{T\subseteq S}(-1)^{|S|-|T|}v_{\text{or}}(\bm {x}_{N\setminus T}),\quad S\neq\emptyset \tag{4}\]
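A matching brute-force sketch of Equation (4) on a toy \(v_{\text{or}}\) (hypothetical values, for illustration only):

```python
from itertools import chain, combinations

def subsets(S):
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def or_interaction(v_or, S, N):
    """Equation (4): I_or(S) = -sum over T subseteq S of (-1)^{|S|-|T|} v_or(x_{N\\T})."""
    return -sum((-1) ** (len(S) - len(T)) * v_or(frozenset(N) - frozenset(T))
                for T in subsets(S))

# Toy v_or: an effect of 1.5 fires if stone 1 OR stone 2 is on the board.
v_or = lambda kept: 1.5 if kept & {1, 2} else 0.0
N = {1, 2, 3}
print(or_interaction(v_or, (1, 2), N))  # 1.5
print(or_interaction(v_or, (1, 3), N))  # 0.0
```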
**Theorem 2** (proved in Appendix C).: _The OR interaction effect between a set \(S\) of stones, \(I_{\text{or}}(S)\) based on \(v(\mathbf{x}_{T})\), can be computed as a specific AND interaction effect \(I_{\text{and}}^{\prime}(S)\) based on the dual function \(v^{\prime}(\mathbf{x}_{T})\). For \(v^{\prime}(\mathbf{x}_{T})\), original present stones in \(T\) (based on \(v(\mathbf{x}_{T})\)) are considered as being removed, and original removed stones in \(N\setminus T\) (based on \(v(\mathbf{x}_{T})\)) are considered as being present._
**Theorem 3** (proved in Appendix E).: _Let the input sample \(\mathbf{x}\) be randomly masked. There are \(2^{n}\) possible masked samples \(\{\mathbf{x}_{T}\}\) w.r.t. \(2^{n}\) subsets \(T\subseteq N\). The output score on any masked sample \(\mathbf{x}_{T}\) can be represented as the sum of effects of both AND interactions and OR interactions._
\[v(\mathbf{x}_{T})=v(\mathbf{x}_{\emptyset})+v_{\text{and}}(\mathbf{x}_{T})+v_{\text{or}}( \mathbf{x}_{T})=v(\mathbf{x}_{\emptyset})+\sum\nolimits_{S\subseteq T,S\neq\emptyset }I_{\text{and}}(S)+\sum\nolimits_{S\cap T\neq\emptyset,S\neq\emptyset}I_{ \text{or}}(S) \tag{5}\]
To automatically learn the disentanglement of AND interactions and OR interactions, we set \(\forall T\subseteq N,v_{\text{and}}(\mathbf{x}_{T})=\frac{1}{2}v(\mathbf{x}_{T})+p_{T}\) and \(v_{\text{or}}(\mathbf{x}_{T})=\frac{1}{2}v(\mathbf{x}_{T})-p_{T}\), which satisfies \(\forall T\subseteq N,v(\mathbf{x}_{T})=v_{\text{and}}(\mathbf{x}_{T})+v_{\text{or}}(\mathbf{x}_{T})\), so that learning the disentanglement of \(v_{\text{and}}(\mathbf{x}_{T})\) and \(v_{\text{or}}(\mathbf{x}_{T})\) is equivalent to learning \(\{p_{T}\}_{T\subseteq N}\), where \(p_{T}\in\mathbb{R}\) denotes a learnable bias term. Furthermore, we notice that small unexplainable noises in the network output can be enlarged in interactions\({}^{5}\). To overcome this problem, we slightly revise the original network output \(v(\mathbf{x}_{T})\) as \(v(\mathbf{x}_{T})+q_{T}\), where \(q_{T}\in\mathbb{R}\) is a small scalar contained within a small range, \(|q_{T}|<\tau\)\({}^{6}\). The parameter \(q_{T}\) is learned to represent the unavoidable noises in the network
output, which cannot be reasonably explained by AND interactions or OR interactions. According to the Occam's Razor, we use the following loss function to learn the sparse decomposition of AND interactions and OR interactions.
\[\min_{\mathbf{p},\mathbf{q}\in\mathbb{R}^{2n}}\|\mathbf{I}_{\text{and}}\|_{1}+\|\mathbf{I}_{\text {or}}\|_{1}\quad\text{s.t.}\quad\forall S\subseteq N,|q_{S}|<\tau \tag{6}\]
where \(\|\cdot\|_{1}\) represents L-1 norm function, \(\mathbf{p}=[p_{S_{1}},p_{S_{2}},...,p_{S_{2^{n}}}]^{\top}\) denotes the bias terms for all masked boards. \(\mathbf{q}=[q_{S_{1}},q_{S_{2}},...,q_{S_{2^{n}}}]^{\top}\). \(\mathbf{I}_{\text{and}}=[I_{\text{and}}(S_{1}),I_{\text{and}}(S_{2}),...,I_{\text {and}}(S_{2^{n}})]^{\top}\) and \(\mathbf{I}_{\text{or}}=[I_{\text{or}}(S_{1}),I_{\text{or}}(S_{2}),...,I_{\text{or }}(S_{2^{n}})]^{\top}\) denote AND interactions and OR interactions, respectively. AND interactions \(\{I_{\text{and}}(S)\}_{S\subseteq N}\) are computed by setting \(v_{\text{and}}(\mathbf{x}_{T})=\frac{1}{2}\cdot[v(\mathbf{x}_{T})+q_{T}]+p_{T}\), and OR interactions \(\{I_{\text{or}}(S)\}_{S\subseteq N}\) are computed by setting \(v_{\text{or}}(\mathbf{x}_{T})=\frac{1}{2}\cdot[v(\mathbf{x}_{T})+q_{T}]-p_{T}\) in Equation (4).
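The following is a hedged sketch of how the minimization in Equation (6) could be set up with automatic differentiation on a small toy board; it is not the authors' released implementation, and the stand-in scores `v` would in practice be the value network's advantage scores (or the revised scores introduced in Challenge 3 below):

```python
import torch

def mobius(u, n):
    """I[S] = sum over T subseteq S of (-1)^{|S|-|T|} u[T], over bitmask-indexed u."""
    I = u.clone()
    idx = torch.arange(2 ** n)
    for i in range(n):
        bit = 1 << i
        has = (idx & bit).bool()
        I = torch.where(has, I - I[idx ^ bit], I)
    return I

def and_or_interactions(v, n, p, q, tau=0.38):
    q_c = tau * torch.tanh(q)        # enforce |q_T| < tau smoothly
    v_and = 0.5 * (v + q_c) + p      # v_and + v_or == v + q_c by construction
    v_or = 0.5 * (v + q_c) - p
    I_and = mobius(v_and, n)
    rev = torch.arange(2 ** n) ^ (2 ** n - 1)
    I_or = -mobius(v_or[rev], n)     # Theorem 2 duality: reverse the masks, negate
    return I_and, I_or

n = 6                                # toy board; the paper uses n = 10 stones
v = torch.randn(2 ** n)              # stand-in for the network outputs v(x_T)
p = torch.zeros(2 ** n, requires_grad=True)
q = torch.zeros(2 ** n, requires_grad=True)
opt = torch.optim.Adam([p, q], lr=0.01)
for _ in range(500):
    I_and, I_or = and_or_interactions(v, n, p, q)
    loss = I_and.abs().sum() + I_or[1:].abs().sum()  # I_or is defined for S != empty set
    opt.zero_grad()
    loss.backward()
    opt.step()
```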
\(\bullet\)**Challenge 2. Verifying that the inference logic of the value network can be explained as sparse interaction primitives.** Although Ren et al. (2023b) have proved that a well-trained DNN usually just encodes a small number of AND interactions between input variables for inference under some common conditions\({}^{3}\), it is still a challenge to strictly examine whether the value network fully satisfies these conditions. Moreover, although according to Theorem 2 the above OR interaction can be considered as a specific AND interaction, in real applications we still need to verify the sparsity of the interactions encoded by the value network for the Go game.
Footnote 7: Please see Appendix G for the reason why the saturation problem causes high-order interactions.
Therefore, we experimentally examine the sparsity of interactions on KataGo (Wu, 2019), a free open-source neural network for the game of Go that has defeated top-level human players. We extract interactions encoded by the value network of KataGo. Specifically, we use KataGo to generate a board state by letting KataGo take turns playing the moves of black stones and those of white stones. Let there be \(m\) stones on the board. Considering the exponentially large cost of computing interactions, we just select and explain \(n=10\) stones (\(n\leq m\)), including \(\frac{n}{2}\) white stones and \(\frac{n}{2}\) black stones, and limit our attention to interactions between these \(n\) stones. All other stones on the board can be considered as constant background, whose interactions are not computed. Figure 2 shows the strength \(|I(S)|\) of effects of different AND interactions and OR interactions in descending order. Only a few interactions have salient effects, while 80\(\%\)-85\(\%\) of interactions have negligible effects, which verifies that the interactions encoded by the value network are sparse.
\(\bullet\)**Challenge 3. How to ensure that the inference logic of the value network can be explained as simple interaction primitives?** We find that most interaction primitives extracted from KataGo are high-order interaction primitives (see Figure 4). The high order of interaction primitives significantly boosts the difficulty of extracting common shape patterns widely used in different games, because high-order interactions are usually considered "special shapes" memorized for a specific game, instead of simple (low-order) shape patterns frequently used in different games. For example, as Figure 3 (b) shows, the 3-order interaction primitives \(S_{1},S_{2},S_{3}\) extracted from the board state \(\mathbf{x}^{(1)}\) can be transferred to another board state \(\mathbf{x}^{(2)}\), whereas the 8-order interaction primitive \(S_{4}\) extracted from \(\mathbf{x}^{(1)}\) cannot.
The reason for the emergence of high-order interactions is that most training samples for the value network are biased toward states with similar numbers of white and black stones, because in real games the board always contains similar numbers of white and black stones. Such a bias leads to the following saturation problem, which makes most interaction primitives high-order primitives\({}^{7}\). We use \(\Delta n(T)=n_{\text{white}}(T)-n_{\text{black}}(T)\in\{-n/2,-n/2+1,...,n/2\}\) to measure the unbalance level of the masked state \(\mathbf{x}_{T}\), where \(n_{\text{white}}(T)\) and \(n_{\text{black}}(T)\) denote the number of white stones and that of black stones on \(\mathbf{x}_{T}\), respectively. As Figure 3 (a) shows, we compute the average advantage score over all masked states \(\mathbf{x}_{T}\) with the same unbalance level \(k\in\{-n/2,-n/2+1,...,n/2\}\), \(A_{k}=\mathbb{E}_{\mathbf{x}}\mathbb{E}_{T\subseteq N:\Delta n(T)=k}\log(\frac{p_{\text{white}}(\mathbf{x}_{T})}{1-p_{\text{white}}(\mathbf{x}_{T})})\). We find that this average advantage score \(A_{k}\) is not roughly linear in \(k\), but saturates when \(|k|\) is large. This is the main reason for high-order interactions\({}^{7}\).

Figure 2: Strength of effects of all AND interactions and OR interactions in descending order. Only a small number of interactions have salient effects on the output of the value network.
In order to alleviate the above saturation problem, we revise the advantage score \(v(\mathbf{x}_{T})\) in Equation (1) to remove the value shift caused by the saturation problem, _i.e._, \(u(\mathbf{x}_{T})=v(\mathbf{x}_{T})-a_{k}\). Given a masked state \(\mathbf{x}_{T}\), we compute its unbalance level \(k=\Delta n(T)=n_{\text{white}}(T)-n_{\text{black}}(T)\in\{-n/2,-n/2+1,...,n/2\}\). \(a_{k}\) is initialized as the average advantage score \(A_{k}\). We extend the loss function in Equation (6) as follows to learn the parameters \(\mathbf{a}=[a_{-\frac{n}{2}},a_{-\frac{n}{2}+1},...,a_{\frac{n}{2}}]^{\top}\in \mathbb{R}^{n+1}\).
\[\min_{\mathbf{p},\mathbf{q}\in\mathbb{R}^{2^{n}},\mathbf{a}\in\mathbb{R}^{n+1}}\|\mathbf{I}_{ \text{and}}\|_{1}+\|\mathbf{I}_{\text{or}}\|_{1}\quad\text{s.t.}\quad\forall S \subseteq N,|q_{S}|<\tau \tag{7}\]
We learn parameters \(\mathbf{p}\), \(\mathbf{q}\), and \(\mathbf{a}\) to obtain the sparse decomposition of AND interactions \(\mathbf{I}_{\text{and}}=[I_{\text{and}}(S_{1}),I_{\text{and}}(S_{2}),...,I_{\text{and}}(S_{2^{n}})]^{\top}\), and OR interactions \(\mathbf{I}_{\text{or}}=[I_{\text{or}}(S_{1}),I_{\text{or}}(S_{2}),...,I_{\text{or}}(S_{2^{n}})]^{\top}\). AND interactions \(\{I_{\text{and}}(S)\}_{S\subseteq N}\) are computed by setting \(v_{\text{and}}(\mathbf{x}_{T})=\frac{1}{2}\cdot[u(\mathbf{x}_{T})+q_{T}]+p_{T}\), and OR interactions \(\{I_{\text{or}}(S)\}_{S\subseteq N}\) are computed by setting \(v_{\text{or}}(\mathbf{x}_{T})=\frac{1}{2}\cdot[u(\mathbf{x}_{T})+q_{T}]-p_{T}\) in Equation (4). The small threshold \(\tau=0.38\) is set to be the same as in Equation (6).
_Penalizing high-order interactions._ Besides, we can further add another loss to Equation (7) to penalize high-order interactions, _i.e._, \(Loss=\|\mathbf{I}_{\text{and}}\|_{1}+\|\mathbf{I}_{\text{or}}\|_{1}+r\cdot(\|\mathbf{I}_{\text{and}}^{\text{high}}\|_{1}+\|\mathbf{I}_{\text{or}}^{\text{high}}\|_{1})\), where \(\mathbf{I}_{\text{and}}^{\text{high}}\) denotes a 386-dimensional vector corresponding to the 386 interactions of the 6th-10th orders in the vector \(\mathbf{I}_{\text{and}}\), and \(\mathbf{I}_{\text{or}}^{\text{high}}\) is defined analogously. In this loss, we set \(r=5.0\) to boost the penalty on high-order interactions.
_Experiments._ We conduct experiments to check whether the above methods can reduce the complexity (order) of the extracted interactions, compared with the original interactions extracted by the method in Equation (6). Specifically, we follow the experimental settings in Challenge 2 to generate a board, and compute AND-OR interactions between the selected \(n\) stones. Then, we compute the average strength of AND-OR interactions of different orders, \(\mathbb{E}_{S:|S|=m}[|I_{\text{and}}(S)|]\) and \(\mathbb{E}_{S:|S|=m}[|I_{\text{or}}(S)|]\), respectively. Figure 4 shows the average strength of interaction effects. For both AND interactions and OR interactions, we observe that the revised method generates much weaker high-order interactions than the original method in Equation (6). This verifies the effectiveness of the revised method in reducing the complexity of the extracted interactions.
_Sparsity of interactions extracted by the revised method._ We follow the experimental settings in Challenge 2 to generate 50 game states, and visualize the strength of all AND interactions and all OR interactions of all these 50 game states in descending order. Figure 5 (a) shows that only a few interactions have salient effects, while more than 90\(\%\) of interactions have small effects, which verifies the sparsity of the interactions extracted by the revised method.

Figure 4: Average strength of effects for interactions of different orders. For different games, the revised method extracts weaker high-order interactions than the original method.
_Still satisfying the universal matching property in Theorem 3._ Theoretically, the AND-OR interactions extracted by our revised method still satisfy the universal matching property in Theorem 3. Furthermore, given a board state \(\mathbf{x}\), we conduct experiments to examine whether we can use the extracted AND-OR interactions to approximate the network outputs \(v(\mathbf{x}_{T})\) on all different randomly masked board states \(\{\mathbf{x}_{T}\}_{T\subseteq N}\). To this end, for each arbitrarily masked board state \(\mathbf{x}_{T}\), we measure the approximation error \(\Delta v_{T}=|v^{\text{real}}(\mathbf{x}_{T})-v^{\text{approx}}(\mathbf{x}_{T})|\) of using AND-OR interactions to mimic the real network output \(v^{\text{real}}(\mathbf{x}_{T})\), where \(v^{\text{approx}}(\mathbf{x}_{T})=v(\mathbf{x}_{\emptyset})+\sum_{S\subseteq T,S\neq\emptyset}I_{\text{and}}(S)+\sum_{S\cap T\neq\emptyset,S\neq\emptyset}I_{\text{or}}(S)\) represents the score approximated by AND-OR interactions according to Theorem 3. In Figure 6, the solid curve shows the real network outputs on all \(2^{n}\) randomly masked board states when we sort all \(2^{n}\) network outputs in ascending order. The shaded area shows the smoothed approximation error, computed by averaging the approximation errors of the neighboring 50 masked board states. Figure 6 shows that the approximated outputs \(v^{\text{approx}}(\mathbf{x}_{T})\) match the real outputs \(v^{\text{real}}(\mathbf{x}_{T})\) well over different randomly masked states, which indicates that the output of the value network can be explained by AND-OR interactions.
### Discovering novel shapes from the value network
In the above section, we have extracted sparse and simple interaction primitives from the value network. In this section, we aim to discover novel shapes from these interaction primitives, and use the discovered novel shapes to teach people about the game of Go.
We have examined the sparsity of interaction primitives in experiments. We can usually extract about 100-250 interaction primitives to explain the output score of a single board state. However, the number of primitives is still too large to teach people, and we need a more efficient way to discover novel shapes encoded by the value network. Therefore, we visualize all interaction primitives, and then identify some specific combinations of stones that frequently appear in different interaction primitives. We refer to these combinations as "_common coalitions_." For example, given a board state \(\mathbf{x}\) with \(n\) stones, indexed by \(N=\{1,2,...,n\}\) in Figure 1, we can extract some salient interactions from the board state \(\mathbf{x}\), such as \(S_{1}=\{1,3,4,5,8\},S_{2}=\{1,3,8\},S_{3}=\{1,3,8,9\}\), _etc_. The coalition \(T=S_{1}\cap S_{2}\cap S_{3}=\{1,3,8\}\) participates in different interactions \(T\subseteq S_{1},S_{2},S_{3}\). We can consider this coalition \(T\) as a _classical shape pattern_ encoded by the value network.
Therefore, we further compute the attribution \(\varphi(T)\) of each coalition \(T\) to the advantage score \(v(\mathbf{x})\) estimated by the value network. In this way, a positive attribution \(\varphi(T)>0\) means that the shape pattern of the coalition \(T\) tends to enhance the advantage of white stones. In comparison, a negative attribution \(\varphi(T)<0\) means that the shape pattern of the coalition \(T\) tends to decrease the advantage score. \(\varphi(T)\approx 0\) means that although the coalition \(T\) is well modeled by the value network, the coalition \(T\) has contradictory effects when it appears in different interactions, thereby not making a significant effect on the advantage score.
Figure 5: (a) Strength of all revised AND interactions and revised OR interactions of all 50 games in a descending order. Only a few interactions have salient effects on the output of the value network. (b) Interaction context of the coalition.

There are many attribution methods (Lundberg and Lee, 2017; Selvaraju et al., 2017; Zhou et al., 2016; Zintgraf et al., 2017) to estimate the attribution/importance score of the input variables of an AI model, _e.g._, the attributions of different image patches to an image-classification score, or the attributions of different tokens in natural language processing. However, there is no widely accepted method to estimate the attribution of a coalition of input variables, because most attribution methods cannot generate self-consistent attribution values\({}^{8}\). Therefore, we apply the method of [Xinhao Zheng, 2023] to define the attribution of a coalition \(T\). This method extends the theory of the Shapley value and well explains the above inconsistency problem. Specifically, the attribution score \(\varphi(T)\) of the coalition \(T\) is formulated as the weighted sum of effects of AND-OR interactions, as follows.
Footnote 8: We use the following example to introduce the inconsistency problem. We can simply consider a coalition \(T\) (_e.g._, \(T=\{1,2,3\}\)) of input variables as a singleton variable \([T]\), then we have a total of \(n-2\) input variables in \(N^{\prime}=\{[T],4,5,...,n\}\). Let \(\varphi([T])\) denote the attribution of \([T]\) computed on the new partition \(N^{\prime}\) of the \(n-2\) variables. Alternatively, we can also consider \(x_{1}\), \(x_{2}\), \(x_{3}\) as three individual variables, and compute their attributions \(\varphi(1)\), \(\varphi(2)\), \(\varphi(3)\) given the original partition of input variables \(N=\{1,2,...,n\}\). However, for most attribution methods, \(\varphi([T])\neq\varphi(1)+\varphi(2)+\varphi(3)\). This is the inconsistency problem of attributions.
\[\varphi(T)=\sum\nolimits_{S\supseteq T}\frac{|T|}{|S|}[I_{\text{and}}(S)+I_{ \text{or}}(S)] \tag{8}\]
\[\varphi(T)-\sum\nolimits_{i\in T}\phi(i)=\sum\nolimits_{S\subseteq N,S\cap T \neq\emptyset,S\cap T\neq T}\frac{|S\cap T|}{|S|}[I_{\text{and}}(S)+I_{\text{or }}(S)] \tag{9}\]
Let there be some AND interactions and OR interactions containing the coalition \(T\). Then, Equation (8) shows that for each interaction \(S\supseteq T\) containing the coalition \(T\), we must allocate a ratio \(\frac{|T|}{|S|}\) of its interaction effect as a numerical component of \(\varphi(T)\). In addition, Appendix H shows a list of theorems and properties of the attribution of the coalition defined in Equation (8), which theoretically guarantee the faithfulness of the attribution metric \(\varphi(T)\). For example, Equation (9) explains the difference \(\varphi(T)-\sum_{i\in T}\phi(i)\) between the coalition's attribution \(\varphi(T)\) and the sum of Shapley values \(\phi(i)\) for all input variables \(i\) in \(T\). The difference comes from those interactions that only contain partial variables in \(T\), not all variables in \(T\). Please see Appendix H for more theorems.
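A minimal sketch of Equation (8) over a dictionary of hypothetical salient interaction effects (values chosen for illustration only, in the spirit of Figure 1):

```python
def coalition_attribution(T, I_and, I_or):
    """Equation (8): phi(T) = sum over S containing T of (|T|/|S|) [I_and(S) + I_or(S)]."""
    T = frozenset(T)
    effects = {S: I_and.get(S, 0.0) + I_or.get(S, 0.0)
               for S in set(I_and) | set(I_or)}
    return sum(len(T) / len(S) * e for S, e in effects.items() if T <= S)

# Hypothetical salient effects (near-zero interactions omitted).
I_and = {frozenset({1, 3, 8}): 0.9, frozenset({1, 3, 4, 5, 8}): -0.4}
I_or = {}
print(coalition_attribution({1, 3, 8}, I_and, I_or))  # 0.9 + (3/5)(-0.4) = 0.66
```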
_Experiments._ Given a board state, we extract interaction primitives encoded by the value network, _i.e._, \(\{S:|I(S)|>\xi\}\), where \(\xi=0.15\cdot\text{max}_{S}|I(S)|\). Then, we manually annotate 50 coalitions based on the guidance from professional human Go players. Figure 7 visualizes sixteen coalitions selected from four game states. Figure 5 (b) shows the interaction context of the coalition. Please see Appendix I.3 for more details about the computation of the attribution of the interaction context.
Figure 7: Estimated attributions of different coalitions (shape patterns). Stones in the coalition are highlighted by red circles.
### Human players' interpretation of the classic shapes/coalitions
In order to interpret the shape patterns (coalitions) encoded by the value network, we collaborated with professional human Go players\({}^{9}\). Based on Figure 7, they find both shape patterns that fit the common understanding of human players and shape patterns that conflict with human understanding.
Footnote 9: During the review phase, the Go players are anonymous, because they are also authors.
**Cases that fit human understandings.** For the Game 1 in Figure 7 (1.a - 1.d), \(\varphi(\{1,2,3,8\})<\varphi(\{2,3,8\})\) and \(\varphi(\{1,2,3,7\})<\varphi(\{2,3,7\})\). It means that when the white stone \(x_{1}\) participates in the combination of white stones \(x_{2},x_{3}\), the advantage of white stones become lower, _i.e_., the stone \(x_{1}\) is a low-value move. Go players consider that the effect of the combination of white stones \(x_{1},x_{2},x_{3}\) is low. For the Game 2 in Figure 7 (2.a, 2.b), \(\varphi(\{1,5,8\})>\varphi(\{1,2,8\})\) means that the value network considers that the white stone \(x_{5}\) has higher value than \(x_{2}\). Go players consider that in this game state, the white stone \(x_{5}\) protects the white stones \(x_{1},x_{2},x_{3},x_{4}\), and the white stones \(x_{1},x_{3}\) attack the black stones \(x_{6},x_{7}\), but the white stone \(x_{2}\) has much less value than other stones. For the Game 3 in Figure 7 (3.a - 3.c), \(\varphi(S_{1}=\{1,3,8\})>\varphi(S_{3}=\{1,2,3,8\})\) and \(\varphi(S_{2}=\{2,3,8\})>\varphi(S_{3}=\{1,2,3,8\})\), subject to \(S_{3}=S_{1}\cup S_{2}\). Go players consider that the existence of the local shape \(S_{1}=\{1,3,8\}\) makes the move of the stone \(x_{2}\) have a low value, _i.e_., given the context \(S_{1}\), the stone \(x_{2}\) wastes a move, thereby losing some advantages. Figure 7 shows some strange shape patterns that go beyond the understandings of human Go players.
**Cases that conflict with human understandings.** For Game 3 in Figure 7 (3.d, 3.e), \(\varphi(\{6,7,8\})=1.00\) and \(\varphi(\{3,6,7\})=-1.15\), Go players are confused that the coalition \(\{6,7,8\}\) is advantageous for white stones, and the coalition \(\{3,6,7\}\) is advantageous for black stones. For Game 4 in Figure 7 (4.a, 4.b), \(\varphi(\{1,2,3,4\})=1.34\) and \(\varphi(\{1,2,3,9\})=-1.71\). It means that the coalition \(\{1,2,3,4\}\) is advantageous for white stones, and the coalition \(\{1,2,3,9\}\) is advantageous for black stones.
## 4 Conclusion
In this paper, we extract sparse interactions between stones memorized by the value network for the game of Go. We regard common coalitions shared by different interactions as shape patterns, and estimate attribution values of these common coalitions. Then, we examine the fitness and conflicts between the automatically extracted shape patterns and conventional human understanding of the game of Go, so as to help human players learn novel shapes from the value network. We collaborate with professional human Go players to provide deep insights into shape patterns that are automatically extracted from the value network.
## Ethic Statement
This paper aims to extract sparse and simple interaction primitives between stones encoded by the value network for the game of Go, thereby teaching people to learn from the value network. Previous methods usually extract AND-OR interactions to represent the primitives encoded by the AI model. However, we discover that although AND-OR interactions have some good mathematical properties, the interaction primitives (shape patterns) extracted by this method are usually extremely complex, _i.e._, the shape patterns usually contain many stones. Such complexity of the extracted shape patterns makes it difficult for people to learn from the value network. Thus, we propose a method to extract sparse and simple interactions encoded by the value network. There are no ethical issues with this paper.
## Reproducibility Statement
We have provided proofs for the theoretical results of this study in Appendix A, B, C, D, E, F, G, H. We have also provided experimental details in Appendix I and more experimental results in Appendix J. Furthermore, we will release the code when the paper is accepted.
|
2301.11936 | Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks
with Quantum Computation | A significant challenge in the field of quantum machine learning (QML) is to
establish applications of quantum computation to accelerate common tasks in
machine learning such as those for neural networks. Ridgelet transform has been
a fundamental mathematical tool in the theoretical studies of neural networks,
but the practical applicability of ridgelet transform to conducting learning
tasks was limited since its numerical implementation by conventional classical
computation requires an exponential runtime $\exp(O(D))$ as data dimension $D$
increases. To address this problem, we develop a quantum ridgelet transform
(QRT), which implements the ridgelet transform of a quantum state within a
linear runtime $O(D)$ of quantum computation. As an application, we also show
that one can use QRT as a fundamental subroutine for QML to efficiently find a
sparse trainable subnetwork of large shallow wide neural networks without
conducting large-scale optimization of the original network. This application
discovers an efficient way in this regime to demonstrate the lottery ticket
hypothesis on finding such a sparse trainable neural network. These results
open an avenue of QML for accelerating learning tasks with commonly used
classical neural networks. | Hayata Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda | 2023-01-27T19:00:00Z | http://arxiv.org/abs/2301.11936v2 | # Quantum Ridgelet Transform:
###### Abstract
Ridgelet transform has been a fundamental mathematical tool in the theoretical studies of neural networks. However, the practical applicability of ridgelet transform to conducting learning tasks was limited since its numerical implementation by conventional classical computation requires an exponential runtime \(\exp(O(D))\) as data dimension \(D\) increases. To address this problem, we develop a quantum ridgelet transform (QRT), which implements the ridgelet transform of a quantum state within a linear runtime \(O(D)\) of quantum computation. As an application, we also show that one can use QRT as a fundamental subroutine for quantum machine learning (QML) to efficiently find a sparse trainable subnetwork of large shallow wide neural networks without conducting large-scale optimization of the original network. This application discovers an efficient way in this regime to demonstrate the lottery ticket hypothesis on finding such a sparse trainable neural network. These results open an avenue of QML for accelerating learning tasks with commonly used classical neural networks.
Machine Learning, ICML
## 1 Introduction
Quantum machine learning (QML) is an emerging field of research to take advantage of quantum computation for accelerating machine-learning tasks (Biamonte et al., 2017; Ciliberto et al., 2018; Schuld and Petruccione, 2021). Quantum computation can achieve significant speedups compared to the best existing algorithms with conventional classical computation in solving various computational tasks (Nielsen and Chuang, 2011; de Wolf, 2019), such as Shor's algorithm for integer factorization (Shor, 1997). QML indeed has advantages in learning data obtained from quantum states (Sweke et al., 2021; Huang et al., 2021, 2022; Chen et al., 2022), yet machine learning commonly deals with classical data rather than quantum states. For a classical dataset constructed carefully so that its classification reduces to a variant of Shor's algorithm, QML achieves the classification superpolynomially faster than classical algorithms (Liu et al., 2021); however, the applicability of such QML to practical datasets has been unknown. Meanwhile, motivated by the success of neural networks (Goodfellow et al., 2016), various attempts have been made to apply quantum computation to more practical tasks for neural networks. For example, one widely studied approach in QML is to use parameterized quantum circuits, often called "quantum neural networks", as a potential substitute for conventional classical neural networks; however, problematically, the parameterized quantum circuits do not successfully emulate essential components of the neural networks, e.g., perceptrons and nonlinear activation functions, due to the linearity of the transformation implemented by the quantum circuits (Schuld and Petruccione, 2021). Thus, a significant challenge in QML has been to develop a novel technique to bridge the gap between quantum computation and classical neural networks, so as to clarify what advantage QML could offer on top of the empirically proven merit of the classical neural networks.
To address this challenge, we here develop a fundamental quantum algorithm for making the tasks for classical neural networks more efficient, based on ridgelet transform. Ridgelet transform, one of the well-studied integral transforms in signal processing, is a fundamental mathematical tool for studying neural networks in the over-parameterized regime (Murata, 1996; Candes, 1998; Rubin, 1998; Starck et al., 2010; Sonoda et al., 2021; 2022a;b). Let \(f:\mathbb{R}^{D}\rightarrow\mathbb{R}\) denote a function with \(D\)-dimensional input, to be learned with a neural network. For an activation function \(g:\mathbb{R}\rightarrow\mathbb{R}\) such as the rectified linear unit (ReLU), a shallow feed-forward neural network with a single hidden layer is represented by \(f(\mathbf{x})\approx\sum_{n=1}^{N}w_{n}g(\mathbf{a}_{n}^{\top}\mathbf{x}-b_{n})\), where \(N\) is the number of nodes in the hidden layer, and \(w_{n}\) is the weight of the map \(g(\mathbf{a}_{n}^{\top}\mathbf{x}-b_{n})\) parameterized by \((\mathbf{a}_{n},b_{n})\) at node \(n\in\{1,\dots,N\}\) (Goodfellow et al., 2016). In the over-parameterized (continuous) limit \(N\rightarrow\infty\), the representation simplifies into an **integral representation** of the neural network (Barron, 1993; Murata, 1996; Candes, 1998; Sonoda & Murata, 2017), i.e.,
\[f(\mathbf{x})=S[w](\mathbf{x})\coloneqq\int_{\mathbb{R}^{D}\times\mathbb{R}}\!\!d\mathbf{a} \,db\,w(\mathbf{a},b)g(\mathbf{a}^{\top}\mathbf{x}-b), \tag{1}\]
where \((\mathbf{a},b)\) runs over all possible parameters in the continuous space, and \(w:\mathbb{R}^{D}\times\mathbb{R}\to\mathbb{R}\) at each \((\mathbf{a},b)\) corresponds to the weight \(w_{n}\) at the node \(n\) with parameter \((\mathbf{a}_{n},b_{n})=(\mathbf{a},b)\). With a ridgelet function \(r:\mathbb{R}\to\mathbb{R}\) that we appropriately choose corresponding to \(g\), the \(D\)-dimensional **ridgelet transform** \(R[f]\) is defined as an inverse transform of \(S[w]\) in (1), characterizing a weight \(w\) to represent \(f\), given by
\[w(\mathbf{a},b)=R[f](\mathbf{a},b)\coloneqq\int_{\mathbb{R}^{D}}d\mathbf{x}\,f(\mathbf{x})r( \mathbf{a}^{\top}\mathbf{x}-b). \tag{2}\]
A wide class of functions \(f\) is known to be representable as (1); moreover, if \(g\) and \(r\) satisfy a certain admissibility condition, we can reconstruct \(f\) from the ridgelet transform of \(f\), i.e., \(f\propto S[R[f]]\), up to a normalization factor (Sonoda & Murata, 2017). For theoretical analysis, an essential benefit of the integral representation is to simplify the analysis by linearity; that is, we can regard (1) as the linear combination of a non-orthogonal over-complete basis of functions, i.e., \(\{g(\mathbf{a}^{\top}\mathbf{x}-b):(\mathbf{a},b)\in\mathbb{R}^{D}\times\mathbb{R}\}\), with weight \(w(\mathbf{a},b)\) given by the ridgelet transform of \(f\).
Progressing beyond using the ridgelet transform for theoretical analysis, our key idea is to study its use for conducting tasks for neural networks. However, the \(D\)-dimensional ridgelet transform has been computationally hard to use in practice since the existing algorithms for ridgelet transform with conventional classical computation require \(\exp(O(D))\) runtime as \(D\) increases (Do & Vetterli, 2003; Carre & Andres, 2004; Helbert et al., 2006; Sonoda & Murata, 2014). After all, the \(D\)-dimensional ridgelet transform is a transform of \(D\)-dimensional functions in an \(\exp(O(D))\)-size space (see Sec. 2 for detail), and classical algorithms for such transforms conventionally need \(\exp(O(D))\) runtime; e.g., fast Fourier transform may be a more established transform algorithm but still needs \(O(n\log(n))=\exp(O(D))\) runtime for the space of size \(n=\exp(O(D))\). To solve these problems, we discover that we can employ quantum computation. Our results are as follows.
1. (Sec. 2) To make exact implementation of ridgelet transform possible for computers with a finite number of bits and quantum bits (qubits), we formulate a new discretized version of ridgelet transform, which we call **discrete ridgelet transform**. We prove that our discretized ridgelet transform can be used for exactly representing any function on the discretized domain.
2. (Sec. 3) We develop a quantum algorithm to apply the \(D\)-dimensional discrete ridgelet transform to a quantum state of \(O(D)\) qubits, i.e., a state in an \(\exp(O(D))\)-dimensional space, only within linear runtime \(O(D)\). We call this quantum algorithm **quantum ridgelet transform (QRT)**. QRT is exponentially faster in \(D\) than the \(\exp(O(D))\) runtime of the best existing classical algorithm for ridgelet transform in the \(\exp(O(D))\)-size space, in the same spirit as **quantum Fourier transform (QFT)** (Coppersmith, 1994) being exponentially faster than the corresponding classical algorithm of fast Fourier transform.
3. (Sec. 4) As an application, we demonstrate that we can use QRT to learn a sparse representation of an unknown function \(f\) by sampling a subnetwork of a shallow wide neural network to approximate \(f\) well. We analytically show the advantageous cases of our algorithm and also conduct a numerical simulation to support the advantage. This application is important as a demonstration of the **lottery ticket hypothesis** (Frankle & Carbin, 2019), as explained in the following.
**Contribution to QML with neural networks.** State-of-the-art neural networks have billions of parameters to attain high learning accuracy, but such large-scale networks may be problematic for practical use, e.g., with mobile devices and embedded systems. Pruning techniques for neural networks are therefore of growing importance. The lottery ticket hypothesis by Frankle & Carbin (2019) claims that, in such a large-scale neural network, one can find a sparse trainable subnetwork. However, it is computationally demanding to search for the appropriate subnetwork in the large-scale neural network.
To apply QML to this pruning problem, our idea is to use QRT for preparing a quantum state so that, by measuring the state, we can sample the parameters of the important nodes for the subnetwork with high probability. To make this algorithm efficient, we never store all parameters of the large original neural network in classical memory but represent them by the amplitude of the quantum state prepared directly from given data. Conventionally, quantum computation can use QFT to achieve superpolynomial speedups over classical computation for various search problems (Simon, 1994; Shor, 1997; Bernstein & Vazirani, 1997; Yamakawa & Zhandry, 2022). By contrast, we make quantum computation applicable to searching in the parameter space of neural networks, by developing QRT to be used in place of QFT.
Consequently, our results show that QRT can be used as a fundamental subroutine for QML to accelerate the tasks for the classical neural networks. A potential drawback may be that our current technique is designed simply for the shallow neural networks with a single hidden layer; however, studies of shallow neural networks capture various essential features of neural networks. We leave the generalization to deep neural networks for future research, but our development opens a promising route in this direction.
## 2 Discrete ridgelet transform
### Formulation of discrete ridgelet transform
In this section, we formulate the **discrete ridgelet transform**. Then, we also derive a **Fourier slice theorem** that characterizes our discrete ridgelet transform using Fourier transform. Although multiple definitions of discrete versions of ridgelet transform have been proposed, none of them has such a Fourier expression (Do & Vetterli, 2003; Carre & Andres, 2004; Helbert et al., 2006). By contrast, the significance of the Fourier slice theorem is that it makes the ridgelet analysis tractable with the well-established techniques for the Fourier transform, which we will use in Sec. 3 for constructing the quantum algorithm as well.
Our formulation assumes the following.
* Since computers using a finite number of bits and qubits cannot exactly deal with real numbers, we use a finite set \(\mathbb{Z}_{P}\coloneqq\{0,1,\ldots,P-1\}\) in place of the set of real numbers \(\mathbb{R}\), where \(P\) is a precision parameter representing the cardinality of \(\mathbb{Z}_{P}\). This is conventional in data representation; e.g., for a gray-scale image, we may use \(8\) bits \(\{0,1,\ldots,2^{8}-1\}\) to represent the intensity of each pixel. In our setting, we can make \(P\) larger to achieve better precision in the data representation; e.g., for a more precise representation of the gray-scale image, we may use \(16\) bits \(\{0,1,\ldots,2^{16}-1\}\) in place of the \(8\) bits for each pixel. For this improvement of the precision, we may normalize the data by rescaling while the intervals in the discretization are fixed to \(1\) for simplicity of presentation. When we write a sum, \(\mathbf{x}\), \(\mathbf{a}\), and \(\mathbf{u}\) run over \(\mathbb{Z}_{P}^{D}\), and \(y\), \(b\), and \(v\) over \(\mathbb{Z}_{P}\) unless specified otherwise. Let \(\mathcal{F}_{D}\) and \(\mathcal{F}_{1}\) denote the \(D\)-dimensional and \(1\)-dimensional discrete Fourier transforms on \(\mathbb{Z}_{P}^{D}\) and \(\mathbb{Z}_{P}\), respectively, i.e., \[\mathcal{F}_{D}[f](\mathbf{u})\coloneqq P^{-\frac{D}{2}}\sum_{\mathbf{x}}f(\mathbf{x})\mathrm{e}^{\frac{-2\pi\mathrm{i}\mathbf{u}^{\top}\mathbf{x}}{P}},\] (3) \[\mathcal{F}_{1}[g](v)\coloneqq P^{-\frac{1}{2}}\sum_{b}g(b)\mathrm{e}^{\frac{-2\pi\mathrm{i}vb}{P}}.\] (4)
* We assume in the following that \(P\) is taken as a prime number, considering \(\mathbb{Z}_{P}\) as a finite field. This is not a restrictive assumption in achieving the better precision since we can take an arbitrarily large prime number as \(P\). For example, the maximum of \(32\)-bit signed integers \(P=2^{31}-1\) can be used.
* An activation function \(g:\mathbb{R}\to\mathbb{R}\) is assumed to be normalized as \[\sum_{b}g(b)=0,\quad\|g\|_{2}^{2}\coloneqq\sum_{b}|g(b)|^{2}=1.\] (5) Indeed, any non-constant function, such as ReLU, can be used as \(g\) with normalization by adding and multiplying appropriate constants.
* Corresponding to \(g\), we choose a ridgelet function \(r:\mathbb{R}\to\mathbb{R}\) in such a way that \(g\) and \(r\) satisfy an admissibility condition \[C_{g,r}\coloneqq\sum_{v}\mathcal{F}_{1}[g](v)\overline{\mathcal{F}_{1}[r](v)} \neq 0,\] (6) where \(\overline{\cdots}\) denotes the complex conjugate. We also normalize \(r\) as \(\|r\|_{2}^{2}=1\). Note that it is conventional to choose \(r=g\), leading to \(C_{g,g}=\|\mathcal{F}_{1}[g]\|_{2}^{2}=\|g\|_{2}^{2}=1\), while our setting allows any non-unique choice of \(r\) satisfying (6) in general. A quick numerical check of the conditions (5) and (6) follows this list.
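To make these assumptions concrete, the following minimal NumPy sketch (our illustration, not part of the paper's algorithms; the choice of a ReLU sampled on \(\mathbb{Z}_{P}\) and \(P=127\) is ours) normalizes \(g\) to satisfy (5) and evaluates the admissibility constant (6) via the fast Fourier transform, assuming the conventional choice \(r=g\); the printed value \(C_{g,g}=1\) reflects Parseval's identity.

```python
import numpy as np

P = 127
g = np.maximum(np.arange(P) - P // 2, 0).astype(float)  # ReLU sampled on Z_P
g -= g.mean()                      # enforce sum_b g(b) = 0
g /= np.linalg.norm(g)             # enforce ||g||_2 = 1   (Eq. (5))
r = g                              # conventional choice r = g

# F_1[g](v) = fft(g)[v] / sqrt(P), so C_{g,r} of Eq. (6) reads:
C = np.sum(np.fft.fft(g) * np.conj(np.fft.fft(r))) / P
print(np.round(C.real, 12), abs(C.imag) < 1e-12)   # 1.0 True: admissible
```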
Then, by replacing the integral over \(\mathbb{R}\) in \(S\) of (1) and \(R\) of (2) with the finite sum over \(\mathbb{Z}_{P}\), we correspondingly define the **discretized neural network** and the **discrete ridgelet transform** of \(f:\mathbb{R}^{D}\to\mathbb{R}\) as, respectively,
\[\mathcal{S}[w](\mathbf{x})\coloneqq P^{-\frac{D}{2}}\sum_{\mathbf{a},b}w( \mathbf{a},b)g((\mathbf{a}^{\top}\mathbf{x}-b)\bmod P), \tag{7}\] \[\mathcal{R}[f](\mathbf{a},b)\coloneqq P^{-\frac{D}{2}}\sum_{\mathbf{x}}f( \mathbf{x})r((\mathbf{a}^{\top}\mathbf{x}-b)\bmod P), \tag{8}\]
where \(P^{-\frac{D}{2}}\) represents a normalization constant. Note that the discretized neural network \(\mathcal{S}[w](\mathbf{x})\) has discrete parameters \((\mathbf{a},b)\in\mathbb{Z}_{P}^{D}\times\mathbb{Z}_{P}\) for nodes in the hidden layer, but the weights \(w(\mathbf{a},b)\in\mathbb{R}\) of the nodes are real numbers.
The following theorem, the **Fourier slice theorem**, characterizes the discrete ridgelet transform \(\mathcal{R}[f]\) in terms of the Fourier transforms of \(f\) and \(r\). In the case of the continuous ridgelet transform, ridgelet analysis can be seen as a form of wavelet analysis in the Radon domain, which combines wavelet and Radon transforms (Fadili & Starck, 2012). The Fourier slice theorem for the continuous ridgelet transform follows from that for the continuous Radon transform. However, problematically, discrete versions of the Radon transform are nonunique and usually more involved, not necessarily having an exact expression in terms of the discrete Fourier transform (Beylkin, 1987; Kelley & Madisetti, 1993; Gotz & Druckmuller, 1996; Brady, 1998; Boag et al., 2000; Brandt et al., 2000; Press, 2006). As a result, the existing definitions of discrete versions of ridgelet transform have no such Fourier expression either (Do & Vetterli, 2003; Carre & Andres, 2004; Helbert et al., 2006). By contrast, we here show that our formulation of the discrete ridgelet transform \(\mathcal{R}[f]\) has the following exact characterization in the Fourier transform. See Appendix A for proof.
**Theorem 2.1** (Fourier slice theorem for discrete ridgelet transform).: _For any function \(f:\mathbb{R}^{D}\to\mathbb{R}\) and any point \(\mathbf{a}\in\mathbb{Z}_{P}^{D},v\in\mathbb{Z}_{P}\) in the discretized space, it holds that_
\[\mathcal{F}_{1}[\mathcal{R}[f](\mathbf{a},\cdot)](v)=\mathcal{F}_{D}[f](v\mathbf{a} \bmod P)\ \overline{\mathcal{F}_{1}[r](v)}. \tag{9}\]
### Exact representation of functions as neural networks
Using the discrete ridgelet transform in Sec. 2.1, we here show that any function \(f\) on the discretized domain has an **exact representation** in terms of a shallow neural network with a finite number of parameters in the discretized space. In the continuous case, any square-integrable function \(f\) is represented as the continuous limit of the shallow neural networks, i.e., \(f=S[w]\) in (1), with the weight given by the ridgelet transform \(w\propto R[f]\) (Sonoda & Murata, 2017). With discretization, it is nontrivial to show such an exact representation due to finite precision in discretizing the real numbers. Nevertheless, we here show that any function \(f(\mathbf{x})\) for \(\mathbf{x}\in\mathbb{Z}_{P}^{D}\) can be exactly represented as \(f(\mathbf{x})=\mathcal{S}[w](\mathbf{x})\) as well, with the weight given by our formulation of the discrete ridgelet transform \(w\propto\mathcal{R}[f]\), which we may call exact representation as a discretized neural network.
In particular, the following theorem shows that any \(D\)-dimensional real function \(f\) on the discrete domain can be exactly represented as a linear combination of non-orthogonal basis functions \(\{g((\mathbf{a}^{\top}\mathbf{x}-b)\bmod P):(\mathbf{a},b)\in\mathbb{Z}_{P}^{D}\times \mathbb{Z}_{P}\}\) with coefficients given by \(\mathcal{R}[f]\). Due to the non-orthogonality, the coefficients may not be unique, and different choices of the ridgelet function \(r\) in (8) lead to different \(\mathcal{R}[f]\) while any of the choices can exactly reconstruct \(f\). The proof is based on the Fourier slice theorem in Theorem 2.1, crucially using the existence of an inverse element for each nonzero element of the finite field \(\mathbb{Z}_{P}\); thus, it is essential to assume that \(P\) is a prime number. See Appendix B for proof.
**Theorem 2.2** (Exact representation of function as discretized neural network).: _For any function \(f:\mathbb{R}^{D}\rightarrow\mathbb{R}\) and any point \(\mathbf{x}\in\mathbb{Z}_{P}^{D}\) in the discretized domain, we have_
\[f(\mathbf{x})=C_{g,r}^{-1}\;\mathcal{S}[\mathcal{R}[f]](\mathbf{x}), \tag{10}\]
_where \(C_{g,r}\) is a constant defined in (6)._
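As a sanity check of Theorem 2.2, the following self-contained NumPy sketch (illustrative only; the choices \(P=5\), \(D=2\), and the ReLU-derived \(g=r\) are ours) computes the discrete ridgelet transform (8) and the discretized network (7) by direct summation and verifies the exact reconstruction (10) up to floating-point error.

```python
import numpy as np

P, D = 5, 2  # small prime modulus and dimension for an exhaustive check
rng = np.random.default_rng(0)
f = rng.normal(size=(P,) * D)  # arbitrary target function on Z_P^D

# activation g on Z_P, normalized per Eq. (5); ridgelet function r = g
g = np.maximum(np.arange(P) - P // 2, 0).astype(float)
g -= g.mean(); g /= np.linalg.norm(g)
r = g

grid = np.stack(np.meshgrid(*[np.arange(P)] * D, indexing="ij"), -1).reshape(-1, D)

def ridgelet(f):
    """Discrete ridgelet transform R[f](a, b) of Eq. (8), by direct summation."""
    Rf = np.zeros((P,) * D + (P,))
    fv = f.reshape(-1)
    for a in grid:
        for b in range(P):
            Rf[tuple(a) + (b,)] = P ** (-D / 2) * np.sum(fv * r[(grid @ a - b) % P])
    return Rf

def network(w):
    """Discretized shallow network S[w](x) of Eq. (7)."""
    out = np.zeros((P,) * D)
    for x in grid:
        s = sum(np.sum(w[tuple(a)] * g[(a @ x - np.arange(P)) % P]) for a in grid)
        out[tuple(x)] = P ** (-D / 2) * s
    return out

C = np.sum(np.fft.fft(g) * np.conj(np.fft.fft(r))).real / P  # Eq. (6); equals 1 here
print(np.max(np.abs(network(ridgelet(f)) / C - f)))  # ~1e-14: Eq. (10) holds exactly
```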
## 3 Quantum ridgelet transform
In this section, we introduce **quantum ridgelet transform (QRT)**, an efficient quantum algorithm for implementing the discrete ridgelet transform formulated in Sec. 2. In various quantum algorithms, we may use quantum Fourier transform (QFT) as a fundamental subroutine. In addition to QFT, various discrete transforms can be efficiently implemented with quantum computation, such as wavelet transform (Hoyer, 1997; Fijany and Williams, 1998; Labunets et al., 2001a; Arguello, 2009; Taha, 2016; Li et al., 2019, 2022), Radon transform (Ma et al., 2022), fractional Walsh transform (Labunets et al., 2001b), Hartley transform (Tseng and Hwang, 2004), and curvelet transform (Liu, 2009). However, the existing discrete versions of ridgelet transform (Do and Vetterli, 2003; Carre and Andres, 2004; Helbert et al., 2006) lacked an implementation by quantum computation. In contrast, our QRT opens a way to use the discrete ridgelet transform as a fundamental subroutine for QML to deal with tasks for classical neural networks.
Basic notions and notations of quantum computation to describe our quantum algorithms are summarized in Appendix C. In classical computation, we may use \(\lceil\log_{2}(P)\rceil\) bits for representing \(\mathbb{Z}_{P}\), where \(\lceil x\rceil\) denotes the ceiling function, i.e., the smallest integer that is not smaller than \(x\). The quantum algorithm uses a \(\lceil\log_{2}(P)\rceil\)-qubit quantum register for \(\mathbb{Z}_{P}\). This quantum register for \(\mathbb{Z}_{P}\) is represented as a \(2^{\lceil\log_{2}(P)\rceil}\)-dimensional complex vector space \(\mathcal{H}_{P}\coloneqq(\mathbb{C}^{2})^{\otimes\lceil\log_{2}(P)\rceil}\). Using the conventional bra-ket notation, each state in the standard orthonormal basis of the registers is written as a ket (i.e., a vector) \(\ket{\mathbf{x}}\in\mathcal{H}_{P}^{\otimes D}\) for representing \(\mathbf{x}\in\mathbb{Z}_{P}^{D}\) and \(\ket{\mathbf{a},b}\coloneqq\ket{\mathbf{a}}\otimes\ket{b}\in\mathcal{H}_{P}^{\otimes D }\otimes\mathcal{H}_{P}\) for \((\mathbf{a},b)\in\mathbb{Z}_{P}^{D}\times\mathbb{Z}_{P}\), respectively.
The task of QRT is to transform a given unknown quantum state \(\ket{\psi}=\sum_{\mathbf{x}}\psi(\mathbf{x})\ket{\mathbf{x}}\) into \(\mathbf{R}\ket{\psi}=\sum_{\mathbf{a},b}\mathcal{R}[\psi](\mathbf{a},b)\ket{\mathbf{a},b}\), where \(\mathbf{R}\) is a matrix given by
\[\mathbf{R}\coloneqq P^{-\frac{D}{2}}\sum_{\mathbf{x},\mathbf{a},b}r((\mathbf{a}^{\top}\mathbf{x}-b) \bmod P)\ket{\mathbf{a},b}\bra{\mathbf{x}}. \tag{11}\]
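For intuition about why \(\ket{\psi}\mapsto\mathbf{R}\ket{\psi}\) is a valid quantum operation, one can check that \(\mathbf{R}\) is an isometry whenever \(r\) is zero-mean and unit-norm (as for the conventional choice \(r=g\)) and \(P\) is prime. The small NumPy sketch below (our illustration, with \(P=5\), \(D=1\)) builds the \(P^{D+1}\times P^{D}\) matrix of Eq. (11) explicitly and checks \(\mathbf{R}^{\top}\mathbf{R}=\mathbf{I}\), so that measuring \(\mathbf{R}\ket{\psi}\) samples \((\mathbf{a},b)\) with probability \(|\mathcal{R}[\psi](\mathbf{a},b)|^{2}\).

```python
import numpy as np

P, D = 5, 1  # tiny instance; P prime
xs = np.arange(P)
r = np.maximum(xs - P // 2, 0).astype(float)  # zero-mean, unit-norm ridgelet function
r -= r.mean(); r /= np.linalg.norm(r)

# Explicit matrix R of Eq. (11): rows indexed by (a, b), columns by x.
R = np.zeros((P ** (D + 1), P ** D))
for a in xs:
    for b in xs:
        R[a * P + b, :] = P ** (-D / 2) * r[(a * xs - b) % P]

print(np.allclose(R.T @ R, np.eye(P ** D)))  # True: R is an isometry
psi = np.random.default_rng(1).normal(size=P ** D)
psi /= np.linalg.norm(psi)                   # amplitudes of |psi>
probs = (R @ psi) ** 2                       # Born-rule outcome distribution
print(probs.sum())                           # 1.0: the norm is preserved
```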
Our algorithm works with the following assumption along with those assumed in Sec. 2.1.
* We choose the ridgelet function \(r:\mathbb{R}\rightarrow\mathbb{R}\) in such a way that a quantum state representing \(r\) by its amplitude \(\ket{r}\coloneqq\sum_{y}r(y)\ket{y}\) can be prepared efficiently in runtime \(O(\mathrm{polylog}(P))\). This assumption is not restrictive since we can use the quantum algorithm by Grover and Rudolph (2002) to meet the assumption for representative choices of \(r\) that are integrable efficiently, such as ReLU, tanh, and sigmoid.
Algorithm 1 shows our quantum algorithm for QRT. We construct the algorithm by implementing the discrete Fourier transform in the Fourier slice theorem (Theorem 2.1) by QFT. QFT applies a unitary matrix representing the discrete Fourier transform
\[\mathbf{F}_{P}\coloneqq\sum_{v,b}P^{-\frac{1}{2}}\mathrm{e}^{\frac{-2\pi\mathrm{i}vb}{P}}\ket{v}\bra{b} \tag{12}\]
to a given quantum state of \(\mathcal{H}_{P}\) within runtime \(O(\mathrm{polylog}(P))\)(Mosca and Zalka, 2004). The runtime of QFT \(\mathbf{F}_{P}\) is exponentially faster in \(P\) than the classical algorithm for fast Fourier transform in the space of size \(P\). The following theorem shows that this speedup is also the case in QRT compared to classical algorithms for computing ridgelet transform. See Appendix C for proof.
**Theorem 3.1** (Runtime of quantum ridgelet transform).: _The runtime of QRT in Algorithm 1 is_
\[O(D\times\mathrm{polylog}(P)). \tag{13}\]
## 4 Application of quantum ridgelet transform to lottery ticket hypothesis
### Setting of finding winning ticket of neural networks
In this section, as an application of quantum ridgelet transform (QRT) in Sec. 3, we propose an algorithm for finding a sparse subnetwork approximating a large neural network efficiently by quantum computation, based on the **lottery ticket hypothesis** on neural networks. The lottery ticket hypothesis by Frankle and Carbin (2019) claims that a randomly-initialized fully-connected neural network contains a subnetwork that is initialized in such a way that, when trained in isolation, it can match the accuracy of the original network after training for at most the same number of iterations. This hypothesis has been confirmed numerically in various settings. The theoretical analysis of deep neural networks is inevitably hard in general, and studies of shallow neural networks are also important for capturing the essence of neural networks; here, we consider shallow networks in a setting of regression from given data, as described in the following.
For \(D\in\{1,2,\ldots\}\) with a fixed prime number \(P\), we consider a family of problems to approximate an unknown function \(f^{(D)}:\mathbb{R}^{D}\rightarrow\mathbb{R}\) by a shallow neural network, i.e.,
\[\hat{f}^{(D)}(\mathbf{x})\coloneqq\sum_{n=1}^{N}w_{n}g((\mathbf{a}_{n}^{\top}\mathbf{x}-b_{ n})\bmod P). \tag{16}\]
Let \(p_{\mathrm{data}}^{(D)}\) be a probability mass function for the input data, which is assumed to be supported on \(\mathbb{Z}_{P}^{D}\). Suppose that we are given \(M\) input-output pairs of examples \((\mathbf{x}_{1},y_{1}=f(\mathbf{x}_{1})),\ldots,(\mathbf{x}_{M},y_{M}=f(\mathbf{x}_{M}))\in \mathbb{Z}_{P}^{D}\times\mathbb{R}\). Let \(\hat{p}_{\mathrm{data}}^{(D)}\) denote the empirical distribution of \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\). Given \(\epsilon>0\), we will analyze empirical risk minimization (Bach, 2021), i.e., minimization of the empirical risk \(\sum_{\mathbf{x}}\hat{p}_{\mathrm{data}}^{(D)}(\mathbf{x})|f^{(D)}(\mathbf{x})-\hat{f}^{( D)}(\mathbf{x})|^{2}\) to \(O(\epsilon)\). If obvious, we may omit \(D\) in superscripts; e.g., we may write \(f^{(D)}\) as \(f\).
The setting of our analysis, along with the assumptions in Sec. 3, is as follows. Based on the exact representation of \(f(\mathbf{x})\) in terms of the neural network \(\mathcal{S}[w](\mathbf{x})\) in Theorem 2.2, we can approximate \(f\) by a neural network
\[f(\mathbf{x})\!\approx\!\mathcal{S}[w_{\lambda}^{*}](\mathbf{x})\!=\!\!\sum_{\mathbf{a},b} P^{-\frac{D}{2}}w_{\lambda}^{*}(\mathbf{a},b)g((\mathbf{a}^{\top}\mathbf{x}\!-\!b)\bmod P), \tag{17}\]
where \(w_{\lambda}^{*}\) is the optimal solution of the ridge regression with the empirical distribution, i.e.,
\[w_{\lambda}^{*}(\mathbf{a},b)\coloneqq\arg\min_{w}\{\tilde{J}(w)\}, \tag{18}\]
\(\tilde{J}(w)\coloneqq J(w)+\lambda\Omega(w)\), \(J(w)\coloneqq\sum_{\mathbf{x}}\hat{p}_{\mathrm{data}}(\mathbf{x})|f(\mathbf{x})-\mathcal{S}[w](\mathbf{x})|^{2}\), \(\Omega(w)\coloneqq\|P^{-\frac{D}{2}}w\|_{2}^{2}=\sum_{\mathbf{a},b}|P^{-\frac{D}{2}}w(\mathbf{a},b)|^{2}\), and \(\lambda>0\) is a hyperparameter for regularization. Learning a general class of functions \(f^{(D)}\) on \(\mathbb{Z}_{P}^{D}\) would be inevitably demanding as its representation would require \(\exp(O(D))\) parameters to specify the values \(f^{(D)}(\mathbf{x})\) for all \(\mathbf{x}\in\mathbb{Z}_{P}^{D}\) in the worst case. We have shown such a general representation in Theorem 2.2. By contrast, our goal here is to achieve the approximation feasibly with far fewer parameters, using a subnetwork of the large original network \(\mathcal{S}[w_{\lambda}^{*}](\mathbf{x})\).
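Since \(\mathcal{S}[w]\) is linear in \(w\), the ridge regression (18) has the usual closed-form solution; the following NumPy sketch (illustrative, with tiny \(P=5\), \(D=2\), a ReLU-derived \(g\), and an arbitrary \(\lambda=10^{-3}\)) solves it over all \(P^{D+1}\) candidate nodes in terms of the rescaled weights \(\tilde{w}\coloneqq P^{-D/2}w\).

```python
import numpy as np

P, D, lam = 5, 2, 1e-3
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(*[np.arange(P)] * D, indexing="ij"), -1).reshape(-1, D)
g = np.maximum(np.arange(P) - P // 2, 0).astype(float)
g -= g.mean(); g /= np.linalg.norm(g)

f = rng.normal(size=P ** D)            # target values f(x) for x in Z_P^D
p_hat = np.full(P ** D, 1 / P ** D)    # empirical distribution (uniform here)

# Design matrix over all P^(D+1) nodes: Phi[x, (a,b)] = g((a.x - b) mod P),
# so that S[w](x) = (Phi @ w_tilde)(x) with w_tilde = P^(-D/2) w.
Phi = np.zeros((P ** D, P ** (D + 1)))
for j, a in enumerate(grid):
    for b in range(P):
        Phi[:, j * P + b] = g[(grid @ a - b) % P]

# Closed-form minimizer of J(w) + lam * Omega(w) from Eq. (18):
lhs = (Phi.T * p_hat) @ Phi + lam * np.eye(P ** (D + 1))
w_tilde = np.linalg.solve(lhs, (Phi.T * p_hat) @ f)
print(np.max(np.abs(Phi @ w_tilde - f)))  # small residual for small lambda
```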
To this goal, recall that it is conventional in statistical learning theory to consider a reasonably restricted class of functions, e.g., those with bounded norms (Bach, 2021); correspondingly, we work on a setting where the norm \(\|P^{-\frac{D}{2}}w_{\lambda}^{*}\|_{1}\coloneqq\sum_{\mathbf{a},b}|P^{-\frac{D}{2}}w_{\lambda}^{*}(\mathbf{a},b)|\) of the weights for representing \(f^{(D)}\) should be bounded even at large scales \(D\rightarrow\infty\). In particular, let \(((\mathbf{a}_{j},b_{j})\in\mathbb{Z}_{P}^{D}\times\mathbb{Z}_{P}:j\in\{1,\ldots,P^{D+1}\})\) denote a sequence of parameters of all nodes in the hidden layer of \(\mathcal{S}[w_{\lambda}^{*}](\mathbf{x})\) aligned in descending order of \(|w_{\lambda}^{*}|\), i.e., \(|w_{\lambda}^{*}(\mathbf{a}_{1},b_{1})|\geq|w_{\lambda}^{*}(\mathbf{a}_{2},b_{2})|\geq\cdots\). These nodes of \(\mathcal{S}[w_{\lambda}^{*}]\) are ordered in the same way; i.e., the weight of the \(j\)th node is \(w_{\lambda}^{*}(\mathbf{a}_{j},b_{j})\). Then, we assume the following.
* Following the convention of assumptions in the previous works by, e.g., Donoho (1993); Hayakawa and Suzuki (2020), we assume that there exist constants \(\alpha,\beta>0\) such that it holds uniformly for any \(D\) and \(j\) that \[|P^{-\frac{D}{2}}w_{\lambda}^{*}(\mathbf{a}_{j},b_{j})|\leqq\alpha j^{-(1+\beta)},\] (19) which specifies the decay rate for large \(j\), leading to \(\|P^{-\frac{D}{2}}w_{\lambda}^{*}\|_{1}\leqq\sum_{j=1}^{\infty}\alpha j^{-(1+ \beta)}\leqq\alpha+\int_{1}^{\infty}\frac{\alpha}{x^{1+\beta}}dx=\alpha+\frac {\alpha}{\beta}<\infty\). We also write the \(L^{2}\) norm as \[\gamma\coloneqq\|P^{-\frac{D}{2}}w_{\lambda}^{*}\|_{2}^{2},\] (20) which is upper bounded by \(\gamma\leqq\sum_{j=1}^{\infty}\alpha^{2}j^{-2(1+\beta)}\leqq\alpha^{2}+\int_ {1}^{\infty}\frac{\alpha^{2}}{x^{2(1+\beta)}}dx=\alpha^{2}+\frac{\alpha^{2}}{ 1+2\beta}\). Functions \((f^{(D)}:D=1,2,\ldots)\) satisfying (19) are called \((\alpha,\beta)\)-class functions, which are to be learned in our setting.
For given \(\epsilon>0\), our analysis focuses on the task of finding a sparse representation \(\hat{f}\) to approximate \(\mathcal{S}[w_{\lambda}^{*}]\) up to \(\epsilon\), i.e.,
\[\sum_{\mathbf{x}}\hat{p}_{\mathrm{data}}(\mathbf{x})|\mathcal{S}[w_{\lambda}^{*}](\bm {x})-\hat{f}(\mathbf{x})|^{2}=O(\epsilon), \tag{21}\]
with keeping the number of nodes \(N\) in the hidden layer of \(\hat{f}\) in (16) as small as possible. We assume that \(\lambda>0\) is chosen appropriately so that (21) leads to \(\sum_{\mathbf{x}}\hat{p}_{\mathrm{data}}(\mathbf{x})|f(\mathbf{x})-\hat{f}(\mathbf{x})|^{2}=O(\epsilon)\), achieving the empirical risk minimization with \(\hat{f}\). Note that if we fix \(D\), then for any \(f\), we may be able to find sufficiently large \(\alpha\) and small \(\beta\) to meet (19), but our analysis will show that assuming smaller \(\alpha\) and larger \(\beta\) guarantees smaller \(N\) to achieve (21) for the \((\alpha,\beta)\)-class functions for arbitrary \(D\).
### Quantum algorithm for sampling from optimized probability distribution
We here construct a quantum algorithm for sampling from an **optimized probability distribution** (defined later as \(p_{\lambda,\Delta}^{*}(\mathbf{a},b)\) in (22)) of parameters of nodes in the hidden layer of the large original network \(\mathcal{S}[w_{\lambda}^{*}]\) in (17), which we can use for efficiently finding a sparse subnetwork of \(\mathcal{S}[w_{\lambda}^{*}]\) to approximate \(f\) well. The original network \(\mathcal{S}[w_{\lambda}^{*}]\) has \(\exp(O(D))\) nodes to represent any function \(f\), as with Theorem 2.2. By contrast, studies on the lottery ticket hypothesis provide numerical evidences that \(f\) in practice can usually be approximated by a sparse subnetwork with much fewer parameters. One existing way to find such a subnetwork is to train the overall large network and then perform masking to eliminate the low-weight nodes while keeping those with higher weights (Frankle and Carbin, 2019). However, this approach is inefficient since one needs large-scale optimization to train the large original network before the pruning. Then, more recent studies by Zhou et al. (2019); Ramanujan et al. (2020); Wang et al. (2020); Malach et al. (2020); Orseau et al. (2020); Pensia et al. (2020) have suggested that one should be able to find the subnetwork only by pruning the initial network directly, even without the optimization for training. Still, to perform this pruning appropriately, one needs to perform a large-scale search for the subnetwork within the parameter space of the large original neural network. As \(D\) increases, it would become infeasible to deal with the large original network for training or searching as long as we use the existing methods based on classical computation.
To address this problem, our key idea is to represent the weights of the \(\exp(O(D))\) nodes in the hidden layer of the neural network \(\mathcal{S}[w_{\lambda}^{*}]\) efficiently as the amplitude of quantum state of only \(O(D)\) qubits. Roughly speaking, as in Theorem 2.2, these weights can be given by the discrete ridgelet transform \(\mathcal{R}[f]\) of \(f\), which is implementable efficiently by QRT. In particular, if we initially have a quantum state \(\ket{f}=\sum_{\mathbf{x}}f(\mathbf{x})\ket{\mathbf{x}}\), then QRT of \(\ket{f}\) can prepare \(\mathbf{R}\ket{f}=\sum_{\mathbf{a},b}\mathcal{R}[f](\mathbf{a},b)\ket{\mathbf{a},b}\) in time \(\widetilde{O}(D)\) as shown in Theorem 3.1, where \(\widetilde{O}\) may ignore polylogarithmic factors. A measurement of this quantum state \(\mathbf{R}\ket{f}\) in basis \(\{\ket{\mathbf{a},b}\}\) provides a measurement outcome \((\mathbf{a},b)\) sampled from a probability distribution proportional to the square of the amplitude, i.e., \(|\mathcal{R}[f](\mathbf{a},b)|^{2}\). In this way, we can find parameter \((\mathbf{a},b)\) for a node with large \(|\mathcal{R}[f](\mathbf{a},b)|^{2}\) with high probability, in runtime \(\widetilde{O}(D)\) per sampling. The state is corrupted by the measurement, and to perform the sampling \(N\) times, we repeat the preparation and measurement \(N\) times. To sample \((\mathbf{a},b)\) for all high-weight nodes with high probability in a theoretically guaranteed way, for \(\Delta>0\), we introduce an **optimized probability distribution**
\[p_{\lambda,\Delta}^{*}(\mathbf{a},b)\coloneqq\frac{1}{Z}\frac{|P^{-\frac{D}{2}}w_{ \lambda}^{*}(\mathbf{a},b)|^{2}}{|P^{-\frac{D}{2}}w_{\lambda}^{*}(\mathbf{a},b)|^{2}+ \Delta}, \tag{22}\]
where \(Z\) is a constant for normalization \(\sum_{\mathbf{a},b}p_{\lambda,\Delta}^{*}(\mathbf{a},b)=1\). Appropriate \(\Delta\) for our task in (21) will be specified later in Theorem 4.2. To sample from \(p_{\lambda,\Delta}^{*}\), we prepare
\[\ket{p_{\lambda,\Delta}^{*}}\coloneqq\frac{1}{\sqrt{Z}}\sum_{\mathbf{a},b}\frac{ P^{-\frac{D}{2}}w_{\lambda}^{*}(\mathbf{a},b)}{\sqrt{|P^{-\frac{D}{2}}w_{\lambda}^{*}( \mathbf{a},b)|^{2}+\Delta}}\ket{\mathbf{a},b}, \tag{23}\]
followed by performing the measurement of \(\ket{p_{\lambda,\Delta}^{*}}\) in the same way as described above.
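Classically, preparing and measuring \(\ket{p^{*}_{\lambda,\Delta}}\) amounts to sampling node indices from \(p^{*}_{\lambda,\Delta}\). The sketch below (our stand-in, using a placeholder weight vector in place of \(P^{-D/2}w^{*}_{\lambda}\) and an arbitrary \(\Delta=10^{-2}\)) checks that the Born-rule distribution of the amplitudes in (23) is exactly (22) and draws a few "measurement shots"; the quantum algorithm performs the same sampling without ever writing the \(\exp(O(D))\) weights down.

```python
import numpy as np

rng = np.random.default_rng(0)
w_tilde = rng.normal(size=125) * rng.random(125) ** 3  # placeholder for P^(-D/2) w*_lambda
Delta = 1e-2

amp = w_tilde / np.sqrt(w_tilde ** 2 + Delta)  # amplitudes of |p*> in Eq. (23)
amp /= np.linalg.norm(amp)                     # the 1/sqrt(Z) normalization
p_star = w_tilde ** 2 / (w_tilde ** 2 + Delta)
p_star /= p_star.sum()                         # Eq. (22)
assert np.allclose(amp ** 2, p_star)           # Born rule reproduces p*
print(rng.choice(amp.size, size=10, p=amp ** 2))  # ten "measurement shots" (a, b)
```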
We explicitly construct an input model for our quantum algorithm by quantum circuit as follows.
* As an input, our algorithm uses quantum circuits to prepare \(\ket{\hat{p}_{\mathrm{data}}}=\sum_{\mathbf{x}}\sqrt{\hat{p}_{\mathrm{data}}(\mathbf{x })}\ket{\mathbf{x}}\) and \(\ket{\psi_{\mathrm{in}}}\propto\sum_{\mathbf{x}}\hat{p}_{\mathrm{data}}(\mathbf{x})f(\mathbf{ x})\ket{\mathbf{x}}\). Regarding \(\ket{\hat{p}_{\mathrm{data}}}\), if we prepare \(\ket{\hat{p}_{\mathrm{data}}}\) and measure it in basis \(\{\ket{\mathbf{x}}\}\), we can randomly sample \(\mathbf{x}\) according to the empirical distribution \(\hat{p}_{\mathrm{data}}(\mathbf{x})\). For a classical algorithm, sampling from
\(\hat{p}_{\rm data}\) can be easily realized in time \(O(D\operatorname{polylog}(M))\) over \(M\) examples of \(D\)-dimensional input, by sampling \(m\in\{1,\ldots,M\}\) from the uniform distribution over \(O(\log(M))\) bits and outputting \(\mathbf{x}_{m}\in\mathbb{Z}_{P}^{D}\) out of \(\mathbf{x}_{1},\ldots,\mathbf{x}_{M}\) stored in random access memory (RAM). As for the quantum algorithm, the preparation of \(|\hat{p}_{\rm data}\rangle\) while maintaining quantum superposition may be more technical. But we show that this preparation is also implementable in time \(O(D\operatorname{polylog}(M))\), by storing the \(M\) examples upon collecting them in a sparse binary-tree data structure (Kerenidis and Prakash, 2017) with quantum RAM (QRAM) (Giovannetti et al., 2008a;b), where QRAM is implemented explicitly as a parallelized quantum circuit of depth \(O(\operatorname{polylog}(M))\) (Matteo et al., 2020; Hann et al., 2021). Using the same data structure, we also show that the preparation of \(|\psi_{\rm in}\rangle\) is implemented within the same runtime \(O(D\operatorname{polylog}(M))\). As a whole, our assumption is to store the \(M\) examples in these data structures upon collecting them, so that each preparation of \(|\hat{p}_{\rm data}\rangle\) and \(|\psi_{\rm in}\rangle\) has runtime \(O(D\operatorname{polylog}(M))=\widetilde{O}(D)\). See Appendix D for detail.
The following theorem shows that we have a quantum algorithm that can prepare and measure \(|p^{*}_{\lambda,\Delta}\rangle\) to sample from \(p^{*}_{\lambda,\Delta}\) within a linear runtime \(\widetilde{O}(\frac{D}{\lambda\Delta})\). Our algorithm is based on the analytical formula for the solution of ridge regression in (18); in particular, with \(r=g\), the formula leads to
\[|p^{*}_{\lambda,\Delta}\rangle\propto\left(\mathbf{W}_{\lambda}+\frac{\Delta}{ \gamma}\mathbf{I}\right)^{-\frac{1}{2}}\!\left(\mathbf{R}\hat{\mathbf{P}}_{\rm data}\mathbf{R}^ {\top}+\lambda\mathbf{I}\right)^{-1}\!\!\mathbf{R}\,|\psi_{\rm in}\rangle\,, \tag{24}\]
where \(\mathbf{W}_{\lambda}\coloneqq\gamma^{-1}\sum_{\mathbf{a},b}|P^{-\frac{D}{2}}w^{*}_{\lambda}(\mathbf{a},b)|^{2}\left|\mathbf{a},b\right\rangle\left\langle\mathbf{a},b\right|\) and \(\hat{\mathbf{P}}_{\rm data}\coloneqq\sum_{\mathbf{x}}\hat{p}_{\rm data}(\mathbf{x})\left|\mathbf{x}\right\rangle\left\langle\mathbf{x}\right|\). The algorithm is also explicitly presented as Algorithm 2 in Appendix D. See Appendix D for proof of its runtime as well.
**Theorem 4.1** (Runtime of quantum algorithm for sampling from the optimized probability distribution).: _Given \(\lambda,\Delta>0\), for any \(D\), a quantum algorithm can prepare and measure \(|p^{*}_{\lambda,\Delta}\rangle\) to sample from \(p^{*}_{\lambda,\Delta}(\mathbf{a},b)\) within runtime \(\widetilde{O}\left(\frac{D}{\lambda\Delta}\times\gamma\right)\) per sampling, where \(\gamma\) is a constant in (20)._
Remarkably, our construction of the quantum algorithm for Theorem 4.1 is based on two significant technical contributions. First, estimation of classical description of \(|p^{*}_{\lambda,\Delta}\rangle\) would need \(\exp(O(D))\) runtime and may cancel out the advantage of QML (Aaronson, 2015), but we avoid such slowdown. In particular, our quantum algorithm prepares \(|p^{*}_{\lambda,\Delta}\rangle\) directly from the \(M\) examples and then measure it to obtain parameter \((\mathbf{a},b)\) for a high-weight node of \(\mathcal{S}[w^{*}_{\lambda}]\) per single preparation and measurement. In this way, we circumvent the costly process of expectation-value estimation throughout our algorithm. Second, we develop a technique for implementing the inverses of \(\exp(O(D))\times\exp(O(D))\) matrices in (24) with quantum computation efficiently, yet without imposing restrictive assumptions. In particular, the inverses of \(\exp(O(D))\times\exp(O(D))\) matrices are hard to compute in classical computation, and conventional techniques in QML have required sparsity or low-rankness of the matrices to implement matrix inversion with large quantum speedups (Gilyen et al., 2019). More recent quantum-inspired classical algorithms also require the low-rank assumption (Tang, 2019). However, the matrices to be inverted in our algorithm are not necessarily sparse or low-rank, and thus imposing such assumptions would limit the applicability of QML. By contrast, we avoid imposing these assumptions by directly clarifying the quantum circuits for implementing these matrices efficiently with QRT. In the existing research, this type of technique for avoiding the sparsity and low-rankness assumptions in QML was established only for Fourier transform (Yamasaki et al., 2020; Yamasaki and Sonoda, 2021). Our development discovers wide applicability of such techniques even to a broader class of transforms including ridgelet transform.
### Quantum algorithm for finding winning ticket of neural networks and performance analysis
Using the quantum algorithm for sampling from \(p^{*}_{\lambda,\Delta}(\mathbf{a},b)\) in Theorem 4.1, we describe **an algorithm for finding a winning ticket**, i.e., a sparse trainable subnetwork of the large original network \(\mathcal{S}[w^{*}_{\lambda}]\) in (17) for approximating \(f\). We also analyze its performance with theoretical guarantee.
We here describe our algorithm for finding a winning ticket, which is also explicitly presented as Algorithm 3 in Appendix E. In our algorithm, we repeat the sampling from \(p^{*}_{\lambda,\Delta}(\mathbf{a},b)\) in total \(N\) times by the quantum algorithm in Theorem 4.1, where \(N\) is given later in (26). Letting \(\hat{\mathbb{W}}\) denote the set of sampled parameters in these \(N\) repetitions, we approximate \(\mathcal{S}[w^{*}_{\lambda}]\) by the subnetwork
\[\mathcal{S}[w^{*}_{\lambda}]\!\approx\!\hat{f}(\mathbf{x})\!=\!\!\!\!\sum_{(\mathbf{a},b )\in\hat{\mathbb{W}}}\hat{w}^{*}(\mathbf{a},b)g((\mathbf{a}^{\top}\mathbf{x}\!-\!b)\! \!\mod P), \tag{25}\]
where we write \(\hat{\mathbf{w}}^{*}\coloneqq(\hat{w}^{*}(\mathbf{a},b)\in\mathbb{R}:(\mathbf{a},b)\in \hat{\mathbb{W}})\). Each sampling provides parameter \((\mathbf{a},b)\in\hat{\mathbb{W}}\) of each node in the hidden layer of this subnetwork but not the value of \(\hat{w}^{*}(\mathbf{a},b)\). Once we fix \(\{g((\mathbf{a}^{\top}\mathbf{x}-b)\!\!\mod P):(\mathbf{a},b)\in\hat{\mathbb{W}}\}\) of the subnetwork, we then train \(\hat{\mathbf{w}}^{*}\) efficiently by the established procedure of convex optimization such as stochastic gradient descent (SGD) (Harvey et al., 2019), using the \(M\) examples. In this way, we achieve our task (21) with trainability.
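Putting the pieces together, the following end-to-end NumPy sketch emulates the whole procedure classically on the setting of Sec. 4.4 (\(D=1\), \(P=127\), a sine target, uniform \(\hat{p}_{\mathrm{data}}\)); the hyperparameters \(\lambda\), \(\Delta\), and \(N\) are our illustrative choices, exact categorical sampling stands in for the quantum measurement, and least squares stands in for SGD. A uniform-sampling baseline (random features) is included to foreshadow the comparison in Sec. 4.4.

```python
import numpy as np

P, D = 127, 1
lam, Delta, N = 1e-4, 1e-6, 60   # illustrative hyperparameters
rng = np.random.default_rng(0)
xs = np.arange(P)
f = np.sin(2 * np.pi * xs / P)   # target function, as in Sec. 4.4
p_hat = np.full(P, 1 / P)        # uniform empirical distribution

g = np.maximum(xs - P // 2, 0).astype(float)  # ReLU on Z_P, normalized per Eq. (5)
g -= g.mean(); g /= np.linalg.norm(g)

# design matrix over all P^2 nodes: Phi[x, a*P + b] = g((a*x - b) mod P)
a_all, b_all = np.repeat(xs, P), np.tile(xs, P)
Phi = g[(np.outer(xs, a_all) - b_all) % P]

# ridge solution of Eq. (18) in dual form (only a P x P linear solve)
A = np.sqrt(p_hat)[:, None] * Phi
w_star = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(P), np.sqrt(p_hat) * f)

p_opt = w_star ** 2 / (w_star ** 2 + Delta)
p_opt /= p_opt.sum()             # optimized distribution, Eq. (22)

def risk(nodes):
    """Least-squares fit of the subnetwork (25) on the sampled nodes,
    returning the empirical risk (21) against the full network's output."""
    sub = Phi[:, np.unique(nodes)]
    w_hat, *_ = np.linalg.lstsq(sub, Phi @ w_star, rcond=None)
    return np.mean((Phi @ w_star - sub @ w_hat) ** 2)

opt = rng.choice(P * P, size=N, p=p_opt)   # N "measurement shots"
uni = rng.choice(P * P, size=N)            # random-features baseline
print(risk(opt), risk(uni))                # optimized sampling is typically far lower
```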
The following theorem guarantees \(\Delta\) and \(N\) required for achieving our task (21). By combining Theorems 4.1 and 4.2, i.e., substituting the bounds (26) on \(N\) and \(\Delta\), the overall runtime of our algorithm is \(\widetilde{O}(N\times\frac{D}{\lambda\Delta}\gamma)=\widetilde{O}(\frac{D}{\lambda\epsilon^{1+3/(2\beta)}})\), dominated by the \(N\) repetitions of the sampling in Theorem 4.1. A comparison with classical algorithms analogous to our sampling-based approach is made in Sec. 4.4. See Appendix E for proof.
**Theorem 4.2** (Bounds for finding winning ticket of neural networks).: _Given \(\epsilon,\delta>0\), there exist \(\Delta\) and \(N\) satisfying_
\[\Delta=\Omega\left(\epsilon^{1+\frac{1}{\beta}}\right),\ N=O\left(\epsilon^{- \frac{1}{2\beta}}\log\left(\epsilon^{-1}\delta^{-1}\right)\right), \tag{26}\]
_such that the algorithm described above returns a subnetwork \(\hat{f}\) of the neural network \(\mathcal{S}[w_{\lambda}^{*}]\) with the number of nodes in the hidden layer of \(\hat{f}\) smaller than \(N\), and \(\hat{f}\) achieves the task of approximating \(\mathcal{S}[w_{\lambda}^{*}]\) to \(O(\epsilon)\) in (21) with high probability greater than \(1-\delta\)._
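For concreteness, under the decay assumption (19) with \(\beta=1\) and a target accuracy \(\epsilon=10^{-2}\), the bounds (26) give \(\Delta=\Omega(\epsilon^{2})=\Omega(10^{-4})\) and \(N=O(\epsilon^{-1/2}\log(\epsilon^{-1}\delta^{-1}))=O(10\log(10^{2}\delta^{-1}))\) sampled nodes; the size of the winning ticket is thus polynomial in \(1/\epsilon\) and independent of \(D\), in contrast to the \(P^{D+1}\) nodes of the large original network.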
### Advantage of using quantum ridgelet transform
We numerically demonstrate the advantage of our algorithm for winning the lottery ticket of the neural network in Theorem 4.2. For a fair comparison between our algorithm and a similar approach for classical algorithms, recall that the essential idea of our algorithm is to avoid computationally hard optimization of the initial neural network by sampling the nodes in its hidden layer to decide the basis functions \(\{g((\boldsymbol{a}^{\top}\boldsymbol{x}-b)\ \mathrm{mod}\ P):(\boldsymbol{a},b)\in\hat{\mathbb{W}}\}\) in (25), followed by efficiently training their coefficients \(\hat{\boldsymbol{w}}^{\star}\) via convex optimization. This idea can be regarded as a generalized form of random features by Rahimi & Recht (2008, 2009), where one randomly samples feature maps, i.e., \(\{g((\boldsymbol{a}^{\top}\boldsymbol{x}-b)\ \mathrm{mod}\ P)\}\), and then finds their coefficients by convex optimization. The difference is that we use the optimized distribution \(p_{\lambda,\Delta}^{*}(\boldsymbol{a},b)\) depending on \(f\) and \(\hat{p}_{\mathrm{data}}\), but the random-features approach conventionally performs sampling from a distribution independent of \(f\) and \(\hat{p}_{\mathrm{data}}\), e.g., a uniform distribution \(p_{\mathrm{uniform}}(\boldsymbol{a},b)\coloneqq\frac{1}{P^{D+1}}\). Note that a more recent work by Bach (2017) has proposed to sample optimized random features depending on the data distribution (but still not on \(f\) itself), and this sampling is also efficiently achievable with a quantum algorithm shown by Yamasaki et al. (2020); Yamasaki & Sonoda (2021); however, without QRT in this work, it would not be straightforward to apply such techniques to neural networks. Despite this difference, a quantitative advantage of our algorithm over the random features would still be unclear without numerical simulation, due to the non-orthogonality of the basis functions \(\{g((\boldsymbol{a}^{\top}\boldsymbol{x}-b)\ \mathrm{mod}\ P)\}\) used for the neural network.
We numerically show that our algorithm can find a subnetwork achieving a significantly better empirical risk than that obtained from the random features, as illustrated in Fig. 1. In our numerical experiment, choosing \(D=1\) and \(P=127\), we set the function \(f\) to be learned as a sine function, the empirical distribution as the uniform distribution \(\hat{p}_{\mathrm{data}}(\boldsymbol{x})=\frac{1}{P^{D}}\), and the activation function \(g\) and the ridgelet function \(r\) as ReLU. The sampling from \(p_{\lambda,\Delta}^{*}\) was classically simulated via rejection sampling, and convex optimization of \(\hat{w}^{*}\) was solved by MOSEK (ApS, 2022) and YALMIP (Lofberg, 2004). For \(N\leqq 120\), we plotted the achievable empirical risk with \(N\) repetitions of the sampling from \(p_{\lambda,\Delta}^{*}\) and that from \(p_{\mathrm{uniform}}\). See Appendix F for detail. The advantage of our algorithm in finding a sparse trainable subnetwork over the random features can be orders of magnitude in terms of the empirical risk achievable by the subnetwork. We emphasize that our classical simulation using rejection sampling of \(p_{\lambda,\Delta}^{*}\) is not scalable as \(D\) increases; by contrast, our results make it possible to retain this advantage in higher dimensions \(D\gg 1\) if we can use quantum computation for accelerating the task.
## 5 Conclusion
We have formulated the discrete ridgelet transform that can be characterized via the Fourier slice theorem and can represent any function exactly in the discretized domain. Furthermore, as a fundamental subroutine for quantum machine learning (QML), we have constructed quantum ridgelet transform (QRT), a quantum algorithm for applying the \(D\)-dimensional discrete ridgelet transform to a quantum state efficiently in linear time \(\widetilde{O}(D)\). We have also clarified an application of QRT for finding a sparse trainable subnetwork of a large-scale neural network to approximate a function to be learned, opening an efficient way to demonstrate the lottery ticket hypothesis. These results reveal a promising use of QML to accelerate the tasks for classical neural networks. Also from a broader perspective, our quantum algorithms may need a fault-tolerant quantum computer that is actively under development, and our achievement lays a solid theoretical foundation for further hardware development and social implementation toward realizing quantum computation.

Figure 1: The empirical risks achievable with the subnetworks of the large original neural network found by \(N\) repetitions of sampling from the optimized distribution \(p_{\lambda,\Delta}^{*}(\boldsymbol{a},b)\) in (22) via our algorithm in Theorem 4.2 (blue thick line), and that from the uniform distribution via random features (red dashed line). Each line represents the average over \(20\) executions of the algorithms, while each error bar represents the unbiased estimation of the standard deviation for these executions. The advantage of using our algorithm over the simple application of the random features can be orders of magnitude in terms of the empirical risk in this regime.
## Acknowledgements
Hayata Yamasaki was supported by JSPS Overseas Research Fellowship, JST PRESTO Grant Number JPMJPR201A and MEXT Quantum Leap Flagship Program (MEXT QLEAP) JPMXS0118069605, JPMXS0120351339. Sathyawageeswar Subramanian was supported by a Royal Commission for the Exhibition of 1851 Research Fellowship in Science and Engineering. Sho Sonoda was supported by JST PRESTO Grant Number JPMJPR2125.
|
2302.03862 | CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for
Emerging Memories-Based Deep Neural Networks | Deep Neural Networks (DNNs) have emerged as the most effective programming
paradigm for computer vision and natural language processing applications. With
the rapid development of DNNs, efficient hardware architectures for deploying
DNN-based applications on edge devices have been extensively studied. Emerging
Non-Volatile Memories (NVMs), with their better scalability, non-volatility and
good read performance, are found to be promising candidates for deploying DNNs.
However, despite the promise, emerging NVMs often suffer from reliability
issues such as stuck-at faults, which decrease the chip yield/memory lifetime
and severely impact the accuracy of DNNs. A stuck-at cell can be read but not
reprogrammed, thus, stuck-at faults in NVMs may or may not result in errors
depending on the data to be stored. By reducing the number of errors caused by
stuck-at faults, the reliability of a DNN-based system can be enhanced. This
paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement
Techniques to enhance the reliability of NVM-based DNNs in the presence of
stuck-at faults. A data block remapping technique is used to reduce the impact
of stuck-at faults on DNNs accuracy. Additionally, by performing bit-level
criticality analysis on various DNNs, the critical-bit positions in network
parameters that can significantly impact the accuracy are identified. Based on
this analysis, we propose an encoding method which effectively swaps the
critical bit positions with that of non-critical bits when more errors (due to
stuck-at faults) are present in the critical bits. | Thai-Hoang Nguyen, Muhammad Imran, Jaehyuk Choi, Joon-Sung Yang | 2023-02-08T03:39:11Z | http://arxiv.org/abs/2302.03862v1 | CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for Emerging Memories-Based Deep Neural Networks
###### Abstract
Deep Neural Networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications. With the rapid development of DNNs, efficient hardware architectures for deploying DNN-based applications on edge devices have been extensively studied. Emerging Non-Volatile Memories (NVMs), with their better scalability, non-volatility and good read performance, are found to be promising candidates for deploying DNNs. However, despite the promise, emerging NVMs often suffer from reliability issues such as stuck-at faults, which decrease the chip yield/memory lifetime and severely impact the accuracy of DNNs. A stuck-at cell can be read but not reprogrammed, thus, stuck-at faults in NVMs may or may not result in errors depending on the data to be stored. By reducing the number of errors caused by stuck-at faults, the reliability of a DNN-based system can be enhanced. This paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement Techniques to enhance the reliability of NVM-based DNNs in the presence of stuck-at faults. A data block remapping technique is used to reduce the impact of stuck-at faults on DNNs accuracy. Additionally, by performing bit-level criticality analysis on various DNNs, the critical-bit positions in network parameters that can significantly impact the accuracy are identified. Based on this analysis, we propose an encoding method which effectively swaps the critical bit positions with that of non-critical bits when more errors (due to stuck-at faults) are present in the critical bits. Experiments of CRAFT architecture with various DNN models indicate that the robustness of a DNN against stuck-at faults can be enhanced by up to \(10^{5}\) times on CIFAR-10 dataset and up to 29 times on ImageNet dataset with only a minimal amount of storage overhead i.e., 1.17%. Being orthogonal, CRAFT can be integrated with existing fault-tolerance schemes to further enhance the robustness of DNNs against stuck-at faults in NVMs.
Deep learning hardware, Emerging Memories, Fault-Tolerance, Neural Networks, Stuck-at Faults
## I Introduction
Deep Neural Networks (DNNs), a subset of Machine Learning (ML) algorithms, have demonstrated impressive effectiveness in various applications such as computer vision, natural language processing, big data analysis, etc. A typical DNN consists of multiple hidden layers sandwiched between an input layer and an output layer. This hierarchical design allows DNNs to solve complex programming tasks that appear to be infeasible with conventional programming approaches. However, despite the potential, DNNs often require an enormous amount of computational power and hardware overhead, which makes it difficult to deploy them in real-time computing applications often running on mobile devices. As a result of the rapid development of DNNs, there is an enormous increase in the demand for efficient and scalable hardware architectures for DNN deployment. To address the high computational cost of DNNs, various methodologies have been proposed to achieve hardware-efficient architectures for DNNs [1, 2]. These techniques often focus on reducing the storage required by DNNs through network compression [1] and precision reduction [2]. Such methods have proven to be effective; however, DNNs often need to sacrifice accuracy in exchange for a reduced implementation cost in resource-constrained devices.
Memory plays a key role in applications involving large amounts of data, such as DNNs. Current charge-based memory technologies such as Dynamic Random-Access Memory (DRAM), Static Random-Access Memory (SRAM) and Flash are facing challenges in continuing technology scaling [4]. Moreover, as the technology scales down, conventional memory technologies become highly prone to charge leakage, which makes them a less attractive choice for data-intensive DNN applications. To cope with the issues posed by the conventional technologies, several emerging non-volatile memory technologies (NVMs) such as Resistive Random-Access Memory (ReRAM) and Phase Change Memory (PCM) have been extensively investigated over the past decade. With better scaling potential, better read performance and non-volatility [5], emerging NVMs are considered to be potential replacements for the current charge-based memory technologies.
Besides being used for storage, thanks to their analog characteristics, emerging NVMs have also played a major role in designing high-performance and energy-efficient In-Memory Computing (IMC) based accelerators for DNNs [6, 7, 8]. Such accelerators use emerging NVM cells (i.e., ReRAM, PCM) to store the network's parameters and perform the matrix-vector
multiplication in-place by organizing NVM cells in a crossbar manner. With in-place computation, NVM-based IMC architectures eliminate the data movement between memory and separate computing units, which is very costly in conventional von Neumann architectures. These features make emerging memories an ideal choice for future hardware implementations of DNNs.
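As a rough illustration of the idea (our simplified, idealized model, not a circuit from the cited accelerators), a crossbar performs a matrix-vector product in one step: weights are mapped to cell conductances, typically as a differential pair of non-negative columns for signed values, input activations are applied as voltages, and Kirchhoff's current law sums the per-cell currents \(I=G\cdot V\) along each column.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # signed weights of one layer
v = rng.normal(size=8)               # input activations applied as voltages

# map signed weights onto two non-negative conductance arrays with W ~ (G+ - G-)
g_max = np.abs(W).max()              # full-scale conductance normalization
G_pos = np.clip(W, 0, None) / g_max
G_neg = np.clip(-W, 0, None) / g_max

# column currents sum the per-cell I = G * V contributions (Ohm + Kirchhoff),
# i.e., one matrix-vector multiplication per read step
i_out = (G_pos - G_neg) @ v * g_max
print(np.allclose(i_out, W @ v))     # True for this idealized device model
```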
Despite their promising features, emerging NVMs often suffer from hard errors (i.e., stuck-at faults) [9, 10, 11] due to their low endurance and immature manufacturing process. A stuck-at fault occurs when the resistance/conductance state of an emerging NVM cell cannot be changed by a write operation. A stuck-at cell can still be read but not reprogrammed; thus, errors caused by stuck-at faults arise only when the stuck-at cell's state is not aligned with the desired data. Building on this insight, several fault-tolerance techniques have been proposed to increase the lifetime of an emerging NVMs-based memory system [12, 13, 14, 15, 16]. In NVMs-based DNN architectures, despite the inherent fault tolerance of DNNs, a small number of stuck-at NVM cells (especially those corresponding to the critical bits) can still cause a catastrophic loss in DNN accuracy [17, 18]. Therefore, it is necessary to develop effective fault-tolerance enhancement techniques to mitigate such errors in NVMs-based DNN architectures.
Existing works on tolerating stuck-at faults in emerging NVMs have targeted neuromorphic applications [19, 20, 21, 22, 23]. Despite being effective, these techniques often rely on an expensive retraining process of DNNs or the use of frequent auxiliary bits, leading to high storage overhead. On the other hand, several architectural techniques have also been proposed to tackle the problem of stuck-at errors in emerging NVMs in general [12, 13, 14, 15, 16]. Such techniques also require a large amount of hardware storage and complex encoding/decoding mechanisms, making them infeasible for resource-constrained hardware with real-time performance requirements. To address the problems of existing works, we propose multiple lightweight yet effective techniques, collectively named CRAFT, to tolerate errors caused by stuck-at faults in NVMs-based DNN architectures. The first technique, called Intra-Block Address Remapping, effectively remaps the weights inside a block of data so that the impact of stuck-at faults on DNN accuracy is minimized. The second method addresses the problem of single-bit errors by simply inverting the data in the data block. Results of these two techniques have been presented in our earlier work [24]. To further enhance the robustness of NVMs-based DNNs, a novel Criticality-Aware Bits Switching method is proposed, which further enhances the DNN's accuracy in the presence of stuck-at faults by addressing bit criticality in DNNs.
The rest of the paper is organized as follows. Section II covers the background of DNNs, emerging NVMs and stuck-at faults in emerging NVMs. Related works are presented in Section III. Section IV introduces the proposed Criticality-Aware Fault-Tolerance Enhancement Techniques (CRAFT). Finally, we evaluate the effectiveness of CRAFT against existing techniques in Section V. Section VI concludes the paper.
## II Background
### _Deep Neural Networks (DNNs)_
Artificial neural networks (ANNs) are computer algorithms inspired by the biological brains of animals. A layer of an ANN often consists of multiple nodes (i.e., neurons) connected to the next layer through multiple connections (i.e., synapses/weights). Typical ANNs are made up of an input layer, an output layer and multiple hidden layers in between. A subset of ANNs, the Deep Neural Network (DNN), is an ANN with a large number of hidden layers (hence the name _"Deep"_ Neural Network). Over the last decade, DNNs have made major breakthroughs in various fields, rendering conventional programming approaches in these domains obsolete. In particular, in the field of computer vision, Convolutional Neural Networks (CNNs) [3, 25] have attracted a lot of interest due to their exceptional effectiveness.
Fig. 1 shows a typical CNN (ResNet-18) architecture. A CNN is often composed of four types of layers: convolutional, fully connected (FC), pooling and normalization. The convolutional layer is often used for extracting features of the input data by convolving the input with multiple relatively small-size filters. The output of the convolutional layer is then fed into the pooling layer to reduce the spatial size of the representation. In the ResNet architecture, as shown in the figure, the output data is propagated through multiple residual blocks consisting of two convolutional layers and a shortcut connection. Such blocks allow the CNN to increase its depth (i.e., number of layers) while preventing the vanishing/exploding gradient effect. At the end of the network, data undergoes a fully connected layer for classification followed by a softmax layer which outputs the probability of each class.
The hierarchical structure of DNNs allows them to outperform conventional programming algorithms in solving complex problems by breaking them into simpler ones. However, DNNs require immense hardware resources for storing parameters and performing computations. This makes it extremely challenging to deploy a large-scale DNN on resource-constrained hardware like mobile devices. To tackle this challenge, several techniques have been proposed to reduce the network size, thus making DNNs easier to deploy [1, 26]. The cost of using these techniques is the accuracy loss of DNNs. Depending on the application, the accuracy loss may or may not be acceptable. In parallel with these approaches, several research efforts have investigated emerging non-volatile memories (NVMs) to provide high bandwidth, storage density and non-volatility for DNN deployment. With such advantages over traditional charge-based memories, emerging NVMs are seen as ideal candidates for efficient and high-performance DNN applications.

Fig. 1: Typical Convolutional Neural Network architecture (ResNet-18 [3])
### _Emerging Non-Volatile Memories (NVMs)_
Prominent emerging NVMs that are well-suited for DNN-based applications include Phase Change Memory (PCM) and Resistive RAM (ReRAM) [10]. Phase Change Memory consists of a chalcogenide phase-change material (e.g., Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\)) sandwiched between two electrodes. Data can be stored in PCM by modulating the phase-change material's state, which is either crystalline or amorphous. The PCM cell has a low resistance in the crystalline state and a high resistance in the amorphous state. ReRAM, on the other hand, consists of a conducting material (typically HfO\({}_{2}\)) placed in between two electrodes [27]. The resistance state of a ReRAM cell can be changed by altering the concentration of defects in the conductive filament. A high defect concentration toward the bottom electrode changes the state of the ReRAM cell to the low-resistance state, while a high defect concentration toward the top electrode leads to the high-resistance state. Both PCM and ReRAM offer non-volatility, high switching speed, and better endurance compared to existing Flash memory [10]. However, due to certain intrinsic characteristics of the underlying technology and an immature manufacturing process, these memories often face reliability issues such as hard errors (stuck-at faults) [12], resistance drift [28, 29, 30, 31], and write disturbance [19, 32]. This poses a challenge when employing emerging NVMs for DNN applications.
### _Stuck-at Faults and DNNs Accuracy_
#### II-C1 Stuck-at Faults in Emerging NVMs
Stuck-at faults are a type of hard fault in emerging NVMs where the resistance state of an NVM cell is locked at a certain state. When the cell resistance is fixed at the low-resistance state, the fault is regarded as Stuck-at-Zero (SA0); otherwise, if the cell resistance is stuck at the high-resistance state, the fault is considered Stuck-at-One (SA1). Depending on the data to be stored and the stuck-at state, a stuck-at fault may or may not cause an error in the system. Fig. 2 depicts the phenomenon of stuck-at faults in emerging NVMs. The first row shows the correct data expected to be stored in and read from the memory. The second row shows the location and state of the stuck-at faults in the memory, and the third row indicates the erroneous data read from the memory. As shown in the figure, the last two stuck-at fault locations do not introduce any error in the data because these cells' desired data is in line with their stuck-at states. On the other hand, a mismatch between the desired data and the stuck-at state causes an error, which unintentionally flips the corresponding bit. Therefore, by aligning the desired data with the stuck-at resistance state, the number of readout errors in the system can be reduced. Previous works have used this property of stuck-at faults to mitigate their impact on the system [14].
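To make this readout behavior concrete, the following is a minimal NumPy sketch of the fault model described above; the function name and array layout are ours for illustration, and a stuck cell simply returns its stuck state regardless of the written bit.

```python
import numpy as np

def read_with_stuck_at(data_bits, sa_mask, sa_values):
    """Model a readout from an NVM word containing stuck-at cells.

    data_bits: 1-D array of 0/1 values written to the memory.
    sa_mask:   boolean array, True where a cell is stuck.
    sa_values: 0/1 array giving the stuck state (0 = SA0, 1 = SA1).
    """
    # A stuck cell always returns its stuck state, so an error occurs
    # only where the desired bit disagrees with that state.
    read_bits = np.where(sa_mask, sa_values, data_bits)
    n_errors = int(np.sum(read_bits != data_bits))
    return read_bits, n_errors

# Example: cell 1 is SA1 and flips a stored 0, while cell 6 is SA0 and
# happens to match the stored 0, so only one readout error occurs.
data = np.array([1, 0, 1, 1, 0, 1, 0, 0])
mask = np.array([0, 1, 0, 0, 0, 0, 1, 0], dtype=bool)
vals = np.array([0, 1, 0, 0, 0, 0, 0, 0])
print(read_with_stuck_at(data, mask, vals))
```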
According to experiments using a real fabricated ReRAM array in [33], stuck-at zeros (SA0) and stuck-at ones (SA1) can be clustered in an entire column/row or distributed randomly in a ReRAM array. This stochastic nature of SAFs makes them hard to model using any specific distribution; thus, many previous studies [19, 23] have chosen the uniform distribution to model SAFs to reduce complexity during the fault estimation process. The same consideration is also applied in our paper. Furthermore, since the proposed method does not focus on any specific type of eNVMs or SAFs, using the uniform distribution for evaluations of SAFs allows CRAFT to be generalized and applicable to any use case.
#### II-C2 Impact of Stuck-at Faults on Accuracy of DNNs
A small number of stuck-at faults, especially in the critical bits, can cause a catastrophic change in DNN models' accuracy [18, 23]. Fig. 3 shows the impact of stuck-at faults on the accuracy of different DNN models evaluated on CIFAR-10 (a popular image dataset for computer vision). As illustrated in the figure, when the bit error rate (BER) of stuck-at faults increases beyond a certain point, the classification error of the DNN model increases exponentially. For example, for ResNet-18 (a state-of-the-art Convolutional Neural Network) [3], when the BER increases beyond \(2\times 10^{-6}\), the classification error stays at around the 90% level, which is the same as if the network were randomly guessing the results regardless of the input and trained parameters. Similar results are observed for different DNN models and datasets. Experimental details with additional results are discussed in Section V.
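Curves of this kind are typically produced by injecting bit errors into the stored weight representation at a given BER. The snippet below is a simplified sketch of such an injection under the uniform-fault assumption adopted above; for brevity it samples the erroneous bits directly rather than sampling stuck cells and checking for data mismatches, and all names are ours.

```python
import numpy as np

def inject_bit_errors(weights, ber, rng):
    """Flip uniformly chosen bits of float32 weights at bit-error-rate `ber`."""
    bits = weights.astype(np.float32).view(np.uint32)   # private copy of the bits
    total_bits = bits.size * 32
    n_err = rng.binomial(total_bits, ber)
    pos = rng.choice(total_bits, size=n_err, replace=False)
    # XOR a one-hot mask into the word that owns each faulty bit position.
    np.bitwise_xor.at(bits, pos // 32,
                      np.left_shift(np.uint32(1), (pos % 32).astype(np.uint32)))
    return bits.view(np.float32)

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)
w_faulty = inject_bit_errors(w, ber=1e-3, rng=rng)
# Evaluating a model with `w_faulty` in place of `w` yields curves like Fig. 3.
```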
## III Related Works
Several memory-centric works have been proposed to address the problem of stuck-at faults in emerging NVMs. Error-Correcting Pointers (ECP) [12] detect and correct stuck-at faults by keeping the stuck-at cell address and data in additional storage. [14] presents a method to enhance the correction capability of an ECC by a simple inversion operation. SAFER [13] dynamically partitions the data such that only a single error is present in each partition and then uses a single-bit error-correction code for recovery. [34] proposes a method to reduce the storage overhead of ECP by allocating different numbers of error-correction entries to different lines according to the number of hard errors present
Fig. 2: Example of Stuck-at faults in emerging NVMs
in the line. Although effective in tolerating stuck-at faults in eNVM-based systems, these works often incur a large storage/hardware overhead, which is infeasible for most resource-constrained edge devices. The proposed encoding techniques add minimal hardware overhead (1.17% in terms of storage) to the system yet still efficiently enhance the robustness of DNNs against SAFs.
Apart from memory-centric approaches, several works have been proposed to enhance the stuck-at fault tolerance of DNN-based systems. The work in [19] exploits the self-healing capability of DNNs and proposes a retraining method to reduce the impact of stuck-at cells in DNN accelerators. Such a scheme can be effective in recovering the accuracy degradation caused by stuck-at errors; however, re-training is required when implementing these techniques, which is difficult once the DNN has been deployed to edge devices. [35] redesigns the traditional error-correcting output code of DNNs using a collaborative logistic classifier, thus enhancing the DNN's robustness against stuck-at faults. Despite being effective, this work also requires re-training (i.e., fine-tuning) to recover the accuracy impacted by SAFs. The need for re-training does not apply to the fault-tolerance techniques proposed in this paper, since they are designed specifically for DNN inference on resource-constrained edge devices.
More relevant to our proposed techniques, many data-remapping and redundancy-based techniques have been proposed [20, 21, 22]. Specifically, [20] introduces a mapping technique and redundant crossbar arrays to compensate for the accuracy loss of a DNN model caused by stuck-at faults in the ReRAM crossbar array. Since this technique utilizes a redundant crossbar array of the same size as the original array, its storage overhead and energy consumption are considerably large. [21] classifies weights according to their criticality to the model's accuracy, remaps the significant weights to fault-free memory cells, and fine-tunes the DNN model to enhance the accuracy. By relying on the criticality of DNNs to address SAFs, such work is able to ease the re-training process and reduce storage overhead. Nonetheless, the storage overhead incurred by this technique can still be as large as 5%, which is much higher than that of our proposed technique. The method in [23] uses matrix transformations to make the weights in the ReRAM crossbar array more robust to stuck-at faults. Similar to other redundancy-based methods, [23] also adds a significant amount of hardware overhead compared to the proposed technique; specifically, such a scheme can come at the expense of 8.19\(\times\) power consumption and 9.23\(\times\) area overhead. The technique proposed in this paper adds only six additional bits for encoding/decoding, making it highly efficient in terms of storage/energy overhead.
Existing memory-centric methods as well as DNN-focused techniques often require either a large amount of additional hardware or a costly retraining process. In this paper, we propose a set of techniques that incur only minimal hardware overhead yet effectively enhance the fault-tolerance capability of DNNs in the presence of stuck-at faults. Moreover, the proposed techniques are orthogonal to the existing methods and can be implemented together with them to further enhance the robustness of DNNs.
## IV CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques
The state of a stuck-at cell can be detected (by a read operation) but cannot be re-programmed to a different state. Leveraging this fact, we present multiple remapping and encoding techniques, collectively named CRAFT, to reduce the number of stuck-at errors in the DNN parameters (weights and biases). The proposed fault-tolerance techniques include _Intra-Block Address Remapping_, _Weight Inversion_ and _Criticality-Aware Bits Switching_.
### _Intra-Block Address Remapping_
The parameters (weights and biases) of a DNN are often stored as a group in a data block. For instance, a typical 64B cache-line-sized data block can store up to sixteen 32-bit floating-point weights/biases. The proposed remapping method operates within a typical data block to preserve memory access locality. Fig. 4 shows a simple example of 2-bit address remapping using the proposed _Intra-Block Address Remapping_ technique. An XOR operation on the address of each weight within a data block remaps the weight to a different location within the same block. This remapping reduces the number of stuck-at errors by increasing the number of stuck-at cell states aligned with the desired bits. As shown in the figure, without remapping, the number of errors due to stuck-at cells is five. After the proposed remapping technique (XOR the address with 01) is applied, the number of errors due to stuck-at faults is reduced to only one. By considering the mappings obtained
Fig. 4: Example of Intra-Block Address Remapping technique
Fig. 3: Impact of stuck-at faults on accuracy of different DNN models (ResNet-18, VGG-19 and MobileNet-V2 on CIFAR-10)
through different XOR operations, the number of errors can be further reduced. The proposed method finally chooses the mapping which causes minimal impact (instead of the fewest errors, as explained in the next section) on the DNN's accuracy. Fig. 5 illustrates the proposed remapping technique using a 4-bit XOR operation. As shown in the figure, sixteen different mappings can be obtained using a 4-bit XOR operation.
#### IV-A1 Minimizing the Impact of Faults on DNN's Accuracy
Errors that occur in the significant bits of DNN parameters are more harmful to the network accuracy than errors in insignificant bits. This can be easily understood using the simple example shown in Fig. 6. As illustrated in the figure, frequent stuck-at faults in insignificant bit positions result in only a small change from the actual weight value, while fewer faults in significant bit positions cause a greater change to the weight value. This is indicated by measuring the deviation in decimal value of the erroneous weight from that of the actual weight. As shown in Fig. 6, three stuck-at faults in insignificant bit positions result in a deviation of 1 (in decimal), while a single stuck-at fault in a significant bit position causes a deviation of 32 (in decimal). In floating-point representation, this criticality difference is even more evident due to the greater difference in significance of the exponent bits as compared to the other bit positions. Furthermore, in the case of DNNs, certain weights are more critical than others; thus, merely minimizing the number of faults in the memory would not always be helpful. Therefore, the proposed remapping method chooses to minimize the deviation (from the original value) of the weights instead of simply minimizing the number of stuck-at faults present in memory. The proposed deviation minimization can be formulated as:
\[w^{\prime}\gets w_{r}\quad\text{s.t.}\quad\delta=\min_{\delta\in\Delta}\sum_{i=1}^{N}\left|w_{ri}-w_{oi}\right| \tag{1}\]
where \(w^{\prime}\) is the final set of weights used for inference and \(w_{r}\) is the remapped weight block that attains the minimum net deviation \(\delta\); \(w_{ri}\) and \(w_{oi}\) refer to the \(i\)-th weight after remapping (obtained while accounting for stuck-at faults) and the \(i\)-th original weight, respectively. \(N\) is the number of weights within the selected data block, and \(\Delta\) is the set of net deviations over all possible remappings of the weights.
The proposed _Intra-Block Address Remapping_ technique with the deviation minimization algorithm is illustrated in Fig. 7. For simplicity of illustration, four 4-bit weights with a 2-bit XOR operation for address remapping are depicted in the example. Five stuck-at faults (three SA1 and two SA0) are randomly distributed across the memory block, as shown in the figure. Initially, without any remapping, the readout data contains five errors, which leads to a net deviation of 13 (in decimal). Using a 2-bit XOR operation, four possible mappings can be obtained. As illustrated in the figure, the mapping which uses the XOR 11 operation leads to the minimum net deviation (= 2 in decimal) from the actual weights. Therefore, the proposed
Fig. 5: Intra-Block Address Remapping for 16 weights using 4-bit XOR operation
Fig. 6: Example of fault significance in a data block. Three errors in insignificant bit positions result in insignificant change in value while one error in significant bit position causes large deviation from the actual value.
Fig. 7: An example of _Intra-Block Address Remapping_ technique with deviation minimization. Net deviation for each mapping is calculated by summing up all weight value deviations in the data block. Mapping with minimum net deviation is chosen as minimal mapping for inference.
technique chooses this mapping as the minimal mapping to store weights in the stuck-at-faults-prone emerging NVMs.
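As a concrete illustration of the search in Eq. (1), the following Python sketch enumerates all XOR keys for a block of quantized weights and picks the mapping with minimal net deviation. It reuses the readout model from Sec. II-C1; the block size, bit width, and all function names are ours, and a hardware implementation would of course realize this with simple address logic rather than loops.

```python
import numpy as np

def apply_stuck_at(bits, sa_mask, sa_vals):
    # Readout model: a stuck cell always returns its stuck state.
    return np.where(sa_mask, sa_vals, bits)

def block_deviation(weights_bits, sa_mask, sa_vals, key, n_bits=8):
    """Net deviation (Eq. 1) of a block after XOR-remapping addresses by `key`.

    weights_bits: (N, n_bits) array of 0/1 rows (MSB first), one per weight;
    N must be a power of two so that XOR with `key` permutes the addresses.
    """
    n = weights_bits.shape[0]
    stored = np.empty_like(weights_bits)
    for a in range(n):
        stored[a ^ key] = weights_bits[a]      # write through the remapping
    read = apply_stuck_at(stored, sa_mask, sa_vals)
    restored = np.empty_like(read)
    for a in range(n):
        restored[a] = read[a ^ key]            # undo the remapping on read
    place_value = 2 ** np.arange(n_bits - 1, -1, -1)
    per_weight = (restored - weights_bits) @ place_value
    return int(np.abs(per_weight).sum())

def best_mapping(weights_bits, sa_mask, sa_vals):
    # Exhaustively try every XOR key and keep the one with minimal deviation.
    devs = {k: block_deviation(weights_bits, sa_mask, sa_vals, k)
            for k in range(weights_bits.shape[0])}
    return min(devs, key=devs.get), devs

rng = np.random.default_rng(1)
W = rng.integers(0, 2, size=(16, 8))           # sixteen 8-bit weights
mask = rng.random(W.shape) < 0.05              # ~5% of cells stuck
vals = rng.integers(0, 2, size=W.shape)
key, devs = best_mapping(W, mask, vals)
print(key, devs[key], devs[0])                 # chosen key vs. no remapping
```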
### _Weight Inversion_
When a data block contains only a single stuck-at fault, the error caused by this fault can be tolerated by simply inverting the data block. Based on this intuition, to further enhance the error-tolerance of a DNN architecture, a simple weight inversion encoding is proposed. Fig. 8 illustrates an example of the proposed _Weight Inversion_ technique. As shown in the figure, an inversion operation is incorporated when there is a single stuck-at fault in the data block. After inversion and decoding, the read data results in zero error and zero deviation from the actual weights. The _Weight Inversion_ method is combined with the _Intra-Block Address Remapping_ technique by considering the possible mappings with original weight values as well as with the inverted weight values.
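Continuing the sketch above, inversion can simply be folded into the candidate search: each XOR key is tried with and without storing the bitwise complement (the complement is flipped back after readout, and the deviation magnitude computed in the complemented domain equals the one after decoding). This doubles the candidate set at the cost of one extra auxiliary bit; again, the function name is ours.

```python
def best_mapping_with_inversion(weights_bits, sa_mask, sa_vals):
    """Search 2*N candidates: every XOR key, with and without inversion."""
    best = (0, False, float("inf"))
    for key in range(weights_bits.shape[0]):
        for invert in (False, True):
            candidate = 1 - weights_bits if invert else weights_bits
            dev = block_deviation(candidate, sa_mask, sa_vals, key)
            if dev < best[2]:
                best = (key, invert, dev)
    return best  # (xor_key, invert_flag, net_deviation)
```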
### _Criticality-Aware Bits Switching_
The proposed _Intra-Block Address Remapping_ and _Weight Inversion_ techniques efficiently enhance the fault tolerance of DNNs by orders of magnitude, as shown by the evaluation results in Sec. V. However, by only minimizing the net deviation between the erroneous and the actual weight values, errors that are critical to DNNs and unmaskable by these techniques can still significantly impact the DNNs' accuracy. To address this, we introduce another encoding technique, incorporated on top of the first two, that focuses on minimizing errors in the critical bit positions of the DNNs. The bit-position criticality of DNNs against stuck-at faults is first analysed; based on the results of this analysis, the _Criticality-Aware Bits Switching_ technique is proposed.
#### IV-C1 Bit Position Criticality in DNNs
Certain bit positions in DNN parameters are more significant to the accuracy than the remaining bit positions. In general, as discussed in Sec. IV-A1, errors in the higher-order bit positions (MSBs) have a greater impact on DNN accuracy than errors in lower-order bit positions (LSBs). However, in order to design an efficient error-tolerance mechanism for DNNs, a quantification of bit-position criticality is also necessary. Fig. 9 shows an experiment in which each bit position of the DNN's parameters is randomly disturbed with stuck-at faults. The experiments are performed on ResNet-18 using the CIFAR-10 dataset, with the classification error corresponding to each bit position averaged over 100 iterations. DNN parameters in both floating-point and quantized precision are considered. Fig. 9(a) shows the bit-position criticality of the full-precision (32-bit) network and Fig. 9(b) illustrates the bit sensitivity of the unsigned 8-bit quantized network. The stuck-at bit error rate (BER) is fixed (arbitrarily) at \(10^{-3}\) for both networks.
As seen in the figures, errors in only a few MSBs can have a severe impact on the DNN's accuracy, while errors in the other bit positions have a negligible impact. Specifically, for the network with 32-bit floating-point parameters (Fig. 9(a)), the two MSB positions that cause a significant impact on the DNN's accuracy are the 30th and the 26th bit positions. The reason can be explained by considering the DNN's weight distribution. As shown in previous work [36], all weights in the DNN have a value less than 1; hence, the value of the 30th bit is always fixed at 0. This explains why a bit-flip error in the 30th bit causes a catastrophic accuracy degradation. Another bit position that can severely impact the DNN's accuracy is the second-zero-occurrence bit (SZOB) position (considering that the 30th bit is the first-zero-occurrence position). As reported in [36], the SZOB can be in the 25th, 26th, or 27th bit position depending on the network; e.g., the SZOB is found at the 26th bit position in 96% of the weights of AlexNet and VGG16. As illustrated in Fig. 9, the SZOB of ResNet-18 is found at the 26th position and thus causes a severe impact on the DNN's accuracy compared to other bit positions. It is also worth noting that bit-flip errors occurring in the sign bit (bit 31) show an insignificant influence on the accuracy. This is because most weights in a DNN have small values and thus incur only a small deviation when the sign is flipped. For example, if a weight value equals 0.05 and a sign-bit error changes it to -0.05, the deviation in absolute value is only 0.1, which can be tolerated by the DNN. The same holds when errors occur in the mantissa bits (bits 0-22), which can only produce a maximum deviation of 1.5. On the other hand, if an error occurs in an exponent bit of the weight, the deviation between the erroneous weight and the original weight can be as large as \(3.40\times 10^{38}\), which can severely impact the accuracy of the DNN, as in the case of bit 30 shown in Fig. 9(a).
For the 8-bit quantized network, because an unsigned 8-bit representation is used in this experiment, only the 7th bit position (MSB) causes a severe degradation in network accuracy. Note that the 32-bit floating-point network's accuracy deteriorates much more than that of the 8-bit quantized network.
Fig. 8: An example of the proposed Weight Inversion encoding technique
Fig. 9: Bit position criticality of DNNs against stuck-at faults. Each experiment is performed on ResNet-18 using the CIFAR-10 dataset. The classification error of each bit position is averaged over 100 iterations. (a) 32-bit Floating-Point network (b) 8-bit Quantized network
This is due to the difference in the representable range of each parameter in the network. Parameters in 8-bit quantized networks have a much smaller dynamic range than those of the full-precision network; hence a bit error can produce a far larger deviation in the full-precision case, leading to a higher drop in network accuracy. A similar observation has also been made in several previous works such as [17, 18], as well as in our evaluation results in Sec. V. The results shown in Fig. 9 are found to be consistent across different DNN models and datasets; therefore, the aforementioned observation applies to any configuration of DNNs. Based on this analysis of bit criticality, in the next section we propose a novel _Criticality-Aware Bits Switching_ technique for enhancing the stuck-at fault tolerance of DNNs.
#### IV-C2 Criticality-Aware Bits Switching
Since the previously introduced techniques address stuck-at faults at the data-block level, it is not guaranteed that errors in the critical bits of each weight will be tolerated. Therefore, the proposed _Criticality-Aware Bits Switching_ technique operates on each individual weight in order to minimize the number of critical stuck-at errors within each weight. Since applying bits switching to each individual weight independently would require more auxiliary bits to encode/decode the data, the proposed method uses one unified switching operation for all weights within a data block to keep the storage overhead minimal. By doing so, the method requires only one bit per (512-bit) block for encoding/decoding, which is a negligible hardware overhead. Nevertheless, the proposed method is flexible and can be applied in a more fine-grained manner with additional auxiliary bits to achieve greater robustness.
Fig. 10 depicts an example of the proposed _Criticality-Aware Bits Switching_ technique. For the sake of illustration, three 8-bit weights with three critical bit positions affected by stuck-at faults are shown. The proposed method rotates the four MSBs (\(7^{\text{th}}\), \(6^{\text{th}}\), \(5^{\text{th}}\), \(4^{\text{th}}\) bits) of each weight to the left and the four LSBs (\(3^{\text{rd}}\), \(2^{\text{nd}}\), \(1^{\text{st}}\), \(0^{\text{th}}\) bits) to the right. For example, in the first weight, the original data is "0111 0101" (in binary). When programmed to the emerging NVM, a stuck-at cell in the \(5^{\text{th}}\) bit position would cause a net deviation of \(2^{5}=32\) (in decimal), which in turn severely impacts DNN accuracy. After criticality-aware bits switching, the encoded data becomes "0101 0111" and the error caused by the stuck-at fault in the critical cell is eliminated when reading from the data block. Intuitively, by switching MSBs with LSBs, errors in the MSBs and LSBs are also switched, and thus the method significantly reduces the impact of errors in the MSBs. It is important to note that, through intra-weight rotation, errors in the LSBs can also increase. However, as discussed in the previous section, these errors do not have a large impact on DNN accuracy, and the overall deviation in weight value would still be smaller. Based on the observations in the previous section, for networks that use 8-bit quantized parameters, the four MSBs are switched with the four LSBs. For 32-bit floating-point networks, any MSB rotation greater than five bits is effective for MSB fault tolerance; in the proposed method, we choose to rotate the ten MSBs toward the LSBs in 32-bit weights to provide enough margin of safety. By combining the proposed _Criticality-Aware Bits Switching_ technique with net-deviation minimization, the robustness of DNNs can be enhanced significantly, as shown by the evaluation results in Sec. V.
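A plausible software model of this encoding is a circular rotation of each weight word, as sketched below; for the 8-bit case the rotation by four positions is exactly the nibble swap of Fig. 10 and is its own inverse, while the 32-bit rotation by ten is decoded by rotating back the other way. The constants and names here are ours.

```python
def rotate_bits(word, r, width):
    """Circularly rotate a `width`-bit word left by r positions, moving the
    r MSBs into the LSB positions; decode by rotating right (width - r left)."""
    mask = (1 << width) - 1
    word &= mask
    return ((word << r) | (word >> (width - r))) & mask

w = 0b0111_0101
enc = rotate_bits(w, 4, 8)        # -> 0b0101_0111, as in the example above
dec = rotate_bits(enc, 4, 8)      # the 8-bit nibble swap is its own inverse
assert dec == w

w32_enc = rotate_bits(0x3F000000, 10, 32)   # 32-bit floats: rotate 10 MSBs
w32_dec = rotate_bits(w32_enc, 22, 32)      # decode: rotate the other way
assert w32_dec == 0x3F000000
```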
### _Implementation and Overhead_
Since most DNN applications are trained once and then deployed to multiple edge devices, the emerging NVMs can be programmed after the training process is done. This eliminates any deployment-time overhead of the proposed system. When used only for inference, the proposed techniques do not have any significant impact on performance because the remapping and encoding operations add only a few logic-gate delays to the critical path.
Fig. 11 illustrates the implementation of the proposed CRAFT architecture. Specifically, Fig. 11(a) and Fig. 11(b) show the overall architecture and the remapping+encoding logic of CRAFT, respectively. During inference, the DNN parameters read from the emerging NVMs are remapped/decoded based on the mapping/encoding that leads to the minimal impact of stuck-at faults. As shown in Fig. 11(b), the _Intra-Block Address Remapping_ technique uses simple XOR logic to map the address to the new address, while the _Weight Inversion_ method uses a NOT operation to flip the data. The _Criticality-Aware Bits Switching_ technique uniformly rotates the weight data using simple rewiring logic. Being trivial, these remapping and encoding operations add negligible timing overhead during inference.
The exact storage overhead of CRAFT can be calculated by dividing the number of auxiliary bits by the total number of data bits. For a typical 64B data block and 32-bit floating-point DNN weights, CRAFT adds four auxiliary bits for address remapping, one bit for weight inversion, and one bit for weight rotation (bits switching). Thus, the proposed techniques in CRAFT incur only a \(6/512\approx 1.17\%\) storage overhead, which is negligible. The effectiveness of CRAFT can be further improved with a more fine-grained implementation using smaller data blocks for remapping/encoding, at the cost of an increase in the number of auxiliary bits.
## V Evaluation
The proposed stuck-at fault-tolerance techniques are evaluated using different experiments that consider various DNN models, datasets, and parameter configurations. In the following, details of the experimental setup are presented, followed by the evaluation results and discussion.
Fig. 10: Example of _Criticality-Aware Bits Switching_. MSBs positions are switched with LSBs positions so that the impact of errors in MSBs is minimized.
### _Experimental Setup_
The simulations of stuck-at faults in DNNs are performed using the PyTorch framework [37]. All simulations are run on an Intel(r) Xeon(r) CPU E5-1650 v4 with two Nvidia Titan XP GPUs having 24 GB of RAM. Table I lists the specifications of the evaluated DNN models and datasets. The proposed fault-tolerance enhancement techniques of the CRAFT architecture are evaluated on various state-of-the-art DNNs using popular datasets. The DNN models used in the experiments include _ResNet_, _VGG_, _MobileNet_, and _Inception_. The two datasets considered are _CIFAR-10_ and _ImageNet_. The _CIFAR-10_ dataset contains 60,000 RGB images, of which 50,000 are for training and 10,000 for testing. Training-set images are preprocessed by padding 4 pixels along the height and width and randomly cropping a 32\(\times\)32 patch. Furthermore, a random horizontal flip is also performed on the training-set images. DNN models evaluated on the _CIFAR-10_ dataset (i.e., ResNet-18, VGG-19, and MobileNet-V2) are trained using stochastic gradient descent with 0.9 momentum. The cross-entropy loss is used as the objective function to classify the ten classes of input images [3]. The learning rate is kept constant at 0.1 during the training process.
The proposed techniques are compared against existing error-correction solutions for the same purpose. As discussed earlier, the proposed techniques are flexible in terms of the trade-off between storage overhead and degree of robustness against stuck-at faults (more auxiliary bits can be used to enhance the fault-tolerance capability of the system at the cost of increased storage overhead). Therefore, we compare the proposed method with existing ECC techniques of similar storage overhead.
In conventional memory systems, an ECC code such as a (72, 64) Hamming code is often used for addressing soft errors. A typical (72, 64) Hamming code incurs more than 10% storage overhead, which is roughly 10\(\times\) larger than the combined overhead of all the proposed techniques of CRAFT. Moreover, ECC schemes for addressing soft errors in conventional memory systems are not as effective in emerging NVMs, as shown and discussed in [12]. Instead, Error-Correcting Pointers (ECP) are often preferred for tolerating hard errors in emerging NVMs. For a fair comparison, we benchmark the proposed methods against the ECP variant that incurs a comparable storage overhead. For \(d\)-bit data, \(ECP_{n}\) is able to correct up to \(n\) bits with a storage overhead of \(\frac{1+n+n\lceil\log_{2}d\rceil}{d}\). For our experiments, we consider \(ECP_{1}\), which can correct a single-bit error in a 512-bit data block at the cost of a \(2.15\%\) storage overhead, which is still higher than that of CRAFT.
As discussed in Sec. IV, for a 512-bit data block (sixteen full-precision weights or sixty-four 8-bit quantized weights), the proposed techniques together require approximately 1.17% storage overhead. This amount is roughly half that of the evaluated ECP. Despite having a minimal storage overhead, CRAFT is found to be more effective than ECP, regardless of the DNN model type or dataset size. The detailed results are presented in the next section.
### _Results and Discussion_
The evaluation results of the proposed methods for CIFAR-10 and ImageNet are shown in Fig. 12 and Fig. 13, respectively. The solid black line shows the baseline models (with stuck-at faults) without any error-correction scheme. The dotted blue line indicates the models which incorporate ECP to mitigate stuck-at faults. The red dashed line illustrates models that use the _Intra-Block Address Remapping_ and _Weight Inversion_ techniques for stuck-at fault tolerance. The green dash-dotted line shows results for the CRAFT architecture, which includes _Weight Remapping + Weight Inversion + Criticality-Aware Bits Switching_. As seen from the results, CRAFT improves the robustness of the baseline models significantly and outperforms the ECP method by a significant margin.
Specifically, Fig. 12(a) shows the evaluation results of different fault-tolerance methods for ResNet-18 using the CIFAR-10 dataset. The baseline and ECP classification errors start to increase exponentially at bit error rates (BER) of \(1.5\times 10^{-7}\) and \(5\times 10^{-6}\), respectively. On the other hand, the classification error can be maintained below 10% up to around a \(5\times 10^{-5}\) BER when Weight Remapping and Weight Inversion are applied, and
Fig. 12: Comparison of different fault-tolerance techniques for various DNNs using CIFAR-10 dataset and 32-bit floating-point parameters (a) ResNet-18 (b) VGG-19 (c) MobileNet-V2
Fig. 13: Comparison of different fault-tolerance techniques for various DNNs using Imagenet dataset and 8-bit quantized parameters (a) ResNet-50 (b) VGG-19 (c) Inception-V4
at \(2\times 10^{-4}\) when used together with Criticality-Aware Bits Switching in CRAFT. In other words, CRAFT increases the fault tolerance by more than 1200\(\times\) compared to the baseline and 3\(\times\) compared to using only the Weight Remapping + Inversion methods. A similar trend can also be observed for other networks using the CIFAR-10 dataset, such as VGG-19 and MobileNet-V2. For example, for VGG-19, while ECP can only improve the robustness of the model by up to 284\(\times\), CRAFT increases the robustness to 12,320\(\times\) over the baseline model, which is orders of magnitude higher than ECP. The gain of CRAFT over the baseline model for MobileNet-V2 is found to be 231\(\times\), which is 10\(\times\) higher than ECP and 4\(\times\) better than the previously proposed methods [24].
To confirm the general applicability of the proposed techniques, we also perform experiments on larger DNNs (ResNet-50, Inception-V4, etc.) with a larger dataset (i.e., ImageNet) (Fig. 13). As discussed in Sec. IV-C1, the 8-bit quantized networks show a significant improvement in robustness compared to the full-precision networks of Fig. 12 due to their smaller dynamic range. Regardless of this property, the proposed techniques still ensure a significant increase in robustness against stuck-at faults, as shown in Fig. 13. For example, for ResNet-50 on ImageNet (Fig. 13(a)), the network accuracy starts to drop at around a \(2\times 10^{-4}\) BER, while CRAFT can maintain the accuracy up to a \(5\times 10^{-3}\) BER, indicating 29\(\times\) more robustness than the baseline model. This trend is found to be consistent for the other DNNs evaluated on the ImageNet dataset. Specifically, CRAFT enhances the robustness of VGG-19 and Inception-V4 by up to 12\(\times\) and 23\(\times\), respectively. A summary of the robustness improvement over the baseline configurations using different fault-tolerance techniques is given in Table II. The improvement in robustness is obtained by measuring the BER at which each technique causes the classification error to increase by more than 5%. As shown in the table, CRAFT outperforms the existing techniques by orders of magnitude (up to \(10^{4}\) times over the baseline). While achieving a significant improvement in robustness against SAFs, as mentioned in Sec. IV-D, CRAFT incurs a minimal storage overhead (\(\approx\)1.17%), which makes CRAFT highly practical for implementation in resource-constrained edge devices.
Stuck-at faults in emerging NVMs are expected to become more frequent as the technology scales to smaller nodes. The two distinguishing features of CRAFT are its consideration of the criticality of errors and its flexibility to address more errors by employing fine-grained remapping with smaller block sizes. This approach makes CRAFT a robust and scalable method which can tackle future trends of hard errors in NVMs. Finally, the proposed techniques of CRAFT are orthogonal to existing error-correcting techniques; therefore, further enhancement in fault tolerance can be achieved by implementing CRAFT alongside the conventional techniques.
## VI Conclusion
Hard errors such as stuck-at faults can severely impact the accuracy of Deep Neural Network based systems which use emerging Non-Volatile Memories. This paper introduces a set of robust techniques, collectively named CRAFT, to enhance the error tolerance of DNNs against stuck-at faults. The proposed techniques are simple and lightweight yet effective in tackling the problem of stuck-at faults in DNN-based systems. Working in a hierarchical manner, the proposed CRAFT architecture remaps the weights, encodes them using a simple inversion method, and switches the bits within the weights based on their criticality, thus minimizing the impact of stuck-at faults on the neural network's accuracy. The evaluation results show that CRAFT is able to enhance the robustness of the system by orders of magnitude. Specifically, for DNNs evaluated on CIFAR-10, CRAFT can increase the robustness by up to \(10^{4}\) times compared to the baseline model. For DNNs using ImageNet, CRAFT enhances the robustness of the model by up to 29 times. Being orthogonal, the proposed techniques of CRAFT can easily be incorporated with other existing methods to further increase the fault tolerance of DNNs.
|
2306.11749 | Continuous and discontinuous compressible flows in a converging-diverging
channel solved by physics-informed neural networks without data | Physics-informed neural networks (PINNs) are employed to solve the classical
compressible flow problem in a converging-diverging nozzle. This problem
represents a typical example described by the Euler equations, thorough
understanding of which serves as a guide for solving more general compressible
flows. Given a geometry of the channel, analytical solutions for the steady
states indeed exist and they depend on the ratio between the back pressure of
the outlet and stagnation pressure of the inlet. Moreover, in the diverging
region, the solution may branch into subsonic flow, supersonic flow, and a
mixture of both with a discontinuous transition where a normal shock takes
place. Classical numerical schemes with shock-fitting/capturing methods have
been designed to solve this type of problem effectively, whereas the original
PINNs fail in front of the hyperbolic non-linear partial differential
equations. We make a first attempt to exploit the power of PINNs to directly
solve this problem by adjusting the weights of different components of the loss
function, to acquire physical solutions and meanwhile avoid trivial solutions.
With a universal setting yet no exogenous data, we are able to solve this
problem accurately, that is, for different given pressure ratios PINNs provide
different branches of solutions at both steady and unsteady states, some of
which are discontinuous in nature. | Liang Hong, Song Zilong, Zhao Chong, Bian Xin | 2023-06-19T06:01:19Z | http://arxiv.org/abs/2306.11749v2 | Continuous and discontinous compressible flows in a converging-diverging channel solved by physics-informed neural networks without data
###### Abstract
Physics-informed neural networks (PINNs) are employed to solve the classical compressible flow problem in a converging-diverging nozzle. This problem represents a typical example described by the Euler equations, thorough understanding of which serves as a guide for solving more general compressible flows. Given a geometry of the channel, analytical solutions for the steady states indeed exist and they depend on the ratio between the back pressure of the outlet and stagnation pressure of the inlet. Moreover, in the diverging region, the solution may branch into subsonic flow, supersonic flow, and a mixture of both with a discontinuous transition where a normal shock takes place. Classical numerical schemes with shock-fitting/capturing methods have been designed to solve this type of problem effectively, whereas the original PINNs fail in front of the hyperbolic non-linear partial differential equations. We make a first attempt to exploit the power of PINNs to directly solve this problem by adjusting the weights of different components of the loss function, to acquire physical solutions and meanwhile avoid trivial solutions. With a universal setting yet no exogenous data, we are able to solve this problem accurately, that is, for different given pressure ratios PINNs provide different branches of solutions at both steady and unsteady states, some of which are discontinuous in nature.
**AMS subject classifications**: 35L51, 76L05, 76N06
**Key words**: unsteady compressible flow, normal shock, physics-informed neural networks, direct numerical simulation.
## 1 Introduction
Euler equations embrace the conservation laws for inviscid fluids, which are often compressible at high speed [1]. It is infeasible to derive an analytical solution for this type of equation except in a few special cases. A discontinuous solution associated with a shock wave may be generated, due to the hyperbolic and non-linear properties of the partial differential equations (PDEs), when the fluid moves at speeds comparable to its speed of sound [2]. This poses further challenges for the development of numerical schemes and has therefore induced many ingenious efforts in the past few decades [3, 4]. There is generally a trade-off between accuracy and stability in numerical methods [5]. Low-order methods can produce stable but less accurate results, where the sharp profiles of shocks are smoothed. On the contrary, high-order methods are able to generate relatively accurate results, but are often troubled by instabilities and the Gibbs phenomenon near the discontinuities [6]. Furthermore, the requirement of stability imposes a strict CFL limit on the numerical methods, resulting in small time steps in simulations. These facts render simulations by traditional numerical methods both demanding in human expertise and computationally intensive [7, 8, 9, 10].
Recently, machine learning methods, in particular deep neural networks (DNNs), have received enormous attention and aim to replace human labor by network training and inference. Due to their tremendous successes in other fields, they also quickly became an alternative approach to solve PDEs [11, 12, 13, 14]. When solving the Euler equations, DNNs can be embedded into traditional numerical methods to facilitate accurate results [15, 16, 17]. As another paradigm driven by data science, DNNs can be trained by a large amount of analytical/experimental/simulation data, corresponding to so-called _supervised learning_. Once trained, the DNNs can offer solutions for the PDEs in interpolated and even slightly extrapolated regions of parameter space much faster than traditional numerical schemes. However, data are not always abundant in realistic applications, or are only partially accessible at best. To address this issue, physics-informed neural networks (PINNs) were proposed, which are trained by combining physical laws in the form of PDEs together with available data [18]. In contrast to undecorated DNNs, PINNs can be trained with scarce or even no data to solve a forward problem described by known PDEs. This corresponds to _unsupervised or weakly supervised learning_. Moreover, PINNs can also handle an inverse problem with ease, where some coefficient values in the PDEs are unclear, such as the viscosity in the Navier-Stokes equations [19]. Since their invention, there have already been many innovative works to improve the accuracy and efficiency of PINNs [20, 21, 22, 23, 24, 25]. Concerning discontinuous solutions of PDEs, quite a few works have been conducted with PINNs. In the seminal work on PINNs [18], a viscous term is added to smooth the shock produced by the Burgers' equation. Mao et al. [26] choose to distribute more sampling points in the discontinuous region identified beforehand, forming a cluster of points for better training. Compared with uniformly or randomly sampled distributions of points, their strategy achieves a higher precision. The conservative PINNs proposed by Jagtap et al. [27] solve the continuous and discontinuous regions separately, where in the discontinuous part a larger network and more data are selected for training. Their work demonstrates that a special division of the training area can yield more accurate solutions than other conventional divisions. Moreover, Patel et al. propose control-volume based PINNs [28] and combine them with a finite volume method, where no derivative exists in the discontinuous part. They define a loss function for a single control volume and eliminate the derivative operation by integrating the equation, to ensure a correct solution.
Different from previous works, we aim to employ PINNs to solve a forward problem of compressible flows _without exogenous data_, corresponding to unsupervised learning. In particular, we are interested in a typical flow problem taking place in a so-called de Laval nozzle or converging-diverging (CD) nozzle. Its actual solution depends on the ratio between the pressures at the outlet and inlet, and in the diverging region it may branch into subsonic flow, supersonic flow,
and a mixture of both with a discontinuous transition where a normal shock takes place. Since both analytical and numerical solutions for the steady states exist in textbooks [1, 3], this problem serves as an ideal benchmark to examine PINNs' performance for continuous and discontinuous compressible flows. A thorough understanding of the procedure for seeking solutions of this simple problem may provide guidance for solving more general compressible flows, whether auxiliary data are available or not.
A typical three-dimensional CD nozzle with axisymmetry is shown in Fig. 1(a). We assume that the velocity has only one component in the x direction, \(V(x)\), and it changes with the area \(A(x)\) of the cross section, which varies smoothly and slowly. Therefore, one-dimensional equations are adequate to describe the flows. The smallest cross section, named the throat, controls the flow rate when the flow is supersonic in the diverging part. According to the pressure ratio between the back pressure \(P_{b}\) at the outlet and the stagnation pressure \(P_{0}\) at the inlet, the flow characteristics vary, as shown in Fig. 1(b). In particular, flows with a mixture of subsonic-supersonic-subsonic properties take place, as in curves D or E, when \(P_{b}\) is moderate. Normal shock waves are generated at the transition from supersonic back to subsonic flow to raise the inner pressure to match the back pressure at the outlet. The generation of a shock wave is a non-isentropic process and manifests itself as a discontinuity in the context of continuum mechanics, which renders reliable solutions by traditional numerical methods difficult.
In this work, we adopt a specific version of PINNs with a universal setting to resolve the flows in the CD nozzle, whether there is a shock wave or not, without any auxiliary data. By varying \(P_{b}\), it offers different branches of solutions correctly. If there is a normal shock, it identifies the shock location accurately and meanwhile provides a sharp solution. The structure of the paper is arranged as follows. In Sec. 2, flow equations together with the structure and parameters of PINNs are introduced. In Sec. 3, hard constraints on the boundaries are introduced and weights of different loss functions are universally determined so that different flow characteristics are predicted at steady states. In Sec. 4 unsteady flows are solved and the influence of different hyper-parameters of the neural networks is analyzed to successfully improve the accuracy of solutions.
Figure 1: (a) one-dimensional approximation to the flow within a converging-diverging (CD) nozzle; (b) pressure distributions and flow characteristics along the CD nozzle according to various ratios between the back/outlet pressure and the stagnation/inlet pressure. Curves A and B correspond to subsonic flows; curve F corresponds to a subsonic-supersonic flow; curves D and E correspond to flows with normal shocks in-between supersonic-subsonic transitions.
In Sec. 5, governing equations in conservative form are discussed and finally, in Sec. 6 a summary is made.
## 2 The method
The one-dimensional Euler's equations are adopted to describe the flows and more specifically, a simple differential form is initially considered
\[\left\{\begin{aligned} A\frac{\partial\rho}{\partial t}+Av\frac{\partial\rho}{\partial x}+A\rho\frac{\partial v}{\partial x}+v\rho\frac{\partial A}{\partial x}&=0,\\ A\rho\frac{\partial v}{\partial t}+Av\rho\frac{\partial v}{\partial x}+A\frac{\partial P}{\partial x}&=0,\\ A\rho c_{v}\frac{\partial T}{\partial t}+Av\rho c_{v}\frac{\partial T}{\partial x}+P\left(A\frac{\partial v}{\partial x}+v\frac{\partial A}{\partial x}\right)&=0,\\ P-\rho RT&=0.\end{aligned}\right. \tag{2.1}\]
Here \(\rho\), \(v\), \(T\), and \(P\) are density, velocity, temperature, and pressure, respectively. \(R\) is the gas constant and \(c_{v}\!=\!R/(\gamma\!-\!1)\) is the specific heat at constant volume, with \(\gamma\) the specific-heat ratio. We fix the stagnation properties such as density \(\rho_{0}\), temperature \(T_{0}\), and pressure \(P_{0}\) at the inlet (left) and adjust the back pressure \(P_{b}\) at the outlet (right), to generate different flow regimes in the CD nozzle, as shown in Fig. 1. A typical set of stagnation values is \(\rho_{0}\!=\!1.52\,kg/m^{3}\), \(T_{0}\!=\!286.1\,K\), and \(P_{0}\!=\!1.247\times 10^{5}\,N/m^{2}\), respectively.
Flow properties are made dimensionless by \(\rho_{0}\), \(T_{0}\) and throat area \(A^{*}\) as follows:
\[\rho^{\prime}\!=\!\frac{\rho}{\rho_{0}},T^{\prime}\!=\!\frac{T}{T_{0}},A^{ \prime}\!=\!\frac{A}{A^{*}}.\]
Therefore, the sound speed at stagnation is \(a_{0}\!=\!\sqrt{\gamma RT_{0}}\), where \(\gamma\!=\!1.4\) is the specific-heat ratio of air. Furthermore, \(P^{\prime}\!=\!P/P_{0}\), \(v^{\prime}\!=\!v/a_{0}\), and \(x^{\prime}\!=\!x/\sqrt{A^{*}}\), \(t^{\prime}\!=\!ta_{0}/\sqrt{A^{*}}\). Eqs. (2.1) become dimensionless as
\[\left\{\begin{aligned} A^{\prime}\frac{\partial\rho^{\prime}}{\partial t^{\prime}}+A^{\prime}v^{\prime}\frac{\partial\rho^{\prime}}{\partial x^{\prime}}+A^{\prime}\rho^{\prime}\frac{\partial v^{\prime}}{\partial x^{\prime}}+v^{\prime}\rho^{\prime}\frac{\partial A^{\prime}}{\partial x^{\prime}}&=0,\\ A^{\prime}\left(\gamma\rho^{\prime}\frac{\partial v^{\prime}}{\partial t^{\prime}}+\gamma v^{\prime}\rho^{\prime}\frac{\partial v^{\prime}}{\partial x^{\prime}}+\frac{\partial P^{\prime}}{\partial x^{\prime}}\right)&=0,\\ A^{\prime}\rho^{\prime}\frac{\partial T^{\prime}}{\partial t^{\prime}}\left(\frac{1}{\gamma-1}\right)+A^{\prime}v^{\prime}\rho^{\prime}\frac{\partial T^{\prime}}{\partial x^{\prime}}\left(\frac{1}{\gamma-1}\right)+P^{\prime}\left(A^{\prime}\frac{\partial v^{\prime}}{\partial x^{\prime}}+v^{\prime}\frac{\partial A^{\prime}}{\partial x^{\prime}}\right)&=0,\\ P^{\prime}-\rho^{\prime}T^{\prime}&=0.\end{aligned}\right. \tag{2.2}\]
In the rest of this paper, all physical quantities shall appear in dimensionless form. Therefore, we remove all " \({}^{\prime}\) "s in the equations for convenience.
We define the cross section of the CD nozzle as a parabolic function along the \(x\) axis: \(A(x)\!=\!1+2.2\,(x\!-\!1.5)^{2}\), where the minimum area \(A^{*}\!=\!1\) takes place at \(x\!=\!1.5\), as shown in Fig. 2. This geometry is nothing special and only taken for convenience. The methodology presented later also applies to more general geometries.
The structure of the neural networks (NNs) is presented in Fig. 3, where it is trained without any auxiliary data, except for the boundary and initial conditions. The loss function
Figure 3: The structure of PINNs. On the left is a simple feedforward neural network to be trained while on the right is the physics information expressed in PDEs. A loss function composed of boundary conditions, initial conditions, and physics equations together guides the training of the neural network.
Figure 2: A converging-diverging nozzle with cross-section area \(A\) described by a parabolic function of \(x\) along the axis with \(x\in(0,2.25)\), where the throat has the minimum area \(A^{*}=1\) at \(x=1.5\). \(\rho_{0}\), \(T_{0}\) and \(P_{0}\) are the stagnation density, temperature, and pressure at the inlet (left), respectively. \(P_{b}\) is the back pressure at the outlet (right). We adjust \(P_{b}\) to vary the pressure ratio and to generate different flow regimes in the nozzle.
is expressed as follows
\[Loss=Loss_{BC}+Loss_{IC}+Loss_{F}, \tag{2.3}\]
where \(Loss_{BC}\), \(Loss_{IC}\), and \(Loss_{F}\) correspond to the sub-loss function of boundary conditions, initial conditions, and PDEs, respectively. The sub-loss function of the PDEs has four components originated from the four equations in Eqs. (2.2)
\[\left\{\begin{aligned} A\frac{\partial\rho}{\partial t}+Av \frac{\partial\rho}{\partial x}+A\rho\frac{\partial v}{\partial x}+v\rho\frac {\partial A}{\partial x}&=F_{1}(x,t),\\ A\left(\gamma\rho\frac{\partial v}{\partial t}+\gamma v\rho\frac {\partial v}{\partial x}+\frac{\partial P}{\partial x}\right)&=F _{2}(x,t),\\ A\rho\frac{\partial T}{\partial t}\left(\frac{1}{\gamma-1} \right)+Av\rho\frac{\partial T}{\partial x}\left(\frac{1}{\gamma-1}\right)+P \left(A\frac{\partial v}{\partial x}+v\frac{\partial A}{\partial x}\right)& =F_{3}(x,t),\\ P-\rho T&=F_{4}(x,t).\end{aligned}\right. \tag{2.4}\]
Here \(F_{1}(x,t)\), \(F_{2}(x,t)\), \(F_{3}(x,t)\), and \(F_{4}(x,t)\) represent residuals of the mass, momentum, energy, and state equations, respectively. We assume that pressure \(P\) and density \(\rho\) are two individual variables so that the NNs have both variables as outputs. Accordingly, \(P\) and \(\rho\) form a sub-loss function \(F_{4}(x,t)\) via the residual from the equation of state. Each component of the loss function has an associated weight \(\omega_{i}\) and in part they determine the optimization of the network parameters:
\[F=\omega_{1}F_{1}+\omega_{2}F_{2}+\omega_{3}F_{3}+\omega_{4}F_{4}. \tag{2.5}\]
By default \(\omega_{i}\!=\!1\) for \(i\!=\!1,2,3,4\). All the sub-loss functions are expressed in the form of mean squared errors (MSEs)
\[Loss_{BC} =MSE_{BC}, \tag{2.6}\] \[Loss_{IC} =MSE_{IC},\] (2.7) \[Loss_{F} =MSE_{F}=\sum_{i=1}^{4}\frac{\omega_{i}}{N_{F}}\sum_{j=1}^{N_{F} }\left|F_{i}\left(x_{j},t_{j}\right)\right|^{2}, \tag{2.8}\]
where \(F_{i}\left(x_{j},t_{j}\right)\) is the corresponding residual at point \((x_{j},t_{j})\) among all \(N_{F}\) training points. The actual expressions of \(Loss_{BC}\) and \(Loss_{IC}\) are problem-dependent and shall be described in later sections. Because the NNs evaluate derivatives via automatic differentiation (the chain rule), we do not need to approximate the partial-derivative terms by any special numerical scheme as in traditional numerical methods. In the process of minimizing the loss function towards zero, the back-propagation algorithm optimizes the parameters (weights and biases) of the NNs. In this context, the optimization process is also referred to as training. Once trained, the NNs can predict values of \(\rho\), \(v\), \(T\), and \(P\) for any given \(x\) and \(t\).
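To make the construction concrete, below is a minimal PyTorch sketch of how the residuals in Eqs. (2.4) can be assembled with automatic differentiation. The layer sizes mirror the 3-hidden-layer, 30-neuron tanh network used later in this paper, but all function and variable names are ours, and this is only an illustrative skeleton rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

gamma = 1.4

# Inputs (x, t) -> outputs (rho, v, T, P); P and rho are separate outputs,
# coupled through the equation-of-state residual F4.
net = nn.Sequential(
    nn.Linear(2, 30), nn.Tanh(),
    nn.Linear(30, 30), nn.Tanh(),
    nn.Linear(30, 30), nn.Tanh(),
    nn.Linear(30, 4),
)

def area(x):                      # A(x) = 1 + 2.2 (x - 1.5)^2
    return 1.0 + 2.2 * (x - 1.5) ** 2

def pde_residuals(x, t):
    """Residuals F1..F4 of Eqs. (2.4); derivatives come from autograd."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    rho, v, T, P = net(torch.cat([x, t], dim=1)).split(1, dim=1)

    def d(f, wrt):                # field derivative via autograd
        return torch.autograd.grad(f, wrt, torch.ones_like(f), create_graph=True)[0]

    A, dA = area(x), 4.4 * (x - 1.5)
    F1 = A * d(rho, t) + A * v * d(rho, x) + A * rho * d(v, x) + v * rho * dA
    F2 = A * (gamma * rho * d(v, t) + gamma * v * rho * d(v, x) + d(P, x))
    F3 = (A * rho * d(T, t) + A * v * rho * d(T, x)) / (gamma - 1.0) \
         + P * (A * d(v, x) + v * dA)
    F4 = P - rho * T
    return F1, F2, F3, F4

def loss_pde(x, t, w=(1.0, 1.0, 1.0, 1.0)):
    # Eq. (2.8): weighted mean-squared residuals over the collocation points.
    return sum(wi * Fi.pow(2).mean() for wi, Fi in zip(w, pde_residuals(x, t)))
```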
## 3 Steady state solutions
We commence by solving the flow problems within the CD nozzle at steady states; therefore, the terms containing partial derivatives with respect to time in Eqs. (2.2) and (2.4) are temporarily
discarded. For the steady states, we can obtain accurate solutions via analytical methods and use them to evaluate the performance of PINNs. In Sec. 3.1 we solve the flows in the diverging part of the nozzle by imposing the critical state at the throat as a boundary condition. In Sec. 3.2, we capture the shock wave by modifying the NNs and obtain accurate solutions at a high resolution of sampling points. In Sec. 3.3, we investigate the effects of the number of training points on the accuracy of solutions.
### Diverging channel
The flow characteristics are relatively simple in the converging part of the CD nozzle, whereas they are rather complex in the diverging part. Therefore, we first impose the physical quantities at the throat as inlet boundary conditions and calculate the flows in the diverging part alone. The analytical solutions to this problem are expressed as follows,
\[\left\{\begin{aligned} \left[1\!+\!\frac{1}{2}\left(\gamma\!-\!1\right)Ma^{2}\right]^{\frac{1}{\gamma-1}}&=\frac{1}{\rho},\\ \frac{1}{Ma}\left[\frac{1\!+\!\frac{1}{2}\left(\gamma\!-\!1\right)Ma^{2}}{\frac{1}{2}\left(\gamma\!+\!1\right)}\right]^{\frac{\gamma+1}{2\left(\gamma-1\right)}}&=\frac{A}{A^{*}},\\ 1\!+\!\frac{1}{2}\left(\gamma\!-\!1\right)Ma^{2}&=\frac{1}{T},\\ \left[1\!+\!\frac{1}{2}\left(\gamma\!-\!1\right)Ma^{2}\right]^{\frac{\gamma}{\gamma-1}}&=\frac{1}{P},\end{aligned}\right. \tag{3.1}\]
which are valid for both subsonic and supersonic flows. As they are not in explicit form, some iterative procedures are necessary and we adopt a web-based applet to calculate accurate solutions with 8 decimal digits as references [29].
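In the same spirit as that iterative procedure, the following Python sketch inverts the area-Mach relation of Eqs. (3.1) by bisection; the reference applet [29] may use a different root-finding method, and the function names here are ours. Since the relation is double-valued in \(Ma\), the subsonic or supersonic branch must be selected explicitly.

```python
gamma = 1.4

def area_ratio(Ma):
    """A/A* as a function of Mach number, from Eqs. (3.1)."""
    return (1.0 / Ma) * ((1.0 + 0.5 * (gamma - 1.0) * Ma**2)
                         / (0.5 * (gamma + 1.0))) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def mach_from_area(A_over_Astar, supersonic, tol=1e-12):
    """Bisection on the chosen branch: A/A* decreases with Ma for Ma < 1
    and increases for Ma > 1, with a minimum of 1 at the throat."""
    a, b = (1.0, 50.0) if supersonic else (1e-6, 1.0)
    for _ in range(200):
        m = 0.5 * (a + b)
        too_high = area_ratio(m) > A_over_Astar
        if too_high == supersonic:   # overshoot direction differs per branch
            b = m
        else:
            a = m
        if b - a < tol:
            break
    return 0.5 * (a + b)

# Once Ma is known, the isentropic ratios of Eqs. (3.1) give the local state.
Ma = mach_from_area(1.5, supersonic=True)
T = 1.0 / (1.0 + 0.5 * (gamma - 1.0) * Ma**2)
rho, P = T ** (1.0 / (gamma - 1.0)), T ** (gamma / (gamma - 1.0))
```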
More specifically, we take the critical state of air reaching the speed of sound at the throat. Accordingly, the inlet boundary conditions for the diverging channel are
\[\left(\rho_{x=1.5},\upsilon_{x=1.5},T_{x=1.5},P_{x=1.5}\right)=\left(0.634,0.9 12,0.833,0.528\right).\]
Given the geometry, the analytical solution may be subsonic, supersonic, or a mixture of both with a discontinuous shock. Both the subsonic and supersonic solutions are smooth and unique, such as curves C and F in Fig. 1(b), and are available from Eqs. (3.1). Moreover, there are infinitely many solutions, each of which has a discontinuous shock, such as curves D and E in Fig. 1(b), with more not shown. Each such solution is unique for one specific boundary condition at the outlet. Out of curiosity, however, we intentionally leave the outlet boundary free, to interrogate what PINNs generate.
The computational domain is \(x\!\in\![1.5,2.25]\) and the loss function for the inlet boundary, expressed as MSEs, is defined as
\[MSE_{BC}=MSE_{\rho}+MSE_{v}+MSE_{T}+MSE_{P}, \tag{3.2}\]
with:
\[\begin{split} MSE_{\rho}&=\frac{1}{N_{BC}}\sum_{j=1}^{N_ {BC}}(\left|\rho_{NN}\left(x_{j}=1.5\right)-\rho(x_{j}=1.5)\right|^{2}),\\ MSE_{v}&=\frac{1}{N_{BC}}\sum_{j=1}^{N_{BC}}(\left|v _{NN}\left(x_{j}=1.5\right)-v(x_{j}=1.5)\right|^{2}),\\ MSE_{T}&=\frac{1}{N_{BC}}\sum_{j=1}^{N_{BC}}(\left|T _{NN}\left(x_{j}=1.5\right)-T(x_{j}=1.5)\right|^{2}),\\ MSE_{P}&=\frac{1}{N_{BC}}\sum_{j=1}^{N_{BC}}(\left|P _{NN}\left(x_{j}=1.5\right)-P(x_{j}=1.5)\right|^{2}).\end{split} \tag{3.3}\]
Here the values with subscript "\({}_{NN}\)" are the ones predicted by the NN, while the bare values are the imposed ones. The NN's weight and bias parameters are initialized by the Glorot scheme [30]. This convention also applies later on. The NN has 3 hidden layers, each layer has 30 neurons, and \(\tanh\) is the activation function. We choose \(N_{BC}=1\) and \(N_{F}=100\) points in the x direction, for the inlet boundary and the collocation points within the physical domain, respectively. To investigate clearly the influence of other parameters on the results, we intentionally choose all training points to be uniformly distributed.
Figure 4: Evolution of the loss function for a typical instance of PINNs for solving the flows in the diverging channel: with Adam and L-BFGS optimizers, each component of the loss function descends towards minimization with more training epochs.
The training proceeds with Adam optimizer of learning rate 0.0001 for 2000 epochs and continues with L-BFGS optimizer for 500 epochs [31, 32]. A typical evolution of the loss function is shown in Fig. 4, where each component descends clearly towards minimization with more training epochs. After completion of the training, 50 evenly distributed points in the computational domain are selected for predicting \(\rho\), \(Ma\!=\!v/c\!=\!v/\sqrt{\gamma RT}\), \(T\) and \(P\), which corresponds to a simple forward pass of the NNs. Since both sets of points are evenly distributed, the prediction points are actually a subset of the training points. This convention also applies later on, unless otherwise stated.
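Continuing the sketch above, the two-stage Adam/L-BFGS training can be organized as follows. The residuals shown are one standard non-dimensional steady quasi-1D Euler form; the paper's governing equations (referred to as Eq. (4) in the text) are not reproduced in this section, so the exact residual forms and the nozzle profile here are assumptions.

```python
import torch
from torch.autograd import grad

GAMMA = 1.4
A_fun = lambda x: 1.0 + 2.2 * (x - 1.5) ** 2       # assumed nozzle profile A(x)
dlnA = lambda x: 4.4 * (x - 1.5) / A_fun(x)        # d(ln A)/dx

x_f = torch.linspace(1.5, 2.25, 100).reshape(-1, 1).requires_grad_(True)  # N_F = 100

def mse_pde():
    rho, v, T, P = model(x_f).split(1, dim=1)
    d = lambda f: grad(f, x_f, torch.ones_like(f), create_graph=True)[0]
    # assumed steady, non-dimensional quasi-1D Euler residuals (schematic form)
    f1 = v * d(rho) + rho * d(v) + rho * v * dlnA(x_f)            # mass
    f2 = rho * v * d(v) + d(P) / GAMMA                            # momentum
    f3 = v * d(T) + (GAMMA - 1.0) * T * (d(v) + v * dlnA(x_f))    # energy
    f4 = P - rho * T                                              # equation of state
    return sum((f ** 2).mean() for f in (f1, f2, f3, f4))

def total_loss():
    return mse_bc() + mse_pde()

adam = torch.optim.Adam(model.parameters(), lr=1e-4)    # stage 1: Adam, 2000 epochs
for _ in range(2000):
    adam.zero_grad()
    total_loss().backward()
    adam.step()

lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=500)   # stage 2: L-BFGS
def closure():
    lbfgs.zero_grad()
    loss = total_loss()
    loss.backward()
    return loss
lbfgs.step(closure)

x_pred = torch.linspace(1.5, 2.25, 50).reshape(-1, 1)   # prediction: one forward pass
rho_p, v_p, T_p, P_p = model(x_pred).detach().split(1, dim=1)
```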
With the same setting, we run multiple instances of PINNs to predict the flow solutions. Surprisingly, for each instance we may obtain one of two different solutions randomly. One is for supersonic flows and the other is for subsonic flows, as shown in Fig. 5. Moreover, both the
Figure 5: Solutions for flows in the diverging channel. Only inlet boundary conditions are imposed while the outlet boundary conditions are intentionally left free. Each running instance of PINNs generates an accurate supersonic or subsonic solution randomly, due to the inherent randomness during the initialization of the NN’s parameters. Furthermore, PINNs deliberately avoid the subtle branch of infinitely many solutions, which involve discontinuous shocks.
subsonic and supersonic solutions of PINNs are in excellent agreement with the analytical ones described by Eqs. (3.1). After many instances of training and prediction, this observation is repeatable: without the outlet boundary condition, PINNs always find one of the two smooth solutions. We attribute this uncertain outcome to the random initialization of the NN's parameters [30]. Nevertheless, without an appropriate boundary condition for the outlet, PINNs deliberately circumvent the discontinuous solutions.
### Converging-diverging nozzle
We continue to consider the whole geometry of the CD nozzle. Stagnation values of density, temperature, and pressure are given at the inlet, while a back pressure \(P_{b}\) is provided at the outlet. Depending on the value of \(P_{b}\), different flow characteristics occur. For smooth solutions of both subsonic and supersonic flows, the analytical expressions in Eq. (3.1) are utilized. For a solution with a normal shock in the diverging part, we refer to the Rankine-Hugoniot equations as follows:
\[\left\{\begin{aligned}\frac{(\gamma+1)Ma_{1}^{2}}{(\gamma-1)Ma_{1}^{2}+2}&=\frac{\rho_{2}}{\rho_{1}},\\ \frac{(\gamma-1)Ma_{1}^{2}+2}{2\gamma Ma_{1}^{2}-(\gamma-1)}&=Ma_{2}^{2},\\ \left[2+(\gamma-1)Ma_{1}^{2}\right]\frac{2\gamma Ma_{1}^{2}-(\gamma-1)}{(\gamma+1)^{2}Ma_{1}^{2}}&=\frac{T_{2}}{T_{1}},\\ \frac{1}{\gamma+1}\left[2\gamma Ma_{1}^{2}-(\gamma-1)\right]&=\frac{P_{2}}{P_{1}},\end{aligned}\right. \tag{3.4}\]
which relate the physical values before and after the shock. A similar strategy is adopted to compute two smooth solutions, accurate to 8 decimal digits and pieced together at the shock, as references [29].
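The relations in Eqs. (3.4) map a pre-shock state to a post-shock state directly; a small helper, reusing the functions from the earlier sketch, might look as follows. The shock location \(x=1.875\) and \(\gamma=1.4\) are taken from the text, while the nozzle profile in the usage example is again an assumption.

```python
import math

def normal_shock(ma1, rho1, T1, P1, gamma=1.4):
    """Post-shock state from the Rankine-Hugoniot relations, Eqs. (3.4)."""
    ma2 = math.sqrt(((gamma - 1.0) * ma1 ** 2 + 2.0)
                    / (2.0 * gamma * ma1 ** 2 - (gamma - 1.0)))
    rho2 = rho1 * (gamma + 1.0) * ma1 ** 2 / ((gamma - 1.0) * ma1 ** 2 + 2.0)
    T2 = T1 * (2.0 + (gamma - 1.0) * ma1 ** 2) \
            * (2.0 * gamma * ma1 ** 2 - (gamma - 1.0)) / ((gamma + 1.0) ** 2 * ma1 ** 2)
    P2 = P1 * (2.0 * gamma * ma1 ** 2 - (gamma - 1.0)) / (gamma + 1.0)
    return ma2, rho2, T2, P2

# usage: pre-shock state just ahead of a shock at x = 1.875 on the supersonic branch
a_shock = 1.0 + 2.2 * (1.875 - 1.5) ** 2          # assumed nozzle profile, as above
ma1 = mach_from_area(a_shock, supersonic=True)
rho1, T1, P1 = isentropic_state(ma1)
print(normal_shock(ma1, rho1, T1, P1))
```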
The computational domain is: \(x\in[0,2.25]\). The following boundary conditions are applied in PINNs:
\[(\rho_{x=0},T_{x=0},P_{x=0})=(1.0,1.0,1.0),\quad P_{x=2.25}=P_{b}.\]
Therefore, the loss function for the boundary conditions is expressed as MSEs as follows:
\[MSE_{BC}=MSE_{\rho}+MSE_{T}+MSE_{P} \tag{3.5}\]
with:
\[MSE_{\rho}=\frac{1}{N_{BC}}\sum_{j=1}^{N_{BC}}(\left|\rho_{NN}\left(x_{j}=0\right)-\rho(x_{j}=0)\right|^{2}),\]
\[MSE_{T}=\frac{1}{N_{BC}}\sum_{j=1}^{N_{BC}}(\left|T_{NN}\left(x_{j}=0\right)- T(x_{j}=0)\right|^{2}), \tag{3.6}\]
\[MSE_{P}=\frac{1}{N_{BC}}\sum_{j=1}^{N_{BC}}(\left|P_{NN}\left(x_{j}=0\right)- P(x_{j}=0)\right|^{2}+\left|P_{NN}\left(x_{j}=2.25\right)-P(x_{j}=2.25)\right|^{2}).\]
The NN has 3 hidden layers with 30 neurons each. \(N_{BC}=2\) boundary points and \(N_{F}=200\) collocation points are evenly distributed in the \(x\) direction. The training proceeds with Adam optimizer for 2000 epochs and
continues with L-BFGS optimizer for 500 epochs. After training, 50 evenly distributed points are selected for predicting solutions.
Firstly, for \(P_{b}=0.07726\), the flow is subsonic in the converging part and completely supersonic in the diverging part. The solutions from PINNs are shown in Fig. 6, where the reference solutions according to Eqs. (3.1) are also presented for comparison. We observe that PINNs with a default setting offer excellent accuracy for this type of compressible flow, where there is a smooth transition from a subsonic flow to a supersonic flow at the throat \(x=1.5\), corresponding to curve F in Fig. 1(b).
Furthermore, we set \(P_{b}=0.81017\). According to Eqs. (3.1) and (3.4), a normal shock wave is expected at \(x=1.875\) in the diverging part of the nozzle. With a default setting, PINNs offer solutions that are far away from the references, as shown in Fig. 7. An examination of the evolution of the loss function during training reveals that the troublemakers are the losses for the boundary condition and the momentum equation, as shown in Fig. 8. Increasing the number of training epochs
Figure 6: PINNs’ solutions for supersonic continuous flows: with \(P_{b}=0.07726\), the solutions of \(\rho\), \(Ma\), \(T\), and \(P\) are smooth, with subsonic flow in the converging part while supersonic flow in the diverging part. References are taken from the analytical solutions governed by Eq. (3.1).
Figure 7: PINNs’ solutions for subsonic and supersonic discontinuous flows. With \(P_{b}=0.81017\), the analytical solutions of \(\rho\), \(Ma\), \(T\), and \(P\) are governed by Eqs. (3.1) and (3.4). Solutions should be smooth and subsonic in the converging part and become discontinuous at \(x=1.875\) in the diverging part, where a normal shock wave is expected. PINNs with default settings fail to reproduce the correct solutions, and increasing the number of training points and/or epochs does not help.
Figure 8: Evolution of the loss function of PINNs for solving the flows in the CD nozzle with \(P_{b}=0.81017\), where a normal shock is expected in the diverging part: with Adam and L-BFGS optimizers, components of the loss function are reluctant to descend towards minimization, especially those for boundary (BC) and momentum equation (F2).
tenfold from 2500 to 25000 does not improve the descent of the loss function. Increasing the number of sampling points tenfold from 200 to 2000 does not help either (results are not shown).
A few notes are in order. Since we employ PINNs as a direct numerical simulation tool without prior data, they are not aware of the shock location in advance and cannot distribute more sampling points around the shock. This renders an accurate prediction of the discontinuous flow by PINNs challenging. After a closer inspection of Fig. 7(b), we observe that the velocity/Mach number profile is almost flat around zero. This indicates that during the training the optimizer of the NNs ignores the residual of the momentum equation. Leaving a flat velocity implies that all velocity terms can be discarded from \(F_{1}\), \(F_{2}\) and \(F_{3}\) in Eq. (4), since \(\partial v/\partial x\approx 0\). We interpret \(v(x)=constant\) as a trivial, but wrong solution. For the losses of \(F_{1}\) and \(F_{3}\), the term \(\partial A/\partial x\) is known from the geometry, which allows these residuals to descend towards zero, as shown in Fig. 8. However, with the trivial solution of \(v(x)=constant\), the pressure cannot satisfy the two Dirichlet boundary conditions at the inlet and outlet simultaneously; that is, \(\partial P/\partial x=0\) is impossible, as shown in Fig. 7(d). With \(\partial v/\partial x\approx 0\), \(\partial P/\partial x\) is the only term left in \(F_{2}\), which does not descend towards zero. Consequently, the losses for the boundary condition and \(F_{2}\) are both large and do not descend easily, as shown in Fig. 8.
Motivated by the observations and speculations above, we introduce two small modifications into the vanilla version of PINNs. Firstly, we redistribute the weights between the components of the loss function for the PDEs as \(\omega_{1}:\omega_{2}:\omega_{3}:\omega_{4}=1:20:1:1\), to enhance the descent of
Figure 9: Evolution of the loss function of PINNs for solving the flows in the CD nozzle with \(P_{b}=0.81017\), where a normal shock is expected in the diverging part: with Adam and L-BFGS optimizers. With more weight on the momentum equation and hard constraints on the Dirichlet boundary conditions, all components of the loss function descend easily.
Figure 10: PINNs’ solutions for subsonic and supersonic flows with discontinuity. With \(P_{b}=0.81017\), the analytical solutions of \(\rho\), \(Ma\), \(T\), and \(P\) are governed by Eqs. (3.1) and (3.4). Solutions should be smooth and subsonic in the converging part and become discontinuous at \(x=1.875\) in the diverging part, where a normal shock wave is expected. PINNs with a proper weight on the momentum loss function and hard constraints on the Dirichlet boundary conditions identify the shock location and reproduce the discontinuous flows accurately with ease.
momentum residual. In addition, we employ hard constraints on pressure to satisfy the Dirichlet boundary conditions exactly, instead of minimizing the MSEs. The NN still has 3 hidden layers with 30 neurons each. Moreover, 2000 uniformly distributed points in the x direction are selected as training points. The training proceeds with Adam for 20000 epochs and continues with L-BFGS for 5000 epochs. The improved evolution of the loss function during training is shown in Fig. 9, where a clear descent of all components is observed. Furthermore, the new predictions of PINNs are shown in Fig. 10, where a normal shock wave is observed at \(x=1.875\) and sharp profiles are reproduced accurately for \(\rho\), \(Ma\), \(T\) and \(P\) with 50 points. We emphasize that with this new setting, the location of the shock is identified by PINNs automatically without any further effort. We note that both the modified weights and the hard constraints on the Dirichlet boundary conditions are necessary for accurate solutions.
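One common way to realize the two modifications is sketched below: a trial function that satisfies the pressure Dirichlet conditions exactly at both ends, plus a weighted residual loss. The paper does not spell out its trial function, so the particular linear-plus-vanishing blending used here is an assumption; the boundary values and weights follow the text.

```python
import torch

X0, X1 = 0.0, 2.25
P_IN, P_B = 1.0, 0.81017
WEIGHTS = (1.0, 20.0, 1.0, 1.0)     # omega_1 : omega_2 : omega_3 : omega_4 = 1 : 20 : 1 : 1

def pressure_hard(x, p_raw):
    """Trial function satisfying P(X0) = P_IN and P(X1) = P_B exactly.

    p_raw is the NN's unconstrained pressure output; the blending term
    s*(1-s)*p_raw vanishes at both boundaries, so the NN only adjusts the interior.
    """
    s = (x - X0) / (X1 - X0)
    return P_IN * (1.0 - s) + P_B * s + s * (1.0 - s) * p_raw

def weighted_pde_loss(residuals):   # residuals = (f1, f2, f3, f4)
    return sum(w * (f ** 2).mean() for w, f in zip(WEIGHTS, residuals))
```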
Lastly, we set \(P_{b}=0.95055\) so that a subsonic flow is expected in the entire nozzle. Results
Figure 11: PINNs’ solutions for subsonic flows. For \(P_{b}=0.95055\), a subsonic flow is expected in the entire nozzle, as curve B in Fig. 1(b). PINNs with a default setting cannot deal with this continuous problem properly, whereas PINNs with modified weights and hard constraints on the Dirichlet boundary conditions predict the flows accurately.
of the two versions of PINNs are presented in Fig. 11. We observe that PINNs with a default setting provide a trivial and wrong solution of \(v(x)\approx 0\) and also incorrect solutions for \(\rho\), \(T\) and \(P\). With a modified weight on the momentum loss and hard constraints on the Dirichlet boundary conditions, PINNs are able to predict accurately the continuous subsonic flow in the entire nozzle, which reverses its trend after the throat at \(x=1.5\). This corresponds to curve B in Fig. 1(b).
With this new setting, PINNs are also able to reproduce Figs. 5 and 6, results of which are omitted. Therefore, we shall continue to use this version of PINNs for the rest of the work.
### Effects of resolution
In this section, the effects of the number of training points for PINNs are explored. We exemplify this study with \(P_{b}=0.81017\), where a normal shock is expected in the diverging part of the nozzle. Four different numbers of training points are selected: 200, 500, 1000 and 2000, uniformly distributed in the x direction. After training, physical quantities at 50 uniformly distributed points are selected for predictions. As shown in Fig. 12, with all four training resolutions, the results are stable and accurate for density, Mach number, temperature and pressure. It is worth noting that there is no Gibbs phenomenon, which is typically observed in traditional numerical methods.
Figure 12: The number of training points is 200, 500, 1000 and 2000, respectively. The prediction has 50 points. They are all evenly distributed in the \(x\) direction.
## 4 Time-dependent solutions
A more realistic flow in a CD nozzle has transient states from rest to the steady state. There are no analytical solutions for this time-dependent flow, which makes a numerical procedure necessary. In this section, we leverage the power of PINNs to tackle this time-dependent problem, possibly with normal shocks. In Sec. 4.1, PINNs are employed to solve the time-dependent flows whose steady state is supersonic in the diverging part of the nozzle. In Sec. 4.2, the influences of the NNs' parameters, such as the network size, the number of training points, and different distributions of training points, on the solution accuracy are discussed. In Sec. 4.3, different initial conditions are investigated for time-dependent flows whose steady state has a normal shock.
### Subsonic-supersonic continuous flow
In this section, we solve the unsteady flow in the nozzle for the initial and boundary conditions given as follows
\[\begin{split}(\rho_{x}^{t=0},v_{x}^{t=0},T_{x}^{t=0},P_{x}^{t=0} )=(1.0,0.0,1.0,1.0),\\ (\rho_{x=0}^{t},T_{x=0}^{t},P_{x=0}^{t})=(1.0,1.0,1.0),\quad P_{x= 2.25}^{t}=P_{b}=0.07726.\end{split}\]
As shown in the previous section, the flow is supersonic in the diverging part of the nozzle for a steady flow. For a time-dependent flow, the computational domain in space and time is \(x\in[0,2.25]\) and \(t\in[0,8]\), respectively. Furthermore, the loss function for ICs expressed as MSEs is defined as
\[MSE_{IC}=MSE_{\rho}^{IC}+MSE_{v}^{IC}+MSE_{T}^{IC}+MSE_{P}^{IC} \tag{4.1}\]
with:
\[\begin{split} MSE_{\rho}^{IC}=&\frac{1}{N_{IC}} \sum_{j=1}^{N_{IC}}(\big{|}\rho_{NN}\big{(}x_{j},0\big{)}-\rho(x_{j},0)\big{|} ^{2}),\\ MSE_{v}^{IC}=&\frac{1}{N_{IC}}\sum_{j=1}^{N_{IC}}( \big{|}v_{NN}\big{(}x_{j},0\big{)}-v(x_{j},0)\big{|}^{2}),\\ MSE_{T}^{IC}=&\frac{1}{N_{IC}}\sum_{j=1}^{N_{IC}}( \big{|}T_{NN}\big{(}x_{j},0\big{)}-T(x_{j},0)\big{|}^{2}),\\ MSE_{P}^{IC}=&\frac{1}{N_{IC}}\sum_{j=1}^{N_{IC}}( \big{|}P_{NN}\big{(}x_{j},0\big{)}-P(x_{j},0)\big{|}^{2}).\end{split} \tag{4.2}\]
In addition, the boundary conditions for pressure are implemented as hard constraints as before. The NN has 3 hidden layers with 30 neurons each. Moreover, 100 points are selected for the boundary conditions and 150 points for the initial conditions. The training starts with Adam optimizer for 10000 epochs and continues with L-BFGS for 15000 epochs.
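A minimal sketch of the initial- and boundary-condition losses for the space-time problem is given below, assuming the network now takes \((x,t)\) as input (e.g., the earlier `PINN` class with `n_in=2`) and orders its outputs as \((\rho,v,T,P)\).

```python
import torch

N_IC, N_BC = 150, 100
x_ic = torch.linspace(0.0, 2.25, N_IC).reshape(-1, 1)
ic_vals = torch.tensor([[1.0, 0.0, 1.0, 1.0]]).expand(N_IC, 4)   # (rho, v, T, P) at t = 0

def mse_ic(model):
    """Initial-condition loss, Eqs. (4.1)-(4.2)."""
    xt = torch.cat([x_ic, torch.zeros_like(x_ic)], dim=1)   # inputs (x, t) with t = 0
    return ((model(xt) - ic_vals) ** 2).mean(dim=0).sum()

t_bc = torch.linspace(0.0, 8.0, N_BC).reshape(-1, 1)
inlet_vals = torch.tensor([[1.0, 1.0, 1.0]])                # (rho, T, P) at x = 0

def mse_bc_inlet(model):
    """Soft inlet boundary loss; the outlet pressure is handled by the hard constraint."""
    xt = torch.cat([torch.zeros_like(t_bc), t_bc], dim=1)
    pred = model(xt)[:, [0, 2, 3]]                          # pick the rho, T, P columns
    return ((pred - inlet_vals) ** 2).mean(dim=0).sum()
```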
The results of PINNs at 5 discrete time instants are shown in Fig. 13, where the flow quickly reaches a supersonic state in the diverging part and becomes steady for \(t\geqslant 6\). We note that the time evolution of each physical value (\(\rho\), \(Ma\), \(T\), and \(P\)) is always monotonic along the \(x\) direction. However, the evolution of all physical values indicates overshoot in time, that is, the steady profiles of \(\rho\), \(Ma\), \(T\), and \(P\) for \(t\geqslant 6\) are in-between the profiles at \(t=1\) and \(t=2\). Furthermore,
Figure 13: PINNs’ solutions for time-dependent subsonic-supersonic flows at \(t=0,1,2,3,6,8\). Density, Mach number, temperature and pressure reach steady states for \(t\geq 6\).
Figure 14: PINNs’s solutions for time-dependent subsonic-supersonic flows with continuous evolutions for density, Mach number, temperature and pressure.
we present continuous maps of the physical values in both space and time in Fig. 14. As with the discrete instants in Fig. 13, all four physical values in the continuous color maps are monotonic along the \(x\) direction, but overshoot in time before reaching steady states.
The training loss of PINNs for this problem is shown in Fig. 15, where concatenating L-BFGS after Adam is effective in reducing the loss to a much lower level.
### Exploration of neural networks' parameters for discontinuous flows
The computational domain is given as \(x\in[0,2.25]\) and \(t\in[0,25]\). The initial and boundary conditions for the time-dependent flow are as follows
\[(\rho_{x}^{t=0},v_{x}^{t=0},T_{x}^{t=0},P_{x}^{t=0})=(1.0,0.0,1.0,1.0)\,,\] \[(\rho_{x=0}^{t},T_{x=0}^{t},P_{x=0}^{t})=(1.0,1.0,1.0),\quad P_{x=2.25}^{t}=P_{b}=0.81017.\]
The steady state with these boundary conditions was studied in Sec. 3.2, which corresponds to a flow with a normal shock at \(x=1.875\) in the diverging part. When solving the unsteady process with discontinuity, we find that the NNs with the previous setting result in a poor prediction of the solutions at steady state. This is understandable, as we have one extra dimension of time for the physical quantities to evolve. Therefore, we explore the effects of the parameters of the NNs and examine PINNs' solutions at steady states after going through the time-dependent states.
We employ four sets of parameters as listed in Table 1, where we have two architectures of NNs: 3 hidden layers \(\times\) 30 neurons and 4 hidden layers \(\times\) 50 neurons. For the training
Figure 15: Training loss of PINNs for the continuous time-dependent flows.
points, we have \(100\times 100\) regular points uniformly distributed in the whole space-time domain \(x\times t\in[0,2.25]\times[0,25]\). As an attempt to enhance resolution, we add \(30\times 30\) extra points uniformly distributed in the space-time domain of the diverging region of the nozzle \(x\times t\in[1.5,2.25]\times[0,25]\). For the initial and boundary conditions, \(150\) and \(100\) points are universally applied, respectively. Each training starts with Adam optimizer for \(10000\) epochs and continues with L-BFGS for \(15000\) epochs.
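The four sampling configurations of Table 1 reduce to two point sets, which can be assembled as below (a sketch; variable names are ours).

```python
import numpy as np

def spacetime_grid(nx, nt, x_rng, t_rng):
    """Regular (x, t) training points as an (nx*nt, 2) array."""
    x, t = np.meshgrid(np.linspace(*x_rng, nx), np.linspace(*t_rng, nt))
    return np.stack([x.ravel(), t.ravel()], axis=1)

base = spacetime_grid(100, 100, (0.0, 2.25), (0.0, 25.0))     # all four setups
extra = spacetime_grid(30, 30, (1.5, 2.25), (0.0, 25.0))      # refinement in the diverging part
pts_ab = base                                                 # setups NN_a and NN_b
pts_cd = np.concatenate([base, extra])                        # setups NN_c and NN_d
```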
For the time being, we discard PINNs' results at transient states and present solutions at steady states. We observe that the first setup of \(NN_{a}\) reproduces the steady states qualitatively, as shown in Fig. 16, where a normal shock can be identified, albeit at a location biased towards upstream. The overall profiles of \(\rho\), \(Ma\), \(T\) and \(P\) are accurate in the subsonic region before the shock, and deteriorate evidently after the shock. Next, we consider two individual improvements of the NNs: one with an enhanced number of layers and neurons, corresponding to setup \(NN_{b}\); another with an enlarged number of sampling points in the diverging region of the nozzle, corresponding to setup \(NN_{c}\). Both setups improve the results substantially, but the solutions are still not sufficiently accurate, as shown in Figs. 22 and 23 in Appendix A. Furthermore, we look at PINNs' results with setup \(NN_{d}\) in Fig. 17, which represents the combined efforts of improving the NN's architecture and increasing the number of training points. We observe that PINNs solving the unsteady process predict the steady states correctly, with the same accuracy as the solutions of a steady flow solved by PINNs.
Lastly, we present the complete evolution of \(\rho\), \(Ma\), \(T\) and \(P\) in Fig. 18. From the color maps of the physical values in the whole space-time domain, we may approximately divide the evolution into three stages in time. At the first stage, for \(t\lesssim 5\), the flow has clearly subsonic characteristics: all four physical values develop rapidly, but always have continuous profiles. At the second stage, between \(5\lesssim t\lesssim 8.5\), supersonic characteristics of the flow arise downstream of the throat at \(x=1.5\), but the physical values are still continuous. At the third stage, for \(t\gtrsim 8.5\), discontinuous phenomena emerge. Later on, the physical values settle for \(t\gtrsim 10\), where a sharp interface in the physical values is evident at \(x=1.875\).
Figure 16: PINNs’ results with setup \(NN_{a}\) for steady states from unsteady process: 3 hidden layers and each layer 30 neurons; Regular \(100\!\times\!100\) training points for space-time domain \(x\!\times\!t\!\in\![0,2.25]\times\![0,25]\).
Figure 17: PINNs’ results with setup \(NN_{d}\) for steady states from unsteady process: 4 hidden layers and each layer 50 neurons; Regular \(100\times 100\) training points for space-time domain \(x\times t\in[0,2.25]\times[0,25]\). Extra \(30\times 30\) training points for space-time domain \(x\times t\in[1.5,2.25]\times[0,25]\).
Figure 18: PINNs’s results for time-dependent subsonic-supersonic flows with discontinuous evolutions for density, Mach number, temperature and pressure. The setup of PINNs is with \(NN_{d}\) as listed in Table 1.
### Different initial conditions

As the first initial condition, we take the one used above, which corresponds to the scenario where the inlet is already open and the outlet is closed before the flow starts. Therefore, the density and pressure inside the nozzle are identical to the stagnation values at the inlet. As a second initial condition, we consider that the outlet is open and the inlet is closed before the flow starts. Therefore, the density and pressure inside the nozzle are identical to the values at the outlet. The values for the second initial condition are as follows
\[(\rho_{x}^{t=0},v_{x}^{t=0},T_{x}^{t=0},P_{x}^{t=0})=(\rho_{b}=0.81017,0.0,1.0,P _{b}=0.81017).\]
For both cases, the temperature initially has the stagnation value of \(T=1.0\) in the entire nozzle, and the boundary conditions are identical, as follows
\[(\rho_{x=0}^{t},T_{x=0}^{t},P_{x=0}^{t})=(1.0,1.0,1.0),\quad P_{x=2.25}^{t}=P_{ b}=0.81017.\]
We observe that both flows develop rapidly from the two different initial conditions. After \(t=1\), the two paths of transition states of all physical values are already very similar, as shown in
Figure 19: PINNs’ results at 6 instants of short time for time-dependent discontinuous flows with the first initial conditions: before the flow the inlet is open while the outlet is closed.
Figure 20: PINNs’ results at 6 instants of short time for time-dependent discontinuous flows with the second initial conditions: before the flow the inlet is closed while the outlet is open.
Figs. 24 and 25 in Appendix B. Therefore, here we only present the two sets of results for \(t\in[0,1]\) in Figs. 19 and 20. With the first initial conditions, \(\rho\), \(v\) and \(T\) evolve quickly but smoothly except for \(P\), as there is a discontinuous jump between the boundary value and inner initial values at the outlet. Nevertheless, the pressure becomes continuous after \(t=0.2\), as shown in Fig. 19(d). With the second initial conditions, \(v\) and \(T\) values evolve quickly but smoothly except for \(\rho\) and \(P\), as there are discontinuous jumps between the boundary value and inner initial values at the inlet. Nevertheless, both the density and pressure become continuous after \(t=0.2\), as shown in Figs. 20(a) and (d).
## 5 Solutions in conservative form
In classical numerical methods, a conservative form of the PDEs is favored, as numerical solutions of the non-conservative form tend to be unstable. Therefore, we consider the solution procedure of PINNs in the context of the conservative form, which is given as follows
\[\partial_{t}U+\nabla\cdot K=J. \tag{5.1}\]
\(U,K\) and \(J\) are vectors: \(U=\begin{pmatrix}U_{1}\\ U_{2}\\ U_{3}\end{pmatrix},\;K=\begin{pmatrix}K_{1}\\ K_{2}\\ K_{3}\end{pmatrix},\;J=\begin{pmatrix}J_{1}\\ J_{2}\\ J_{3}\end{pmatrix},\) and they are defined as
\[\left\{\begin{aligned} \rho A&=U_{1},\\ \rho Av&=U_{2},\\ \rho A(\frac{T}{\gamma-1}+\frac{\gamma}{2}v^{2})&=U_{3}. \end{aligned}\right. \tag{5.2}\]
\[\left\{\begin{aligned} \rho Av&=K_{1},\\ \rho Av^{2}+\frac{1}{\gamma}PA&=K_{2},\\ \rho Av\left(\frac{T}{\gamma-1}+\frac{\gamma}{2}v^{2}\right)+PAv&=K_{3}.\end{aligned}\right. \tag{5.3}\]
\[\left\{\begin{aligned} 0&=J_{1},\\ \frac{1}{\gamma}P\frac{\partial A}{\partial x}&=J_{2},\\ 0&=J_{3}.\end{aligned}\right. \tag{5.4}\]
Accordingly, we adjust the outputs of the NNs to be \(U_{1},U_{2},U_{3}\) and \(P\), as shown in Fig. 21. To facilitate the construction of the loss function, we rewrite \(K_{i}\) as functions of the NNs' outputs \(U_{i}\), as follows
\[\left\{\begin{aligned} U_{2}&=K_{1},\\ \frac{U_{2}^{2}}{U_{1}}+\frac{\gamma-1}{\gamma}(U_{3}-\frac{ \gamma}{2}\frac{U_{2}^{2}}{U_{1}})&=K_{2},\\ \gamma\frac{U_{2}U_{3}}{U_{1}}-\frac{\gamma(\gamma-1)}{2}\frac{U_{2}^{3}}{U_{ 1}^{2}}&=K_{3},\end{aligned}\right. \tag{5.5}\]
where we have used the fact \(P=\rho T\) from the equation of state. Therefore, each component of the loss function is as follows
\[\left\{\begin{aligned} \frac{\partial U_{1}}{\partial t}+\frac{ \partial K_{1}}{\partial x}&=F_{1}(x,t),\\ \frac{\partial U_{2}}{\partial t}+\frac{\partial K_{2}}{\partial x }-\frac{1}{\gamma}P\frac{\partial A}{\partial x}&=F_{2}(x,t),\\ \frac{\partial U_{3}}{\partial t}+\frac{\partial K_{3}}{\partial x }&=F_{3}(x,t),\\ P-\frac{U_{1}}{A}(\gamma-1)\left(\frac{U_{3}}{U_{1}}-\frac{ \gamma}{2}\frac{U_{2}^{2}}{U_{1}^{2}}\right)&=F_{4}(x,t).\end{aligned}\right. \tag{5.6}\]
Here \(F_{1}(x,t)\), \(F_{2}(x,t)\), \(F_{3}(x,t)\), and \(F_{4}(x,t)\) represent the residuals of the mass, momentum, energy, and state equations, respectively. For automatic differentiation, we have to expand all terms \(K_{i}\) in Eqs. (5.6) with \(U_{i}\) from Eqs. (5.5). These expansions quickly become unwieldy, with many division operations, which poses challenges for the gradient calculations and the optimization of the NNs' parameters. Despite our best efforts, the loss function does not descend easily and no meaningful predictions are made by PINNs with the conservative form.
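For illustration, the conservative-form residuals of Eqs. (5.5)–(5.6) can be assembled as below (a sketch continuing the earlier PyTorch conventions, with the area \(A(x)\) and \(\partial A/\partial x\) passed as callables); the nested divisions by \(U_{1}\) make the computational graph considerably more entangled than in the simple differential form.

```python
import torch
from torch.autograd import grad

def conservative_residuals(model, xt, A, dAdx, gamma=1.4):
    """Residuals F_1..F_4 of Eq. (5.6); the NN outputs are (U1, U2, U3, P)."""
    xt = xt.requires_grad_(True)                      # columns of xt are (x, t)
    U1, U2, U3, P = model(xt).split(1, dim=1)
    # fluxes K_i written in terms of U_i, Eq. (5.5) -- note the nested divisions by U1
    K1 = U2
    K2 = U2 ** 2 / U1 + (gamma - 1.0) / gamma * (U3 - 0.5 * gamma * U2 ** 2 / U1)
    K3 = gamma * U2 * U3 / U1 - 0.5 * gamma * (gamma - 1.0) * U2 ** 3 / U1 ** 2
    d = lambda f, i: grad(f, xt, torch.ones_like(f), create_graph=True)[0][:, i:i + 1]
    a, dadx = A(xt[:, :1]), dAdx(xt[:, :1])
    f1 = d(U1, 1) + d(K1, 0)                          # mass
    f2 = d(U2, 1) + d(K2, 0) - P * dadx / gamma       # momentum
    f3 = d(U3, 1) + d(K3, 0)                          # energy
    f4 = P - (U1 / a) * (gamma - 1.0) * (U3 / U1 - 0.5 * gamma * U2 ** 2 / U1 ** 2)
    return f1, f2, f3, f4
```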
## 6 Conclusions
We applied physics-informed neural networks (PINNs) to directly solve steady and time-dependent compressible flows within a converging-diverging channel, corresponding to unsupervised learning. Depending on the boundary conditions, the flow may be completely subsonic, subsonic with a smooth transition to supersonic, or further from supersonic via a discontinuous transition back to subsonic.
Firstly, for a simple diverging channel, when sonic boundary conditions are imposed at the inlet and the outlet is intentionally left free, PINNs provide a subsonic or a supersonic continuous solution at random and deliberately avoid solutions with discontinuity. The stochastic outcomes result from the randomness in the initialization of the neural networks (NNs). Secondly, for a converging-diverging nozzle, PINNs with a default setting cannot capture the transition from supersonic to subsonic flow with a discontinuity at the normal shock, even when given proper boundary conditions at both the inlet and outlet. Instead, the loss function is unwilling to descend
Figure 21: The structure of PINNs for Euler equations in conservative form.
during training and, consequently, a trivial solution with zero velocity and incorrect continuous profiles of the other physical values is obtained. The two examples above indicate that during training the optimizer somehow "minimizes its efforts" before minimizing the loss function: it is inclined to offer smooth solutions (right or wrong) and reluctant to find discontinuous solutions. Hence, a small remedy for PINNs is pertinent.
After a close inspection of the descent of each component of the loss function and the predictions of the physical values, we propose to put 20 times more weight on the minimization of the momentum equation and meanwhile to enforce hard constraints on the boundary conditions for pressure. This practice coincides with a recent effort put forward by Perdikaris' group [33], where a dynamic weight is proposed to balance the gradients among different components of the loss function, to mitigate gradient vanishing. For the problems considered in this work, we find a constant heavy weight on the momentum loss to be adequate for accurate solutions, after a few numerical experiments with trial and error.
With 90 neurons and 100 training points, the so-modified version of PINNs is able to deliver accurate steady-state solutions for subsonic flows, supersonic flows, and mixtures of both with a normal shock as a sharp transition from supersonic to subsonic flow. For unsteady processes, with 200 neurons and 16900 training points, PINNs are able to predict accurately the time-dependent flows until steady states. Whether for steady or unsteady flows, PINNs identify the locations of shocks accurately and offer very sharp profiles for the transitions without the Gibbs phenomenon. These results are promising, as they encourage us to apply PINNs to solve more discontinuous physical phenomena and to replace/supplement traditional numerical schemes to a certain extent. Finally, when the PDEs are expressed in the conservative form, which is favored by traditional numerical schemes, the output terms of the NNs and their corresponding loss function are entangled and not viable for an effective optimization. Consequently, the predictions offered by PINNs are incorrect. This indicates that PINNs prefer the simple differential form of the PDEs over the conservative form, as the former is more appropriate for straightforward automatic differentiation during optimization.
We envisage two research lines beyond this work. One is to explore PINNs as a direct numerical solver for more general compressible flows in two and three dimensions, especially with shock phenomena. From the performance on one-dimensional flows, it seems promising for PINNs to solve more compressible/discontinuous flows, where no sophisticated shock-capturing schemes are essential. Another is to supply PINNs with partially available experimental data, such as a few pressure values from sensors within the flow, to recover other physical values and/or to estimate unknown coefficients in the PDEs, such as wall frictions.
## Acknowledgments
X. Bian received the starting grant from 100 talents program of Zhejiang University. This work is partially supported by Hangzhou Shiguangji Intelligent Electronics Technology Co., Ltd, Hangzhou, China. The authors appreciate discussions with Dr. Bonan Xu and Mr. Yongzheng Zhu. |
2301.02792 | Linguistic-style-aware Neural Networks for Fake News Detection | We propose the hierarchical recursive neural network (HERO) to predict fake
news by learning its linguistic style, which is distinguishable from the truth,
as psychological theories reveal. We first generate the hierarchical linguistic
tree of news documents; by doing so, we translate each news document's
linguistic style into its writer's usage of words and how these words are
recursively structured as phrases, sentences, paragraphs, and, ultimately, the
document. By integrating the hierarchical linguistic tree with the neural
network, the proposed method learns and classifies the representation of news
documents by capturing their locally sequential and globally recursive
structures that are linguistically meaningful. It is the first work offering
the hierarchical linguistic tree and the neural network preserving the tree
information to our best knowledge. Experimental results based on public
real-world datasets demonstrate the proposed method's effectiveness, which can
outperform state-of-the-art techniques in classifying short and long news
documents. We also examine the differential linguistic style of fake news and
the truth and observe some patterns of fake news. The code and data have been
publicly available. | Xinyi Zhou, Jiayu Li, Qinzhou Li, Reza Zafarani | 2023-01-07T06:48:41Z | http://arxiv.org/abs/2301.02792v1 | # Linguistic-style-aware Neural Networks for Fake News Detection
###### Abstract
We propose the hierarchical recursive neural network (HERO) to predict fake news by learning its linguistic style, which is distinguishable from the truth, as psychological theories reveal. We first generate the hierarchical linguistic tree of news documents; by doing so, we translate each news document's linguistic style into its writer's usage of words and how these words are recursively structured as phrases, sentences, paragraphs, and, ultimately, the document. By integrating the hierarchical linguistic tree with the neural network, the proposed method learns and classifies the representation of news documents by capturing their locally sequential and globally recursive structures that are linguistically meaningful. It is the first work offering the hierarchical linguistic tree and the neural network preserving the tree information to our best knowledge. Experimental results based on public real-world datasets demonstrate the proposed method's effectiveness, which can outperform state-of-the-art techniques in classifying short and long news documents. We also examine the differential linguistic style of fake news and the truth and observe some patterns of fake news.3
Footnote 3: The code and data are available at [https://github.com/Code4Graph/HERO](https://github.com/Code4Graph/HERO).
fake news, neural network, linguistic style
## I Introduction
"Fake news," as deceptive and misleading news articles (or statements at times), has been broadly discussed along with its influence on democracies and economies [1]. Public health has also been negatively impacted, especially with the "infodemic" that we face along with the pandemic [2]. Effective fake news detection has thus become an urgent task to mitigate such detrimental impacts.
Psychological theories, such as _Undeutsch hypothesis_[3], have suggested that the linguistic style of fake news is distinguishable from that of the truth. Therefore, effective techniques can be designed to identify fake news by analyzing the linguistic style of news articles [1]. Linguistic style can be captured by looking at the writer's usage of words (lexically and semantically) and the way these words are further formed into sentences (syntactic level) and the document (discourse level) [1]. Within a machine learning framework, existing studies have captured a news article's linguistic style by computing the frequencies of each word [4, 5], part of speech (POS, at the syntactic level) [6, 7, 5], and rhetorical relationship (RR, at the discourse level) [8, 5]. These frequencies form a news article's representation, which is further classified by, e.g., support vector machines (SVM) and random forests to predict the news as fake news or the truth.
These studies have advanced linguistic-style-aware fake news prediction. However, translating a news article's linguistic style into the appearances of words, POSs, and RRs overlooks the linguistic structure that reveals how the article's words, POSs, and RRs are assembled. Specifically, we can form a hierarchical linguistic tree for each news article; see Section III-A for the details and Figure 1 for an illustrated tree for the news piece "Vitamin D determines severity in COVID-19, so government advice needs to change, experts urge: Researchers point to changes in government advice in Wales, England, and Scotland." This tree explicitly presents _the order of words_ used in the article, _syntactic structure_ revealing how these words are recursively structured as the elementary discourse units (EDUs, which are meaningful phrases, sentences, or paragraphs) through POSs, and _discourse structure_ exhibiting how these EDUs are recursively structured as the entire article through RRs. Previous approaches paid full attention to the tree's _node_ information by checking whether this news piece used a specific word (e.g., "COVID-19"), POS (e.g., "NNP"), or RR (e.g., "NS-elaboration") in the corpus, or how many times it appears, without considering the _relational (edge)_ information among the nodes. Although Zhou et al. [9] and Perez-Rosas et al. [4] computed the frequencies of production rules (at the syntactic level only), each rule can merely show the structure within a fundamental component of the tree (i.e., parent-children, such as VP \(\rightarrow\) VBZ NP). Each fundamental component is investigated independently, overlooking how components are connected to form the tree; the tree's structure is hence preserved _locally_ rather than _globally_. In addition, the representation of news articles obtained by the frequencies of these local structures is often high-dimensional and sparse, which can be adverse to the prediction task.
**Present work.** To address the above problems, we propose the hierarchical recursive neural network (HERO) for fake news prediction. The architecture of the proposed neural network adaptively preserves the global structure of the hierarchical linguistic tree of various news articles. To our best knowledge,
this is the first work that develops hierarchical linguistic trees. Leveraging the developed trees, the proposed neural network can learn the linguistic-style-aware representations of news articles by explicitly capturing the writers' usage of words and the linguistically meaningful ways in which these words are structured as phrases, sentences, paragraphs, and, ultimately, the documents. We conduct extensive experiments on real-world datasets with well-established and state-of-the-art approaches, which demonstrate the effectiveness of the proposed neural network in predicting fake news. Additionally, we examine the differential linguistic style of fake news and the truth and identify statistically significant and consistent patterns of fake news across datasets.
The rest of this paper is organized as follows. We review related work in Section II. We introduce the proposed method in Section III and detail the experiments designed and conducted to evaluate the proposed method in Section IV. We conclude in Section V.
## II Related Work
Fake news prediction methods can be categorized as content-based or propagation-based depending on whether the method focuses on investigating news content or its propagation on social media.
Propagation-based methods can utilize rich auxiliary social media information, including news spreaders' intent [2] or profiles [10], relationships between news spreaders and their posts [11], social feedback [12, 13, 14], social networks [15], and propagation paths [16, 17]. Nevertheless, they can only be deployed after news articles published on news outlets have been disseminated on social media. In comparison, content-based methods have the primary advantage of predicting fake news early when news articles have been published online but have not been spread [5]. Additionally, an effective content-based method can be easily extended further by incorporating social context information. With this consideration, we focus on analyzing news content to predict fake news and review related work on content-based fake news prediction approaches.
As news articles are mainly text, content-based methods start by manually extracting linguistic features and predicting fake news using common classifiers such as SVM [4]. Such linguistic features have been related to lexicons (e.g., bag-of-words) [5], POSs [5, 6], context-free grammars (production rules) [4, 5], RRs [5, 8], readability [6, 18], and \(n\)-grams that preserve the sequences of words or POSs [7]. Though news features can be easily interpreted within this machine learning framework, features cannot be automatically extracted, which can significantly impact the prediction performance; hence, the performance heavily relies on experts' involvement and experience. More importantly, as detailed in Section I, it is difficult to capture the global structure of news text (language) at any of the syntactic and discourse levels with these hand-crafted features. Compared to these methods, the proposed neural network can learn the features of news articles, which capture the global and hierarchical structures that news linguistic styles carry.
Recently, neural networks (e.g., Bi-LSTM [7, 19] and Text-CNN [20]) have been frequently employed to identify fake news. These models can learn the features of news text (sometimes, combined with other modalities in news content, such as images [20, 21]). These neural networks
Fig. 1: The hierarchical linguistic tree for the news piece “Why did the US in 2017 give $3.7m to the Wuhan Lab in China? Such grants were prohibited in 2014. Did President Obama grant an exception?” verified as a false statement by PolitiFact.\({}^{2}\)Blue nodes: RRs. Green nodes: POSs.
have focused on the sequentiality or locality of news text but not on its linguistic structure. In comparison, the proposed neural network explicitly catches this structure; it also captures text's sequentiality and locality, which will be detailed in Section III-B. We point out that the proposed neural network provides a fundamental approach to news text representation learning and thus can be easily extended for multimodal fake news prediction.
## III Methodology
We specify the proposed model in this section, which can be divided into three steps. For each news document, we first construct its hierarchical linguistic tree (see Section III-A), then extract its features via the proposed hierarchical recursive neural network that preserves the hierarchical linguistic tree information (see Section III-B), and finally predict it as fake news or the truth (see Section III-C). Figure 2 presents the framework overview.
### _Hierarchical Linguistic Tree Construction_
Given a news document \(D\), we first generate its hierarchical linguistic tree. The tree can explicitly present the order of words used in the document and how these words shape EDUs (meaningful phrases, sentences, or paragraphs) and further shape the entire document. An example is shown in Figure 1. Specifically, our first attempt is to obtain \(D\)'s discourse (rhetorical) structure, which identifies \(D\)'s EDUs and reveals how these EDUs recursively form the document \(D\). To this end, we first utilize Stanford CoreNLP [22] to segment \(D\) into EDUs. Then, we apply a modified transition-based system [23] to identify span (S) and nuclearity (N), based on which \(D\)'s rhetorical structure can be obtained without recognizing specific RRs (e.g., _elaboration_ in Figure 1). This semi-naked tree structure allows us to divide each RR node into within-sentence, across-sentence, and across-paragraph levels in terms of its left and right subtrees, extract structural features for each RR node, and ultimately adopt level-specific SVM classifiers [23] to predict the node attribute (i.e., the specific RR). This multi-stage approach outperforms the well-established one [24] in our experiments, where [24] works as an integrated system. Finally, we employ a state-of-the-art discriminative constituency parser for each identified EDU of the document \(D\) to obtain its syntactic structure [25]. The parser consists of a self-attentive encoder [25] and a chart decoder [26] (see Figure 2 for the detailed architecture). The syntactic structure reveals how the EDU's words recursively form the entire EDU.
### _Feature Extraction via Hierarchical Recursive Neural Network_
We propose the hierarchical recursive neural network to extract features of news documents, whose architecture adaptively maintains the global structure of the hierarchical linguistic trees of news documents.
Fig. 2: Framework overview, which contains a top-bottom building process of hierarchical linguistic trees, a bottom-top feature extraction process using the proposed hierarchical recursive neural network, and a classifier to predict fake news. The neural network’s architecture adaptively preserves various news documents’ global and hierarchical linguistic tree structures. The Bi-GRU aggregator catches text’s local sequentiality that is linguistically valuable and often short (explained in Section III-B), which is more effective than self-attention here (see Section IV-B for details).
Given a news document \(D\), its feature extraction using the hierarchical recursive neural network is bottom-top. We first encode \(D\)'s words, which are the leaf nodes of \(D\)'s hierarchical linguistic tree. Then, we aggregate the obtained embeddings of the words that attach to the same parent node, forming the embedding of their parent node. The hierarchical recursive neural network will repeat such aggregations from the lower (syntactic) level to the upper (discourse) level until the document \(D\) as the tree's root node is embedded.
Hence, the question arises of how the aggregator performs on a recurring fundamental component (i.e., a depth-one parent-children structure). Note that for each parent node in a hierarchical linguistic tree, its children contain _local_ and _sequential_ information of the corresponding news document. The information is local because it reveals partial information about the overall news content that is linguistically valuable. It is sequential as we keep the order of words of the news document. Naturally, recurrent neural networks can be adopted as the aggregator to catch the sequentiality of children sharing the same parent. The information locality further relieves the pressure on recurrent neural networks to keep the dependency of long entities since the number of children for a parent node is no more than the EDU length and essentially less than the document length. As seen from Figure 1, the maximum length that recurrent neural networks require to process is four, whereas the document has 29 tokens.
With the above considerations, we develop Bi-GRU (bidirectional gated recurrent unit), one of the well-established recurrent neural networks [27], to aggregate the embeddings of all the child nodes to represent their parent node. We also empirically compare Bi-GRU with multi-head self-attention that has remarkably performed in many tasks; Bi-GRU is more effective for the proposed model. Formally, the embedding of a parent node is computed as
\[\textbf{x}_{p}=\frac{\sum_{c\in\mathcal{C}_{p}}[\overrightarrow{GRU}\{ \textbf{x}_{c}\}\oplus\overleftarrow{GRU}\{\textbf{x}_{c}\}]}{|\mathcal{C}_{ p}|}, \tag{1}\]
where \(p\) denotes the parent node and \(\mathcal{C}_{p}\) is the set of child nodes of \(p\). Vectors \(\textbf{x}_{c}\in\mathbb{R}^{d}\) and \(\textbf{x}_{p}\in\mathbb{R}^{d}\) refer to the features of the child and parent node, respectively. The operator \(\oplus\) denotes concatenation. The GRU is formulated as follows:
\[\begin{array}{l}\textbf{r}_{i}=\sigma(\textbf{W}_{r}\textbf{x}_{i}+\textbf{U }_{r}\textbf{h}_{i-1}),\\ \textbf{z}_{i}=\sigma(\textbf{W}_{z}\textbf{x}_{i}+\textbf{U}_{z}\textbf{h}_{ i-1}),\\ \hat{\textbf{h}}_{i}=\tanh(\textbf{W}_{h}\textbf{x}_{i}+\textbf{U}_{h}(\textbf{h}_{i-1} \odot\textbf{r}_{i})),\\ \textbf{h}_{i}=(1-\textbf{z}_{i})\odot\textbf{h}_{i-1}+\textbf{z}_{i}\odot \hat{\textbf{h}}_{i},\end{array} \tag{2}\]
where \(\textbf{h}_{i}\in\mathbb{R}^{d/2}\) is the output hidden state of the \(i\)-th child, with \(\textbf{h}_{0}=\textbf{0}\). The symbol \(\odot\) denotes Hadamard product. Matrices \(\textbf{W}_{*}\in\mathbb{R}^{(d/2)\times d}\) and \(\textbf{U}_{*}\in\mathbb{R}^{(d/2)\times(d/2)}\) (\(*\in\{r,z,h\}\)) are learnable parameters. \(\textbf{r}_{i}\) and \(\textbf{z}_{i}\) are the reset gate and update gate, respectively. \(\sigma\) and \(\tanh\) are the sigmoid and hyperbolic tangent activation functions, respectively. In a nutshell, the above architecture first employs the Bi-GRU to capture "deep" sequential feature interactions of all the child features and then uses a mean pooling layer over all the hidden states to obtain the parent node's features.
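A minimal PyTorch sketch of this aggregator, matching Eq. (1) (Bi-GRU over the ordered children followed by mean pooling), is given below; the module name is ours, and the attribute keys in the usage example are illustrative.

```python
import torch
import torch.nn as nn

class BiGRUAggregator(nn.Module):
    """Embed a parent node from the ordered embeddings of its children, Eq. (1)."""
    def __init__(self, d):
        super().__init__()
        assert d % 2 == 0
        self.gru = nn.GRU(input_size=d, hidden_size=d // 2,
                          bidirectional=True, batch_first=True)

    def forward(self, children):                 # children: (num_children, d)
        h, _ = self.gru(children.unsqueeze(0))   # concatenated fwd/bwd states: (1, n, d)
        return h.squeeze(0).mean(dim=0)          # mean pooling over children -> (d,)

agg = BiGRUAggregator(d=100)
parent = agg(torch.randn(4, 100))                # a parent with 4 ordered children
print(parent.shape)                              # torch.Size([100])

# attribute-specific HERO keeps one aggregator per node attribute (POS or RR);
# the attribute names here are illustrative
aggs = nn.ModuleDict({attr: BiGRUAggregator(100) for attr in ("NP", "VP", "NS-elaboration")})
```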
After determining the aggregation within each recurring fundamental component, we introduce three specific hierarchical recursive neural networks (HEROs):
* _Unified HERO_: The first hierarchical recursive neural network is the one with unified aggregators. In other words, all the Bi-GRUs in the neural network share the same set of \(\textbf{W}_{*}\) and \(\textbf{U}_{*}\) (\(*\in\{r,z,h\}\), see Equation (2)).
* _Level-specific HERO_: It is the hierarchical recursive neural network with level-specific aggregators. As detailed, the hierarchical linguistic tree presents both syntactic and rhetorical structures of news content, and the hierarchical recursive neural network preserves such structures. Hence, all the Bi-GRUs in a hierarchical recursive neural network can be grouped by the level (syntax or discourse) they belong to in the corresponding tree. We define \(L(v)\) as the function that maps a certain vertex in the hierarchical linguistic tree to its linguistic level (i.e., \(L(v)\in\{\text{syntax},\text{discourse}\}\)). Then, Equation (1) can be reformulated as \(\textbf{x}_{p}=\frac{1}{|\mathcal{C}_{p}|}\sum_{c\in\mathcal{C}_{p}}[\overrightarrow{GRU}_{L(c)}\{\textbf{x}_{c}\}\oplus\overleftarrow{GRU}_{L(c)}\{\textbf{x}_{c}\}]\).
* _Attribute-specific HERO_: It stands for the hierarchical recursive neural network with attribute-specific aggregators. In other words, we categorize the hierarchical recursive neural network's recurring fundamental components according to the attributes of their parent nodes in the corresponding hierarchical linguistic tree, which can be various POSs and RRs. We deploy the same Bi-GRU for the components within each category and the different Bi-GRU for the components falling into different categories. Mathematically, we define \(A(v)\) as the function that maps a certain vertex in the hierarchical linguistic tree to its attributes. Assume there are \(m\) different POSs and \(n\) RRs, we have \(A(v)\in\{POS_{i},RR_{j}:i=1,2,\cdots,m,j=1,2,\cdots,n\}\). The root vertex would be assigned with a RR in discourse parsing. For EDU vertices, they would not be assigned with any RRs in discourse parsing but with some POSs in constituency parsing. In this way, Equation (1) is rewritten as \(\textbf{x}_{p}=\frac{1}{|\mathcal{C}_{p}|}\sum_{c\in\mathcal{C}_{p}}[ \overrightarrow{GRU}_{A(p)}\{\textbf{x}_{c}\}\oplus\overleftarrow{GRU}_{A( p)}\{\textbf{x}_{c}\}]\).
### _Fake News Prediction_
We add a softmax classifier on top of the proposed hierarchical recursive neural network to predict the document \(D\) as fake news or the truth. Let \(\textbf{h}_{D}\) denote \(D\)'s features extracted via the proposed hierarchical recursive neural network. The softmax function maps \(\textbf{h}_{D}\) to the probability of \(D\) being a fake news document by \(p_{D}=\text{Softmax}(\textbf{W}\textbf{h}_{D}+\textbf{b})\), where \(\textbf{W}\) and \(\textbf{b}\) are learnable parameters.
To learn the parameters \(\Theta=\{\textbf{W}_{*},\textbf{U}_{*},\textbf{W},\textbf{b}\}\) within the neural network and classifier, we employ cross-entropy to calculate the classification loss in the model training process. Assume we have \(q\) verified news documents \(\mathcal{D}=\{D_{i}\}_{i=1}^{q}\) with the ground-truth labels \(\mathcal{Y}=\{y_{i}:y_{i}\in\{0,1\}\}_{i=1}^{q}\) (\(y_{i}=0\) for true news, and \(y_{i}=1\) for fake news), the loss is computed
by \(L=-\frac{1}{q}\sum_{i=1}^{q}[y_{D_{i}}\log p_{D_{i}}+(1-y_{D_{i}})\log(1-p_{D_{i}})]\). Based on it, the parameter set \(\Theta\) is estimated by \(\tilde{\Theta}=\arg\min_{\Theta}L\).
## IV Empirical Evaluation
We aim to evaluate the proposed method by answering the following three questions.
1. How effective is the proposed model in fake news prediction compared to state-of-the-art approaches?
2. Is the hierarchical linguistic structure of news documents essential in representing their linguistic styles?
3. What characterizes the linguistic style of fake news as distinguishable from the truth?
To that end, we first detail our experimental setup in Section IV-A and then compare the proposed unified, level-specific, and attribute-specific HEROs in predicting fake news (see Section IV-B). Subsequently, we compare the proposed model with the baselines to verify its effectiveness in predicting fake news (to answer RQ1, see Section IV-C) and conduct the ablation study to assess the importance of our developed hierarchical linguistic trees (to answer RQ2, see Section IV-D). Finally, we characterize the linguistic style of fake news as distinguishable from the truth by doing quantitative and comparative analyses (to answer RQ3, see Section IV-E).
### _Experimental Setup_
We first introduce the datasets used for evaluation (see Section IV-A1), followed by the baselines for comparison (see Section IV-A2). Finally, we detail our implementation details in Section IV-A3.
#### Iv-A1 Datasets
We conduct experiments on two benchmark datasets in fake news prediction: Recovery [9] and MM-COVID [28]. Both datasets contain labeled news documents. The difference is that the news documents collected in Recovery are articles (long text, often including multiple paragraphs), whereas those in MM-COVID are statements (short text, often formed by one or two sentences). We present the detailed statistics of the two datasets in Table I.
#### Iv-A2 Baselines
We involve the following well-received and state-of-the-art methods as baselines in our experiments.
* _HCLF_[5]: HCLF stands for hand-crafted linguistic feature. Each news document's HCLFs include the frequencies of words (i.e., bag-of-word features), POSs, RRs, and production rules. The extracted features are used to predict fake news by employing well-established classifiers. Here we examine a comprehensive list of classifiers (logistic regression, SVM, \(k\)-nearest neighbors, decision trees, naive Bayes, random forest, and AdaBoost) and select the one performing best.
* _EANN_[20]: The event adversarial neural network contains three components: feature extraction by Text-CNN (for text) and VGG-19 (for images), event discrimination to learn event-invariant features of news content, and fake news prediction. We exclude the visual features for a fair comparison.
* _HAN_[29]: HAN exploits attention-GRU for news classification. It captures the hierarchical sequence of documents; i.e., each document is a sequence of its sentences, and each of its sentences is a sequence of words.
* _DRNN_[30]: DRNN is a discourse-structure-aware neural network, which focuses on the tree with rhetorical relationships as edge attributes and leverages an attention mechanism for news classification. In other words, DRNN differs from HERO in the aggregation rule. Compared to DRNN's tree, the hierarchical linguistic tree integrates syntax-level structures and has RRs as nodes and non-attributed edges at the discourse level. DRNN is developed to categorize news documents with more than one elementary discourse unit; otherwise, it is reduced to Bi-LSTM.
* _Text-GCN_[31]: The approach develops the graph convolutional neural network for news classification. The graph investigates the co-occurrence relationship among news documents and the words within the documents.
* _Transformer_[32]: It is a deep neural network model with a self-attention-based encoder-decoder architecture, which has performed excellently in diverse natural language processing tasks. Here, we consider Transformer's encoder (applicable for classification tasks) as a baseline to predict fake news, in a non-pretrained version rather than pretrained models (e.g., BERT), for a fair comparison, as pretrained Transformers have learned from large-scale external resources.
#### Iv-A3 Implementation Details
We randomly divide each dataset into 0.7:0.1:0.2 proportions for model training, validation, and testing. Macro-F1, micro-F1, and AUC are used to evaluate the performance of methods in news classification. The discourse parser is pretrained using RST-DT [23], and the constituency parser is pretrained using the Penn Treebank [25]. For the neural-network-based models, we uniformly utilize the pretrained GloVe [33] to obtain semantic-aware embeddings of words, with 100 as the embedding dimension. The hidden dimension within neural networks is set as 100, correspondingly. We deploy Adam optimizer to learn parameters, with 50 as the maximum number of epochs. We perform a grid search over the learning rate \(\in\{0.1,0.01,0.001,0.0001\}\) with validation data. In the end, 0.0001 performs best for our models and most of the baselines other than Transformer (0.001) and Text-GCN (0.01). All the experiments of the neural networks are implemented with PyTorch and are conducted on an NVIDIA Quadro RTX 6000 GPU (\(24\) GB memory), Intel(R) Xeon(R) Gold 6248R CPU (\(3.00\) GHz), and with \(64\) GB of RAM. For
\begin{table}
\begin{tabular}{l r r} \hline \hline & **Recovery** & **MM-COVID** \\ \hline
**\# news documents** & 2,029 & 3,536 \\
**- true news** & 1,364 & 1,444 \\
**- fake news** & 665 & 2,092 \\
**Avg. \# words per EDU** & 24 & 17 \\
**Avg. \# EDUs per document** & 38 & 2 \\
**Avg. \# words per document** & 841 & 16 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Data statistics.
HCLFs, classifiers are used with the default hyperparameters presented in the scikit-learn library. \(Z\)-score normalization is applied for the feature matrix to enhance the classification performance.
### _Determining the Best HERO_
We compare the performance of the proposed neural networks with unified, level-specific, and attribute-specific Bi-GRUs in predicting fake news. Table II presents the results. The results indicate that with Recovery data, the performance ranking is attribute-specific HERO \(>\) level-specific HERO \(>\) unified HERO. Specifically, attribute-specific HERO correctly predicts news as fake or true with 0.85 macro-F1 and 0.87 micro-F1 and AUC, outperforming unified HERO by \(\sim\)4% and level-specific HERO by \(\sim\)3%. With MM-COVID data, the performance ranking is attribute-specific HERO \(\approx\) unified HERO \(>\) level-specific HERO. Attribute-specific and unified HEROs achieve \(\sim\)0.89-0.90% in macro-F1, micro-F1, and AUC, outperforming level-specific HERO by \(\sim\)1%. In conclusion, _attribute-specific HERO_ performs best in classifying long articles and short statements as fake news or the truth. This result demonstrates the importance of the node attributes (POSs or RRs) in developed hierarchical linguistic trees.
Additionally, we compare Bi-GRU and self-attention (#heads=10) as aggregators in the proposed hierarchical recursive neural network for fake news prediction. The results indicate that Bi-GRU performs better than self-attention by at least 1% in AUC on both datasets.
### _Comparing HERO with Baselines_
We compare the proposed model with the baselines in predicting fake news. The results presented in Table III reveal that the proposed model can generally outperform the baselines. Specifically, with Recovery data, the proposed model has an AUC score approaching 0.87, outperforming HAN by more than 2%, Text-GCN by more than 3%, Transformer by more than 5%, EANN by more than 7%, HCLF by 12%, and DRNN by 17%. With MM-COVID data, the proposed model has an AUC score approaching 0.90, outperforming EANN by more than 6%, DRNN and HAN by \(\sim\)5%, Text-GCN and Transformer by \(\sim\)8-9%, and HCLF by more than 30%. From the table, we also observe that the proposed model outperforms EANN by 6-7% in macro-F1 and AUC but underperforms it by \(\sim\)3% in micro-F1 on MM-COVID. This result suggests that EANN tends to predict given news statements as the major class.
### _Ablation Study_
We compare the proposed model, HERO, which contains hierarchical linguistic (syntax- and discourse-level) structures with its following variants.
* _HERO\(\backslash\)Dis_: It stands for the variant of HERO with only syntax-level structures. In this variant, the embedding of a news document is obtained by averaging its embeddings of EDUs.
* _HERO\(\backslash\)Syn_: It stands for the variant of HERO with only discourse-level structures. In this variant, the embedding of each EDU of a news document is obtained by averaging its words.
* _HERO\(\backslash\)(Syn+Dis)_: It stands for the variant of HERO with no structures; the embedding of a news document is directly obtained by averaging its word embeddings.
\begin{table}
\begin{tabular}{r c c c c c c} \hline \hline & \multicolumn{3}{c}{**Recovery**} & \multicolumn{3}{c}{**MM-COVID**} \\ \cline{2-7}
**HERO** & **MAF1** & **MIF1** & **AUC** & **MAF1** & **MIF1** & **AUC** \\ \hline
**Unified** & 0.801 & 0.822 & 0.827 & 0.889 & 0.891 & **0.899** \\
**Level-specific** & 0.817 & 0.838 & 0.841 & 0.878 & 0.878 & 0.892 \\
**Attribute-specific** & **0.850** & **0.869** & **0.866** & **0.894** & **0.896** & 0.896 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Performance of unified, level-specific, and attribute-specific HEROs in fake news prediction. Attribute-specific HERO performs best, demonstrating that the node attributes (POSs or RRs) in hierarchical linguistic trees are essential. MAF1: Macro-F1. MIF1: Micro-F1
Fig. 3: Ablation study. (a) The proposed HERO outperforms HERO\(\backslash\)Dis by 1% in AUC for unified HERO and by 3% for level- and attribute-specific HEROs. It outperforms HERO\(\backslash\)Syn by 7–9% and HERO\(\backslash\)(Syn+Dis) by 30%+ in AUC. (b) HERO performs similarly to HERO\(\backslash\)Dis as MM-COVID contains short statements having minimal discourse structures (i.e., syntax-level structures dominate). It outperforms HERO\(\backslash\)Syn and HERO\(\backslash\)(Syn+Dis) by 10%+ in AUC. Thus, syntax- and discourse-level structures are both essential.
\begin{table}
\begin{tabular}{r c c c c c c} \hline \hline & \multicolumn{3}{c}{**Recovery**} & \multicolumn{3}{c}{**MM-COVID**} \\ \cline{2-7} & **MAF1** & **MIF1** & **AUC** & **MAF1** & **MIF1** & **AUC** \\ \hline
**HCLF** & 0.752 & 0.801 & 0.746 & 0.566 & 0.624 & 0.577 \\
**Transformer** & 0.774 & 0.793 & 0.810 & 0.804 & 0.809 & 0.806 \\
**Text-GCN** & 0.841 & 0.869 & 0.835 & 0.826 & 0.836 & 0.817 \\
**EANN** & 0.811 & 0.864 & 0.795 & 0.825 & **0.926** & 0.833 \\
**HAN** & 0.847 & 0.869 & 0.844 & 0.840 & 0.856 & 0.846 \\
**DRNN** & 0.711 & 0.778 & 0.698 & 0.845 & 0.846 & 0.848 \\
**HERO** & **0.850** & **0.869** & **0.866** & **0.894** & 0.896 & **0.896** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Performance of the proposed model, HERO, and baselines in fake news prediction. HERO outperforms the baselines by 2–17% in AUC on Recovery and by 3–30% on MM-COVID. MAF1: Macro-F1. MIF1: Micro-F1
The results are visualized in Figure 3. We observe that with Recovery data, the proposed HERO outperforms HERO\(\backslash\)Dis by 1% in AUC for unified HERO and by 3% for level- and attribute-specific HEROs. It outperforms HERO\(\backslash\)Syn by 7-9% and notably outperforms HERO\(\backslash\)(Syn+Dis) by above 30% in AUC. With MM-COVID data, the proposed HERO performs similarly to HERO\(\backslash\)Dis since the statements presented in MM-COVID are short with two EDUs on average and hence have minimal discourse structures (i.e., syntax-level structures dominate hierarchical linguistic structures). Meanwhile, it outperforms HERO\(\backslash\)Syn and HERO\(\backslash\)(Syn+Dis) by more than 10% in AUC. Therefore, we conclude that the proposed HERO is better than its variants, demonstrating the importance of hierarchical linguistic structures.
### _Characterizing Linguistic Style of Fake News_
Fake news has been theoretically identified with a linguistic style distinguishable from the truth [3]. This experiment aims to specify this distinctive linguistic style of fake news. We compare the hierarchical linguistic trees generated by fake news and the truth, which we develop to represent the linguistic style of news documents systematically. The comparison covers (i) the children of parent nodes, (ii) the attributes of nodes, and (iii) the size, width, and depth of trees.
Children of Parent NodesWe compare fake news with real news in the average and the maximum number of children of parent nodes in hierarchical linguistic trees. Results that are statistically significant with a \(p\)-value \(<0.001\) (using t-test, unless otherwise specified) are in Figure 3(a). We observe that the hierarchical linguistic trees of fake news have more child nodes for each parent node than true news on average. News here indicates long news articles in the Recovery dataset and short statements in the MM-COVID dataset.
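A minimal sketch of the significance test used throughout this section is given below. The per-document statistics are drawn from placeholder distributions (the real values come from the extracted trees), and the Welch variant of the two-sample t-test is an assumption on our part; only the sample sizes follow Table I.

```python
# Hedged sketch of the two-sample t-test comparing a tree statistic of fake
# vs. true news; the arrays below are placeholders for the extracted values.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
avg_children_fake = rng.normal(loc=2.6, scale=0.4, size=665)   # 665 fake Recovery documents
avg_children_true = rng.normal(loc=2.4, scale=0.4, size=1364)  # 1,364 true Recovery documents

t_stat, p_value = ttest_ind(avg_children_fake, avg_children_true, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")  # significance is reported at p < 0.001
```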
Attributes of Nodes. Considering the nodes within a hierarchical linguistic tree can indicate the document (as the root), RRs, EDUs, POSs, and words (as the leaf nodes), we first compare fake news with the truth in the proportion of RRs, EDUs, POSs, and words, respectively. The reason for computing their proportions rather than the raw counts is to eliminate the impact of the size of trees (discussed in the next paragraph). We observe that compared to true news, the hierarchical linguistic trees of fake news have a significantly smaller proportion of EDU nodes (\(p\)-value \(<0.05\)) and POS nodes indicating NNS (noun in the plural, \(p\)-value \(<0.001\)) but have a significantly larger proportion of nodes indicating specific POSs such as IN (preposition or subordinating conjunction), PP (prepositional phrase), and DT (determiner, \(p\)-value \(<0.001\)). We illustrate the results in Figure 4(b).
Size, Width, Depth of Trees. We compare fake news with the truth in the size, maximum width, and depth of hierarchical linguistic trees. Since hierarchical linguistic trees contain two-level structures, we also compare fake and true news in the size, maximum width, and depth of discourse- and syntactic-level trees.
We observe that the syntactic-level tree of fake news is generally greater with more nodes, broader, and deeper than that of true news. In particular, the syntactic-level tree of fake news has more leaf nodes than that of true news, which reveals that fake news often has longer EDUs with more words than true news. The above conclusions hold for long news articles (using Recovery data) and short statements (with MM-COVID data) with a \(p\)-value \(<0.01\); news files in both datasets are rich in syntactic information. Figure 4(c) (the upper panels) presents
Fig. 4: Hierarchical linguistic trees of fake and true news. Orange solid line: Median. Green dashed line: Mean.
the details. Moreover, we observe that fake news articles generate smaller and narrower discourse-level trees that lead to smaller and narrower hierarchical linguistic trees than true news articles (\(p\)-value \(<0.01\), see the bottom panels in Figure 4(c)). We point out that the discourse structures of short statements are plain with two EDUs on average and hence have trivial impacts on the shape of the entire hierarchical linguistic structures. Lastly, we point out that comparing trees' maximum and average widths leads to the same conclusions. Comparing the longest (i.e., depth) and the average distance between the root and leaves also leads to the same conclusions.
## V Conclusion
We propose a psychology-informed neural network to predict fake news. The proposed neural network learns the linguistic style of news documents represented by hierarchical linguistic trees, which explicitly capture the writers' usage of words and the linguistically meaningful ways these words are structured as phrases, sentences, paragraphs, and, ultimately, documents. We conduct experiments on public real-world datasets. The results demonstrate the effectiveness of the proposed neural network, with 0.87-0.90 AUC scores, and the importance of the developed hierarchical linguistic tree. The proposed neural network can outperform the previous (recurrent, convolutional, graph, and self-attentive) neural networks and the feature-engineering-based approach in predicting news, whether long articles or short statements, as fake news or the truth. We observe from the data that the hierarchical linguistic trees of fake news can significantly differ from those of true news in the children of parent nodes, the attributes of nodes, and the size, width, and depth of the trees. In our future work, we aim to enhance the proposed model's performance with multimodal and social-context information.
|
2307.13511 | Estimating Entanglement Entropy via Variational Quantum Circuits with
Classical Neural Networks | Entropy plays a crucial role in both physics and information science,
encompassing classical and quantum domains. In this work, we present the
Quantum Neural Entropy Estimator (QNEE), a novel approach that combines
classical neural network (NN) with variational quantum circuits to estimate the
von Neumann and Renyi entropies of a quantum state. QNEE provides accurate
estimates of entropy while also yielding the eigenvalues and eigenstates of the
input density matrix. Leveraging the capabilities of classical NN, QNEE can
classify different phases of quantum systems that accompany the changes of
entanglement entropy. Our numerical simulation demonstrates the effectiveness
of QNEE by applying it to the 1D XXZ Heisenberg model. In particular, QNEE
exhibits high sensitivity in estimating entanglement entropy near the phase
transition point. We expect that QNEE will serve as a valuable tool for quantum
entropy estimation and phase classification. | Sangyun Lee, Hyukjoon Kwon, Jae Sung Lee | 2023-07-25T14:04:21Z | http://arxiv.org/abs/2307.13511v2 | # Estimating Entanglement Entropy via Variational Quantum Circuits with Classical Neural Networks
###### Abstract
Entropy plays a crucial role in both physics and information science, encompassing classical and quantum domains. In this work, we present the Quantum Neural Entropy Estimator (QNEE), a novel approach that combines classical neural network (NN) with variational quantum circuits to estimate the von Neumann and Renyi entropies of a quantum state. QNEE provides accurate estimates of entropy while also yielding the eigenvalues and eigenstates of the input density matrix. Leveraging the capabilities of classical NN, QNEE can classify different phases of quantum systems that accompany the changes of entanglement entropy. Our numerical simulation demonstrates the effectiveness of QNEE by applying it to the 1D XXZ Heisenberg model. In particular, QNEE exhibits high sensitivity in estimating entanglement entropy near the phase transition point. We expect that QNEE will serve as a valuable tool for quantum entropy estimation and phase classification.
## I Introduction
Entropy, originally introduced in classical thermodynamics and statistical physics, has evolved into a fundamental concept used to quantify information across various scientific disciplines, ranging from computer science to classical and quantum information theory. In particular, the Shannon entropy establishes a profound connection between thermodynamics and information theory, which is further extended to quantum systems through the introduction of the von Neumann entropy. This extension has paved the way for the exploration of intricate relationships between information and physical reality, as demonstrated by the discovery of information fluctuation theorems [1; 2] and Landauer's principle [3; 4; 5; 6] in both classical and quantum domains. Moreover, in the realm of quantum information theory, entropy has been employed to unveil non-classical characteristics inherent to quantum systems. Specifically, the entanglement entropy captures non-classical correlations between distinct parties of a quantum system and plays a pivotal role in distinguishing different phases of quantum materials [7; 8; 9; 10].
In spite of its recognized importance as a valuable theoretical tool, the practical determination of entropy values in physical systems presents substantial challenges. The difficulty arises primarily from the requirement of accurately evaluating the probability distribution, which becomes impractical in high-dimensional systems [11; 12]. In the realm of quantum systems, estimating entropy becomes even more formidable due to the dependence on measurement choices. Specifically, the von Neumann entropy of a quantum state is determined as the minimum entropy among all potential measurement bases, which is achieved by taking the state's eigenbasis. This additional layer of complexity introduces difficulties in obtaining the eigendecomposition of a quantum density matrix, with computational costs exponentially increasing with the size of the quantum system. While several indirect methods have been proposed [13; 14; 15; 16] and experimentally demonstrated [17; 18; 19; 20] to estimate or bound entanglement of quantum states, limitations in accuracy persist. Recently, variational quantum algorithms (VQA) [21], such as the variational quantum state diagonalization (VQSD) [22] and the variational quantum state eigensolver (VQSE) [23] have emerged as potential approaches for estimating quantum state eigenvalues. However, it still remains uncertain whether these methods can efficiently estimate quantum entropy.
The challenging nature associated with quantum entropy estimation prompts the pursuit of a novel and efficient algorithm that can be effectively integrated with a variational quantum circuit, leading to improved performance. To this end, one potential candidate is classical neural networks (NNs), which have demonstrated exceptional capabilities in optimizing objective functions. Notably, this machine-learning technique has recently found application in estimating entropy production for classical thermodynamic systems, named the neural estimator of entropy production (NEEP) [24], by optimizing the objective function defined by the Donsker-Varadhan inequality [25]. This utilization of NNs in classical thermodynamics provides inspiration for their potential application in the context of quantum entropy estimation. By leveraging the strengths of classical NNs in optimizing von Neumann entropy, it would be possible to enhance the accuracy and efficiency of quantum entropy estimation, paving the way for further advancements in the field.
In this work, we present a quantum-classical hybrid algorithm, Quantum Neural Entropy Estimator (QNEE), for estimating the von Neumann and Renyi entropies of a quantum state. Our method capitalizes on the efficient entropy estimation capabilities of NNs from the outcomes of parametrized quantum circuits while utilizing variational parameters to align the quantum circuit in the correct measurement basis. We highlight that our methodology can be readily extended for any quantum circuit ansatz as the classical NN is constructed independent of the variational quantum circuit's structure.
Classical NN using the cost function based on the Donsker-Varadhan inequality has some useful properties, such as robustness against small-size data and rare events, which have already been verified in classical stochastic processes [12; 24]. Another advantage of this type of cost function is that it can provide a reliable upper bound of the entropy even if the measurement basis is not fully aligned correctly. This opens a possible application of the proposed approach as a phase classifier of quantum systems even without the accurate estimation of entanglement value.
To showcase the effectiveness of our proposed protocol, we apply it to estimate the ground state entanglement entropy of the Heisenberg XXZ model and to distinguish its phase relying on the external magnetic field. Our numerical results demonstrate that QNEE exhibits good performance in estimating entanglement entropy as well as the subsystem's eigenvalues. We also provide a detailed comparison between QNEE and the existing scheme of VQSE.
This paper is organized as follows. In Sec. II, we construct QNEE by combining a classical NN with a variational quantum circuit. In Sec. III, we apply QNEE to estimate the ground state entanglement entropy of the 1D XXZ Heisenberg model and its performance as a phase classifier. We conclude the paper in Sec. IV.
**Concurrent work:** In the course of finalizing this manuscript, we became aware of related research that has emerged in arXiv [26; 27]. It is essential to highlight that our work draws inspiration from the authors' earlier contributions [12; 24; 28] as its quantum extension. Therefore, our cost function for NN is distinct from that employed in the concurrent works. Furthermore, we must emphasize that the primary focus of our work lies in the application of phase classification to a physical Hamiltonian. In contrast, the concurrent works under examination delve into the investigation of random quantum states as illustrative examples. These distinctions in scope and approach further underscore the unique contributions and significance of our study.
## II Quantum entropy estimator
### Cost function of QNEE
In this section, we establish QNEE's cost function to estimate the quantum entropy of a given density matrix \(\hat{\rho}\), describing the state of a system. The cost function is based on the following Gibbs variational principle:
\[S(\hat{\rho})\equiv-\mathrm{tr}\{\hat{\rho}\ln\hat{\rho}\}\leq- \langle\hat{O}\rangle_{\hat{\rho}}+\ln\mathrm{tr}\{e^{\hat{O}}\} \tag{1}\]
for any Hermitian operator \(\hat{O}\), whose expectation value is defined as \(\langle\hat{O}\rangle_{\hat{\rho}}=\mathrm{tr}\{\hat{O}\hat{\rho}\}\). We present a simple derivation of Eq. (1) in Appendix A. Equation (1) was utilized as a quantum entropy estimator in Ref. [27]. However, as explained in Sec. II.3 and Appendix A, a cost function including a logarithmic function can be biased and malfunction for large data if additional manipulation is not employed. To avoid this complication, we further linearize Eq. (1) using the inequality \(\ln\mathrm{tr}\{e^{\hat{O}}\}\leq\mathrm{tr}\{e^{\hat{O}}\}-1\) as
\[S(\hat{\rho})\leq-\langle\hat{O}\rangle_{\hat{\rho}}+\mathrm{tr} \{e^{\hat{O}}\}-1. \tag{2}\]
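The bound in Eq. (2) can be checked numerically on a small example. The following sketch, assuming only a randomly generated density matrix, verifies that the right-hand side upper bounds \(S(\hat{\rho})\) for an arbitrary Hermitian \(\hat{O}\) and saturates at \(\hat{O}^{*}=\ln\hat{\rho}\):

```python
# Numerical sanity check of the bound in Eq. (2) on a random 4-dimensional
# density matrix; this is an illustration, not part of the QNEE pipeline.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho = rho / np.trace(rho).real                 # random density matrix

def bound(O):
    """Right-hand side of Eq. (2) for a Hermitian operator O."""
    return (-np.trace(O @ rho) + np.trace(expm(O)) - 1).real

S = -np.trace(rho @ logm(rho)).real            # exact von Neumann entropy
O_rand = (A + A.conj().T) / 2                  # arbitrary Hermitian operator
print(S <= bound(O_rand) + 1e-9)               # bound holds: True
print(np.isclose(S, bound(logm(rho))))         # saturated at O* = ln(rho): True
```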
Equation (2) is usually a more appropriate cost function for the stochastic (mini-batch) gradient method [29], which is typically used for large-system NN optimization [30]. Both bounds (1) and (2) imply that we can indirectly estimate the von Neumann entropy using the operator \(\hat{O}\)'s expectation value and its eigenvalues. However, the operator saturating both bounds in Eqs. (1) and (2) is given by \(\hat{O}^{*}=\ln\hat{\rho}\). In other words, the exact estimation using these bounds requires the information of \(\hat{\rho}\). Nevertheless, an optimization method enables us to estimate the von Neumann entropy without accessing full information of the density matrix \(\hat{\rho}\). For classical stochastic processes, various optimization methods have been employed to estimate the entropy production without detailed information [12; 31; 32; 33; 34]. Following this strategy, we decompose the Hermitian operator as \(\hat{O}=\hat{V}^{\dagger}\hat{D}\hat{V}\) with a diagonal matrix \(\hat{D}\) and a unitary operator \(\hat{V}\). Substituting this decomposition into Eq. (2) yields

\[S(\hat{\rho})\leq-\langle\hat{D}\rangle_{\hat{\rho}_{\hat{V}}}+\mathrm{tr}\{e^{\hat{D}}\}-1, \tag{3}\]
where \(\hat{\rho}_{\hat{V}}=\hat{V}\hat{\rho}\hat{V}^{\dagger}\). The optimization over \(\hat{O}\) then becomes finding the optimal diagonal matrix \(\hat{D}\) and the unitary operator \(\hat{V}\). By taking \(\hat{D}(\Theta_{N})=\sum_{i=1}^{2^{n}}h(s_{i};\Theta_{N})|s_{i}\rangle\langle s_{ i}|\) in the computational basis for \(n\) qubits \(\{|s_{i}\rangle\}=\{|00\cdots 0\rangle,\cdots,|11\cdots 1\rangle\}\), we encode \(h(s_{i};\Theta_{N})\) into the output function of a classical NN with a weight set \(\Theta_{N}\). We also encode the unitary operator \(\hat{V}(\Theta_{Q})\) into a variational quantum circuit parameterized by a vector \(\Theta_{Q}\). The QNEE's cost function \(C(\Theta_{N},\Theta_{Q})\) is then defined as
\[S(\hat{\rho})\leq-\sum_{i=1}^{2^{n}}h(s_{i};\Theta_{N})P_{\hat{V}}(s_{i}; \Theta_{Q})+\sum_{i=1}^{2^{n}}e^{h(s_{i};\Theta_{N})}-1\equiv C(\Theta_{N}, \Theta_{Q}), \tag{4}\]
where \(P_{\hat{V}}(s_{i};\Theta_{Q})=\langle s_{i}|\hat{V}(\Theta_{Q})\hat{\rho}\hat {V}^{\dagger}(\Theta_{Q})|s_{i}\rangle\) is the probability of the quantum circuit parameterized by \(\Theta_{Q}\) to output the string \(s_{i}\). Here we define \(\hat{V}_{\rm D}\) as the unitary matrix rendering \(\hat{\rho}\) diagonalized as \(\rho_{\hat{V}_{\rm D}}=\hat{V}_{\rm D}\hat{\rho}\hat{V}_{\rm D}^{\dagger}= \sum_{i}\lambda_{i}|s_{i}\rangle\langle s_{i}|\), where \(\lambda_{i}=\langle s_{i}|\hat{\rho}_{\hat{V}_{\rm D}}|s_{i}\rangle\) is the probability for measuring \(s_{i}\) from \(\hat{\rho}_{\hat{V}_{\rm D}}\). When \(\hat{V}(\Theta_{Q})=\hat{V}_{\rm D}\) and \(h(s_{i};\Theta_{N})=\ln\lambda_{i}\), the inequality in Eq. (4) is saturated, in which case \(P_{\hat{V}}(s_{i};\Theta_{Q})=\lambda_{i}\).
Equation (4) is reminiscent of the cost function of NEEP: a neural estimator for entropy production [24]. Both the cost functions of QNEE and NEEP can be derived from the \(f\)-divergence [33; 34]. Unsupervised learning with a variational cost function has been established as more effective than other methods for estimating the divergence of data, especially when the number of samples is inadequate or the data include rare events [12; 24]. Because the entropy can be considered as a divergence in which one distribution is the maximally mixed state, we expect QNEE to be effective when the given density matrix has a few dominant eigenvalues while the remaining ones are nearly zero, or when the number of given shots is insufficient, which is the usual situation for many-qubit systems. The von Neumann entropy is small in general under these conditions.
The \(f\)-divergence yields a diverse set of information measures and their variational functional forms. Consequently, this enables the formulation of cost functions for estimating various physical quantities. One prominent example of such quantities is the Renyi entropy, defined as
\[S_{\alpha}(\hat{\rho})=\frac{1}{1-\alpha}\ln\mathrm{tr}\{\hat{\rho}^{\alpha}\} \tag{5}\]
for \(\alpha>0\) and \(\alpha\neq 1\). The corresponding cost function of Eq. (5) is
\[\frac{e^{(1-\alpha)S_{\alpha}(\hat{\rho})}-1}{\alpha(1-\alpha)}\leq\frac{1}{1-\alpha}\sum_{i=1}^{2^{n}}P_{\hat{V}}(s_{i};\Theta_{Q})\left(e^{(\alpha-1)h(s_{i};\Theta_{N})}-1\right)+\frac{1}{\alpha}\left(\sum_{i=1}^{2^{n}}e^{\alpha h(s_{i};\Theta_{N})}-1\right)\equiv C_{\alpha}(\Theta_{N},\Theta_{Q}). \tag{6}\]
The detailed derivation of Eq. (6) from the \(f\)-divergence is presented in Appendix B. It is also straightforward to show that Eq. (6) is saturated under the same conditions as Eq. (4), that is, \(h(s_{i};\Theta_{N})\rightarrow\ln\lambda_{i}\) and \(P_{\hat{V}}(s_{i};\Theta_{Q})\rightarrow\lambda_{i}\). With the optimal parameters saturating the equality, the cost function can be easily converted into the Renyi entropy. Note that the cost function for the Renyi entropy used in Ref. [27] consists of two logarithmic terms, which have the same complication as the Gibbs variational principle (1) and are not straightforwardly applicable to the stochastic gradient method for handling large data without additional data manipulation. These two logarithmic terms cannot simply be linearized as done in Eq. (2) either, since it is not trivial to determine which is larger between the original form and the linearized one. Therefore, it is not easy to find a variational form appropriate for the stochastic gradient method from the cost function in Ref. [27]. In contrast, our cost function has no such problem and is thus more appropriate for handling larger quantum systems via NN.
Equation (6) reduces to the bound for the von Neumann entropy (4) in the limit of \(\alpha\to 1\). Moreover, Eq. (6), originally derived for estimating the Renyi entropy, can also be applied to estimate the von Neumann entropy even for general \(\alpha(\neq 1)\) cases. This is due to the fact that the NN output with optimal parameters \(\Theta_{N}^{*}\) for the Renyi entropy is the same as that of the von Neumann entropy (4), i.e., \(h(s_{i};\Theta_{N}^{*})=\ln\lambda_{i}\) for any \(\alpha\). This freedom in choosing \(\alpha\) opens another possibility for enhancing the NN performance. A similar technique was used to estimate stochastic entropy production more accurately by tuning \(\alpha\)[33].
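To make the conversion from the optimized cost \(C_{\alpha}\) to the Renyi entropy concrete, the following sketch inverts the left-hand side of Eq. (6) for a toy eigenvalue spectrum that stands in for a trained circuit and NN:

```python
# Sketch: recover the Renyi entropy from the optimized cost C_alpha of Eq. (6),
# using a toy eigenvalue spectrum in place of a trained circuit/NN.
import numpy as np

alpha = 2.0
lam = np.array([0.7, 0.2, 0.06, 0.04])         # toy eigenvalues of rho (2 qubits)

# At the optimum, h(s_i) = ln(lam_i) and P_V(s_i) = lam_i, so Eq. (6) saturates.
h = np.log(lam)
C_alpha = ((lam * (np.exp((alpha - 1) * h) - 1)).sum() / (1 - alpha)
           + (np.exp(alpha * h).sum() - 1) / alpha)

# Invert the left-hand side of Eq. (6) to read off the Renyi entropy.
S_alpha = np.log(1 + alpha * (1 - alpha) * C_alpha) / (1 - alpha)
print(np.isclose(S_alpha, np.log((lam ** alpha).sum()) / (1 - alpha)))  # True
```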
### Overview of the QNEE algorithm
Here we present an outline of QNEE's estimation procedure. Figure 1 shows the schematic diagram of the comprehensive QNEE algorithm. As depicted in Fig. 1(a), we prepare an \(n\)-qubit quantum system described by the density matrix \(\hat{\rho}\), for which the von Neumann entropy needs to be estimated. Additionally, we randomly select the initial values of \(\Theta_{N}\) and \(\Theta_{Q}\), which serve as the parameters for the NN and the quantum circuit, respectively. This setup
allows us to commence the repetitive procedure for updating \(\Theta_{N}\) and \(\Theta_{Q}\) until the cost function in Eq. (4) is optimized. One unit of this repetitive procedure, as illustrated in Figs. 1(b) and (c), can be divided into two processes in terms of the updated parameters: (i) the updating process of \(\Theta_{N}\) via the optimization of the cost function using the NN and (ii) the updating process of \(\Theta_{Q}\) obtained through the calculation of the gradient of the cost function.
The brief overview of this one-unit process is as follows. First, training and test data sets, consisting of \(N_{s}\) strings each, are produced from the quantum circuit with a given \(\Theta_{Q}\). With the training set, we perform unsupervised learning to update \(\Theta_{N}\) using the classical NN, along with minimizing the cost function. In the course of the \(\Theta_{N}\) updating process, we evaluate the cost function by inputting the test data set into the NN several times and finding the lowest one among them. This lowest value of the cost function denoted by \(C^{\rm NN}\) and the corresponding parameters \(\Theta_{N}\) are recorded. Details on this NN optimization process and the structure of the NN are presented in Sec. II.3. For updating \(\Theta_{Q}\), we need to evaluate the gradient of the cost function with respect to \(\Theta_{Q}\), i.e., \(\nabla_{\Theta_{Q}}C^{\rm NN}\). To do this, it is necessary to obtain \(C^{\rm NN}\) for small perturbation \(\delta\) from \(\Theta_{Q}\) in various directions through the NN optimization method mentioned above. Employing this gradient descent method, \(\Theta_{Q}\) is updated slightly toward the direction that minimizes \(C^{\rm NN}\). Details on the structure of the quantum circuit and the \(\Theta_{Q}\) updating process are explained in Sec. II.4.
Upon conducting an adequate number of repetitions of this unit process to update \(\Theta_{N}\) and \(\Theta_{Q}\), we anticipate the optimization of parameters and cost functions. Utilizing the obtained optimal parameters \(\Theta_{N}^{*}\) and \(\Theta_{Q}^{*}\), QNEE produces three essential outputs: the von Neumann entropy (\(S(\hat{\rho})\)), the eigenvalues of the quantum state (\(\lambda_{i}\)), and their corresponding eigenvectors (\(|\lambda_{i}\rangle\)) as follows:
\[S(\hat{\rho}) =C(\Theta_{N}^{*},\Theta_{Q}^{*}), \tag{7}\] \[\lambda_{i} =\exp{[h(s_{i};\Theta_{N}^{*})]}, \tag{8}\] \[|\lambda_{i}\rangle =\hat{V}^{\dagger}(\Theta_{Q}^{*})|s_{i}\rangle=\hat{V}_{D}^{\dagger}|s_{i}\rangle. \tag{9}\]
We note that the same process can be employed to estimate the Renyi entropy with the cost function in Eq. (6).
Figure 1: Architecture of QNEE. (a) QNEE requires three inputs: density matrix \(\hat{\rho}\), initial parameter \(\Theta_{Q}\) for a quantum circuit, and \(\Theta_{N}\) for a NN. (b) From the given \(\Theta_{Q}\) and density matrix \(\hat{\rho}\), two string data sets are generated: One for training and the other one for testing. (c) We update \(\Theta_{N}\) by minimizing the cost function evaluated with the train data, \(\min_{\Theta_{N}}C^{\rm train}\). During the learning process, the cost function evaluated with the test data \(C^{\rm test}\) is recorded. Then, the NN outputs the lowest value of \(C^{\rm test}\) from the recorded values, denoted as \(C^{\rm NN}\). The quantum circuit’s parameter \(\Theta_{Q}\) is updated by minimizing the NN’s output \(\min_{\Theta_{Q}}C^{\rm NN}\). On successful training, QNEE finds the optimal parameters \(\Theta_{N}^{*}\) and \(\Theta_{Q}^{*}\), and QNEE generates three outputs. (d) The cost function itself becomes von Neumann entropy with optimal parameters. (e) The trained NN can generate the eigenvalues of \(\hat{\rho}\). (f) With a conjugated quantum circuit \(\hat{V}^{\dagger}(\Theta_{Q}^{*})\), eigenvectors can be generated.
### Structure of the classical NN and optimization of \(\Theta_{N}\)
In this section, we elaborate on the structure and training process of the classical NN. We design the NN as a combination of an embedding layer and three fully connected hidden layers parameterized by \(\Theta_{N}\). The input string is converted into a vector via the embedding layer and is subsequently processed through the hidden layers with 256 units each along with ReLU activation functions [35]. Therefore, the NN can transform the string data generated via the quantum circuit \(\hat{V}(\Theta_{Q})\) into the scalar value \(h(s;\Theta_{N})\). To optimize \(\Theta_{N}\) of the classical NN, we first generate training and test sets consisting of \(N_{s}\) strings each from the quantum circuit parameterized by \(\Theta_{Q}\). Using the training set, we evaluate the cost function in Eq. (4) as
\[C^{\text{train}}(\Theta_{N},\Theta_{Q})=-\frac{1}{N_{s}}\sum_{s\in\text{train set}}h(s;\Theta_{N})+\sum_{i=1}^{2^{n}}e^{h(s_{i};\Theta_{N})}-1. \tag{10}\]
Over a predetermined number of iterations \(N_{\text{iter}}\), \(\Theta_{N}\) is updated using the ADAM optimizer [36] to minimize the cost function.
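A minimal PyTorch sketch of this architecture and of one update step on Eq. (10) is given below. The hidden-layer widths and optimizer settings follow the text; the embedding dimension and the random stand-in for circuit shots are assumptions for illustration.

```python
# Sketch of the classical NN h(s; Theta_N) and one training step on Eq. (10);
# the embedding dimension (128) and the random "shots" are assumptions.
import torch
import torch.nn as nn

n = 3                                           # number of qubits
model = nn.Sequential(
    nn.Embedding(2 ** n, 128),                  # string index -> vector
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),                          # scalar output h(s; Theta_N)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=5e-5)

def cost(train_strings):
    """C^train of Eq. (10): sample mean of -h plus the normalization terms."""
    h_train = model(train_strings).squeeze(-1)
    all_strings = torch.arange(2 ** n)          # second sum runs over all 2^n strings
    h_all = model(all_strings).squeeze(-1)
    return -h_train.mean() + torch.exp(h_all).sum() - 1.0

train_strings = torch.randint(0, 2 ** n, (30_000,))   # stand-in for circuit shots
loss = cost(train_strings)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```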
During these training iterations, we evaluate the cost function several times using the test set with the updated \(\Theta_{N}\) as
\[C^{\text{test}}(\Theta_{N},\Theta_{Q})=-\frac{1}{N_{s}}\sum_{s\in\text{test set}}h(s;\Theta_{N})+\sum_{i=1}^{2^{n}}e^{h(s_{i};\Theta_{N})}-1. \tag{11}\]
Among these several evaluated \(C^{\text{test}}\), the lowest one is recorded as \(C^{\text{NN}}\) and the corresponding \(\Theta_{N}\) is selected for the NN parameters at a given \(\Theta_{Q}\). Note that the learning curves shown in Figs. 4(d-f) are the plot of the recorded \(C^{\text{NN}}\).
When the NN is ideally trained, the optimal parameter for the NN is given as \(\tilde{\Theta}_{N}(\Theta_{Q})=arg\,\min_{\Theta_{N}}C(\Theta_{N},\Theta_{Q})\). Then, the output of the NN and the cost function are written as
\[h(s_{i};\tilde{\Theta}_{N})= \ln P_{\hat{V}}(s_{i};\Theta_{Q}), \tag{12}\] \[C(\tilde{\Theta}_{N},\Theta_{Q})= -\sum_{i=1}^{2^{n}}P_{\hat{V}}(s_{i};\Theta_{Q})\ln\{P_{\hat{V}}( s_{i};\Theta_{Q})\}, \tag{13}\]
respectively. Equation (13) is the Shannon entropy \(S[P_{\hat{V}}(s_{i};\Theta_{Q})]\) in terms of the diagonal elements of \(\hat{\rho}_{\hat{V}}\). Therefore, we can write down the following variational principle with the ideal-string distribution \(P_{\hat{V}}(s_{i};\Theta_{Q})\) as
\[S[P_{\hat{V}}(s_{i};\Theta_{Q})]\leq-\sum_{i=1}^{2^{n}}h(s_{i};\Theta_{N})P_{ \hat{V}}(s_{i};\Theta_{Q})+\sum_{i=1}^{2^{n}}e^{h(s_{i};\Theta_{N})}-1. \tag{14}\]
This bound (14) can also be derived from the Donsker-Varadhan inequality \(D_{\text{KL}}[P\|Q]\geq\langle h(s)\rangle_{P}-\ln\langle e^{h(s)}\rangle_{Q}\), which is the variational form of the Kullback-Leibler divergence \(D_{\text{KL}}[P\|Q]=\sum_{i}P(s_{i})\ln\left[P(s_{i})/Q(s_{i})\right]\) for two distributions \(P(s_{i})\) and \(Q(s_{i})\)[25]. One can easily derive Eq. (14) from the Donsker-Varadhan inequality by choosing the probability \(Q\) as a uniform distribution and applying the inequality \(\ln x\leq x-1\) to the logarithm function. Note that Eq. (12) reveals the role of the last two terms in \(C^{\text{train}}\); these terms \(\sum_{i=1}^{2^{n}}e^{h(s_{i};\Theta_{N})}-1\) ensure that \(e^{h}\) represents a normalized probability.
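The classical bound (14) is easy to verify numerically for a fixed distribution. The sketch below, assuming only a toy probability vector, checks that the right-hand side upper bounds the Shannon entropy for an arbitrary \(h\) and saturates at \(h_{i}=\ln P_{i}\):

```python
# Numerical check of Eq. (14): for any h, the right-hand side upper bounds
# the Shannon entropy and saturates at h_i = ln(P_i); P is a toy distribution.
import numpy as np

P = np.array([0.5, 0.25, 0.15, 0.1])
S = -(P * np.log(P)).sum()                # Shannon entropy of P

def rhs(h):
    return -(h * P).sum() + np.exp(h).sum() - 1

h_rand = np.random.default_rng(3).normal(size=4)
print(S <= rhs(h_rand) + 1e-12)           # bound holds for arbitrary h: True
print(np.isclose(S, rhs(np.log(P))))      # saturated at the optimum: True
```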
Finally, it is worth noting that although Eq. (10) is currently formulated as a full-batch summation, it can be adapted to utilize mini-batch evaluation for calculating the cost function. This implies that the stochastic gradient descent method [29] can be directly applied to the QNEE's cost function without any modifications, presenting an advantage in using the linearization of the logarithmic function, \(\ln x\leq x-1\). The stochastic gradient descent method is efficient in handling large-scale data and facilitating optimizer escape from local minima. Given the exponential increase in data size with the number of qubits, the practical necessity of stochastic gradient descent for managing such vast amounts of data becomes evident. Moreover, we can also extend the application of stochastic gradient descent to the cost function for Renyi entropy estimation, as given in Eq. (6).
### Structure of the quantum circuit and optimization of \(\Theta_{Q}\)
With the cost function calculated by the NN, we update the quantum circuit. Here we explain our choice of ansatz and the optimization process of the quantum circuit. Optimizing \(\Theta_{Q}\) can be viewed as a VQA, the accuracy of which
heavily depends on the structure of the target quantum state and quantum circuit ansatz. In this work, we choose the layered hardware-efficient ansatz [37], which is widely adopted when intricate information of \(\hat{\rho}\) is inaccessible. The ansatz consists of two-qubit gates of one kind and parameterized one-qubit gates, as described in Fig. 2. \(\hat{V}(\Theta_{Q})\) is constructed from \(N_{l}\) layered two-qubit gates \(\bar{V}(\Theta_{Q}^{(i)})\) that are drawn as yellow paths. \(\bar{V}(\Theta_{Q}^{(i)})\) contains four \(\hat{R}_{Y}\) gates and one controlled-Z gate, which is the same choice as the quantum circuit in Ref. [23]. Each quantum circuit layer can be an identity gate with certain parameters to guarantee that increasing \(N_{l}\) improves the accuracy of QNEE [23].
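The following Qiskit sketch builds this layered ansatz; each block \(\bar{V}\) uses four \(R_{Y}\) rotations around one controlled-Z gate, giving \(2nN_{l}\) parameters in total for even \(n\). The brickwork pairing with periodic wrap-around is an assumption about the layout drawn in Fig. 2.

```python
# Sketch of the layered hardware-efficient ansatz: each two-qubit block applies
# RY on both qubits, a CZ, then RY on both qubits again (4 parameters/block).
# The alternating (brickwork) pairing with wrap-around is our assumption.
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def hea_ansatz(n_qubits: int, n_layers: int) -> QuantumCircuit:
    theta = ParameterVector("theta", 2 * n_qubits * n_layers)  # 2n per layer (even n)
    qc = QuantumCircuit(n_qubits)
    k = 0
    for layer in range(n_layers):
        offset = layer % 2
        for q in range(offset, n_qubits + offset - 1, 2):
            a, b = q % n_qubits, (q + 1) % n_qubits
            qc.ry(theta[k], a); qc.ry(theta[k + 1], b)
            qc.cz(a, b)
            qc.ry(theta[k + 2], a); qc.ry(theta[k + 3], b)
            k += 4
    return qc

circuit = hea_ansatz(n_qubits=4, n_layers=10)  # sizes used for the 4-qubit case
```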
In this study, a gradient descent method is chosen to optimize the cost function via the quantum circuit. To this end, it is necessary to estimate the cost function at differently perturbed \(\Theta_{Q}\)s for obtaining its gradient. For example, if the ansatz is parameterized by one single scalar, we estimate \(C^{\rm NN}(\Theta_{Q})\) and \(C^{\rm NN}(\Theta_{Q}+\delta)\) via the NN, and calculate the gradient \(\nabla_{\Theta_{Q}}C(\Theta_{Q})=[C^{\rm NN}(\Theta_{Q}+\delta)-C^{\rm NN}( \Theta_{Q})]/\delta\), where \(\delta\) is a small value. Then, the parameter is updated as \(\Theta_{Q}-\eta_{Q}\nabla_{\Theta_{Q}}C(\Theta_{Q})\) with a learning rate \(\eta_{Q}\). Therefore, two training processes of the NN are required to calculate the gradient for a scalar \(\Theta_{Q}\) case. When the dimension of \(\Theta_{Q}\) increases, the number of necessary directions for perturbation to obtain the gradient also increases. Using the calculated gradient and a learning rate that we choose, \(\Theta_{Q}\) is updated. After an adequate number of iterations of this \(\Theta_{Q}\) updating process, we take the lowest value of the cost function as the estimated von Neumann entropy among all \(C^{\rm NN}\) values evaluated during the iteration procedure.
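A sketch of this finite-difference update is shown below; `cost_nn` is a toy stand-in for the NN-estimated cost \(C^{\rm NN}\), which in QNEE would involve sampling strings at the perturbed parameters and retraining the NN.

```python
# Sketch of the finite-difference gradient step for Theta_Q described above;
# cost_nn is a placeholder for the NN-estimated cost C^NN.
import numpy as np

def cost_nn(theta_q: np.ndarray) -> float:
    # Placeholder: in QNEE this value comes from training/evaluating the NN
    # on strings sampled from the circuit at theta_q.
    return float(np.sum(np.sin(theta_q) ** 2))

def gradient_step(theta_q, delta=1e-2, eta_q=0.01):
    base = cost_nn(theta_q)
    grad = np.zeros_like(theta_q)
    for j in range(theta_q.size):               # one perturbed direction per parameter
        shifted = theta_q.copy()
        shifted[j] += delta
        grad[j] = (cost_nn(shifted) - base) / delta
    return theta_q - eta_q * grad               # update toward lower C^NN

theta_q = np.random.default_rng(2).uniform(0, 2 * np.pi, size=8)
theta_q = gradient_step(theta_q)
```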
When the quantum circuit is ideally trained, the optimal parameter is found as \(\Theta_{Q}^{*}=arg\min_{\Theta_{Q}}C(\bar{\Theta}_{N},\Theta_{Q})\) that diagonalizes the input density matrix with the eigenvalue \(\lambda_{i}=P_{\hat{V}_{D}}(s_{i};\Theta_{Q}^{*})\). Therefore, the cost function is saturated to the von Neumann entropy, and the output of the NN with \(\Theta_{N}^{*}=\arg\min_{\Theta_{N}}C(\Theta_{N},\Theta_{Q}^{*})\) is the logarithm of the eigenvalue as shown in Eqs. (7) and (8), respectively. Additionally, the eigenvectors of \(\hat{\rho}\) can be constructed by the conjugated quantum circuit \(\hat{V}^{\dagger}(\Theta_{Q}^{*})\) as Eq. (9).
Usually, states of many qubits or highly entangled states require a deep ansatz. However, the increased dimension of \(\Theta_{Q}\) requires more data to compute the gradient of the cost function. Moreover, a large number of layers can induce a flat landscape of the cost function, which leads to difficulty in training. This is known as the barren plateau [38]. Although various methods have been suggested to mitigate barren plateaus, avoiding this effect entirely appears difficult. Even with a shallow quantum circuit, QNEE provides useful applications, which will be discussed in Section III in more detail.
### Other methods for entropy estimation
Various VQAs are applicable to estimate quantum entropy [22; 23; 39]. These algorithms can be classified into two categories. One category is the VQAs based on purity minimization. VQSD is one of these algorithms. VQSD employs the SWAP (swapping two qubits) test to compute the purity \(\mathrm{tr}\{\hat{\rho}^{2}\}\). As the result of minimization, the input density matrix is diagonalized, from which the von Neumann entropy can be estimated. The SWAP test requires \(\hat{\rho}\otimes\hat{\rho}\) as its input, which means that the width of the quantum circuit must be twice the number of qubits of \(\hat{\rho}\). In addition, another quantum circuit is necessary to compute the cost function.
The other category exploits majorization [23]; we say that \(\vec{x}\) majorizes \(\vec{y}\) when \(\sum_{i=1}^{k}x_{i}^{\downarrow}\geq\sum_{i=1}^{k}y_{i}^{\downarrow}\) for arbitrary \(k\) and \(\sum_{i=1}^{m}x_{i}=\sum_{i=1}^{m}y_{i}\)[40]. Here \(m\) is the dimension of the vector, and \(\vec{x}^{\downarrow}\) is a decreasingly ordered vector, i.e., \(x_{i}^{\downarrow}\geq x_{i+1}^{\downarrow}\). Because the eigenvalues of \(\hat{\rho}\) majorize the diagonal elements of \(\hat{\rho}\) and the inner product with an ordered energy-level vector is a Schur convex function, the expectation value of an artificially ordered Hamiltonian can be a cost function
Figure 2: Schematic diagrams of (a) the layered hardware-efficient ansatz \(V(\Theta_{Q})\) and (b) its block gate \(\bar{V}\). \(V(\Theta_{Q})\) consists of \(N_{l}\) layers of \(\bar{V}\) block gates. In this research, we choose \(\bar{V}\) as the combination of four \(R_{Y}\) gates and one controlled-Z gate. For \(n\) qubits, the dimension of \(\Theta_{Q}\), or the total number of parameters, is given as \(2nN_{l}\).
for diagonalizing the subspace of \(\hat{\rho}\) (see Ref. [23] for detailed information). After optimization, the \(\ell\) largest eigenvalues are estimated using the quantum circuit with the optimal parameter \(\Theta_{Q}^{*}\). From the string data, we can estimate the eigenvalue \(\lambda_{i}\sim N_{i}/N_{s}\), where \(N_{s}\) denotes the number of total shots, and \(N_{i}\) denotes the number of \(s_{i}\) data. One benefit of using this cost function is that the required width of the quantum circuit only needs to match the number of qubits of \(\hat{\rho}\), which is the same requirement as for QNEE. Note that VQSE focuses on finding a few largest eigenvalues, which allows VQSE to be well saturated by expanding the allowed solution space of \(V(\Theta_{Q})\). We note that QNEE's intermediate cost function (see Eq. (13)) also exploits majorization (see Appendix B). Due to the same structure of the quantum circuit ansatz \(V(\Theta_{Q})\) used in QNEE and VQSE, we can fairly compare QNEE with VQSE under the same conditions. We provide this comparison in Sec. III.
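The counting estimate \(\lambda_{i}\sim N_{i}/N_{s}\) amounts to a simple frequency count over the measured strings, as in this small sketch with illustrative outcomes:

```python
# Sketch of the frequency estimate lambda_i ~ N_i / N_s from measured strings;
# the shot list here is illustrative only.
from collections import Counter

shots = ["000", "000", "001", "000", "010"]
N_s = len(shots)
eigvals = {s: n / N_s for s, n in Counter(shots).most_common()}
print(eigvals)  # {'000': 0.6, '001': 0.2, '010': 0.2}
```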
## III Application of QNEE to a physical model
### Applications of QNEE
QNEE is a combination of the classical NN and the quantum circuit to estimate the von Neumann entropy. Hence, QNEE shares some advantages of using classical NNs for entropy estimation. For instance, NNs such as NEEP have been reported to show high precision in estimating large-valued \(f\)-divergences, which correspond to low entropy (see Appendix B), with small-size data or data with rare events [24; 12]. In this sense, one can expect that QNEE provides high precision for small values of the von Neumann entropy. While estimating the quantum entropy additionally requires the diagonalization of the density matrix, we note that this does not necessitate a deep circuit ansatz for low-entropy quantum states. It is also worth noting that QNEE provides the upper bound Eq. (4) of the von Neumann entropy even when \(\Theta_{Q}\) is not fully optimized throughout the training process. This guarantees that QNEE efficiently characterizes quantum states with low entropy.
Based on these properties originating from the classical NN, QNEE can be utilized as a phase classifier for quantum states even with a shallow-depth ansatz and a small number of iterations. Particularly, it has been actively studied in many-body physics that phase transitions in various many-body quantum systems accompany the drastic change of the entanglement entropy, given by the von Neumann entropy of the subsystem [7; 8; 9; 10; 41; 42; 43]. As an illustrative example, we test our protocol using the XXZ Heisenberg model in a 1D chain (XXZ chain). This model shows the Pokrovsky-Talapov transition that accompanies a sudden change of entanglement entropy [41]. The entanglement entropy abruptly decreases after passing the critical point.
In the following subsections, we present numerical results of classifying the phase of the XXZ chain using QNEE, and compare its performances with the existing methods of VQSE [23]. We employ PyTorch [44], Qiskit [45], and QuTiP [44] to implement QNEE.
### Classifying the phase of XXZ chain
In order to verify the efficiency of QNEE as a phase classifier, we consider the ground state of the XXZ chain [41; 46; 47] Hamiltonian,
\[H_{XXZ}=\sum_{l=0}^{L-1}(\hat{\sigma}_{l}^{x}\hat{\sigma}_{l+1}^{x}+\hat{ \sigma}_{l}^{y}\hat{\sigma}_{l+1}^{y}+\Delta\hat{\sigma}_{l}^{z}\hat{\sigma}_ {l+1}^{z}-\lambda\hat{\sigma}_{l}^{z}), \tag{15}\]
where \(\hat{\sigma}_{l}^{k}\) denotes the \(l\)-th spin operator in the \(k\in\{x,y,z\}\) direction, and \(\Delta\), \(\lambda\) are real numbers. \(L\) is the total number of qubits in the chain. The last term \(\lambda\hat{\sigma}_{l}^{z}\) can be interpreted as the effect of an external magnetic field or chemical potential [46]. Here, we impose the periodic boundary condition.
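As a reference for the numerics below, this Hamiltonian and the exact entanglement entropy can be assembled in QuTiP in a few lines. A sketch with the parameters used in the text (\(L=8\), \(\Delta=0.05\)) follows, with \(\lambda=0.5\) as an illustrative field value:

```python
# Sketch: the XXZ chain of Eq. (15) in QuTiP and the exact entanglement
# entropy of a 3-qubit block, used as ground truth for the estimators.
from qutip import sigmax, sigmay, sigmaz, qeye, tensor, entropy_vn

L, Delta, lam = 8, 0.05, 0.5

def op(single, l):
    """Embed a single-qubit operator at site l (periodic indexing)."""
    ops = [qeye(2)] * L
    ops[l % L] = single
    return tensor(ops)

H = sum(op(sigmax(), l) * op(sigmax(), l + 1)
        + op(sigmay(), l) * op(sigmay(), l + 1)
        + Delta * op(sigmaz(), l) * op(sigmaz(), l + 1)
        - lam * op(sigmaz(), l)
        for l in range(L))

_, psi_g = H.groundstate()
rho_A = psi_g.ptrace([0, 1, 2])   # reduced density matrix of three qubits
print(entropy_vn(rho_A))          # exact entanglement entropy (natural log)
```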
The ground state \(|\Psi_{G}\rangle\) of the XXZ chain shows a phase transition at \(\lambda_{c}=2(1-\Delta)\), which is named the Pokrovsky-Talapov transition [41]. For \(0\leq\lambda<\lambda_{c}\), the system is in the critical or massless phase. In the critical regime, the entanglement entropy shows the log-scaling behavior \(S\propto\ln n+c\), where \(n\) is the number of qubits of the reduced matrix, and \(c\) is a constant [9]. In the case of the finite-size XXZ chain, the scaling relation is modified as \(S\propto\ln\{\frac{L}{\pi}\sin\frac{\pi n}{L}\}+c^{\prime}\). For \(\lambda>\lambda_{c}\), the system is in the non-critical or ferromagnetic phase, where the entanglement entropy vanishes. This implies that a sudden decrease of the entanglement entropy can be observed as \(\lambda\) increases.
We estimate the entanglement entropy of the XXZ chain with 8 qubits by estimating the von Neumann entropy of the ground state's reduced density matrix \(\rho=\text{tr}_{B}\{|\Psi_{G}\rangle\langle\Psi_{G}|\}\), where \(\text{tr}_{B}\) denotes partially tracing out the subsystem \(B\). We set \(\Delta=0.05\) and vary the magnetic field parameter \(\lambda\) from 0 to 3. Near the critical point \(\lambda\sim 2\), the system undergoes the phase transition. The estimation results of QNEE for the three- and four-qubit reduced density
matrices are presented in Fig. 3. We emulate the variational quantum circuit using a classical computer by collecting the outcome strings from the exact probability distribution of the quantum circuit's outcome. The number of layers \(N_{l}\) is taken as 8 for the three-qubit case and 10 for the four-qubit case. The learning rate of the gradient-descent method for the quantum circuit is set as \(\eta_{Q}=0.01\) for both cases. For the NN training process, the learning rate and weight decay of the ADAM optimizer are set as 0.00001 and 0.00005, respectively. We train the NN over \(N_{\text{iter}}=10^{4}\) iterations at an initial \(\Theta_{Q}\) for sufficient optimization. In the subsequent NN training processes, the NN is trained over \(N_{\text{iter}}=100\) iterations at each \(\Theta_{Q}\). The entanglement entropy is estimated from five different initial \(\Theta_{Q}\), and the minimum value among them is taken. In the insets of Fig. 3, we plot the mean and standard deviation of the five different trials. For each \(\Theta_{Q}\), 30,000 shots are used to find the optimal parameter \(\bar{\Theta}_{N}\).
The numerical results demonstrate that QNEE estimates the entanglement entropy of the XXZ chain ground state well for a wide range of the parameter regime. In particular, QNEE provides an accurate estimation near \(\lambda_{c}\) and in the non-critical regime. This is expected as these are low-entropy states that a classical NN can estimate well. In particular, the standard deviation of QNEE is smaller than the size of the markers when \(\lambda\geq\lambda_{c}\), which means that a single trial is enough to classify those phases with QNEE. The sensitivity of QNEE to the learning rate is discussed in Sec. IV.
Another output of QNEE is the eigenvalues of the reduced density matrix, the so-called entanglement spectrum. In Fig. 4 (a - c), we plot the estimated eigenvalues for three different values of magnetic field \(\lambda=0.5,2.0,3.0\) for the three-qubit reduced density matrix using the same data as in Fig. 3. Even though QNEE mainly focuses on entropy estimation, its eigenvalue estimation is comparable to VQSE, which is designed for this particular task. A more detailed comparison between QNEE and VQSE will be discussed in the following subsection.
In Fig. 4 (d-f), we plot the learning curves of QNEE for the case of three qubits. The blue line represents the learning curve with the cost function calculated by the classical NN using a finite number of samples. The red curve represents the learning curve with the cost function obtained from the exact wave function of the ground state. As shown in Fig. 4, 100 \(\sim\) 200 iterations are enough to classify the phases. A similar number of iterations are also enough for the four-qubit case.
### Comparison between QNEE and VQSE
We compare the performance of QNEE with VQSE in more detail. We first clarify that the cost functions of QNEE and VQSE are designed for different purposes; the former is for entropy estimation, while the latter is for estimating the \(\ell+1\) largest eigenvalues (see Appendix C and Ref. [23] for more details). Therefore, it is expected that QNEE will provide better performance in the entropy estimation task, as observed in Fig. 3. On the other hand, the eigenvalue estimation by VQSE is, in general, more accurate than that of QNEE.
In our XXZ chain model, both QNEE and VQSE show good agreement in entropy/eigenvalue estimation in the non-critical regime (\(\lambda>\lambda_{c}\)), where the entropy is vanishing. Near the transition point (\(\lambda\sim\lambda_{c}\)), QNEE produces a
Figure 3: Plots of estimation results on the reduced density matrices of the XXZ chain. Total number of qubits is 8. The reduced-density matrix is prepared by partially tracing out (a) five qubits and (b) four qubits. The entanglement entropy is estimated with 5 different randomly initialized quantum circuits. ‘\(\times\)’ symbol represents the lowest value of cost functions among 5 trials. Insets show the mean and standard deviations of 5 trials. Orange-colored symbols represent the results of QNEE. Blue-colored symbols represent the results of VQSE. The exact value of entanglement entropy is denoted by a black line.
more reliable entropy estimation than VQSE, while the accuracies of the eigenvalue estimations are comparable. In the critical regime (\(\lambda<\lambda_{c}\)), QNEE estimates the entropy more accurately than VQSE. In particular, for \(\lambda=0.5\), VQSE estimates the eigenvalues (see Fig. 4(a)) better than QNEE, but QNEE yields a higher accuracy of the estimated entropy than VQSE (see Fig. 3(a)). This could lead to an intriguing implication that, in certain cases, the von Neumann entropy can be accurately estimated using QNEE, even when the estimated eigenvalues do not exactly match the correct values. On the other hand, the main advantage of VQSE comes from its low computational cost. As VQSE does not involve a training process, it runs much faster than QNEE to estimate the quantum state's entropy and eigenvalues while yielding a comparable performance.
We also note that QNEE is less sensitive to the initial choices of \(\Theta_{Q}\) than VQSE. This can be captured by comparing the variance of estimated entropy by taking different initial points of \(\Theta_{Q}\) (see the insets of Fig. 3). The estimator's variance for QNEE ranges up to 0.09. In contrast, the variance of VQSE ranges up to 0.16, which is significantly higher than that of QNEE. From the simulation data, we further analyze the behavior of errors depending on the exact entropy value. In Fig. 5, we plot the absolute error of estimation results of QNEE \(|S(\hat{\rho})-C(\Theta_{N},\Theta_{Q})|\) as a function of entanglement entropy for 3- and 4-qubit reduced density matrices of the XXZ chain. QNEE results show a clear tendency where the error and its fluctuation decrease as the entanglement entropy decreases (see Fig. 5). While VQSE shares a similar tendency, this is not as clear as the QNEE case.
Figure 4: (a-c) Plots of estimated eigenvalues and (d-f) learning curves for the reduced density matrix with three qubits. These results are from three \(\lambda\) values: (a, d) \(\lambda=0.5\), (b, e) \(\lambda=2.0\), and (c, f) \(\lambda=3.0\). In the top plots, orange (blue) color denotes the estimation results of QNEE (VQSE). \(\times\) represents the exact (analytic) result. In the bottom plots, the blue line represents the cost function \(C^{\rm NN}\) trained with finite samples. The red line represents the ideally trained \(C(\bar{\Theta}_{N},\Theta_{Q})\) without the finite-sample effect. Here we utilize the same data as used for Fig. 3(a).
Figure 5: Scatter plot of the absolute errors of QNEE (orange symbol) and VQSE (blue symbol) as a function of entanglement entropy. For this plot, we take data from reduced density matrices of three and four qubits of the XXZ chain.
To summarize, QNEE performs better in entropy estimation, while VQSE is more specialized in finding the first few largest eigenvalues. We expect that such a comparison will also be valid for general cases with a larger number of qubits, where a considerable number of non-zero eigenvalues are involved in the quantum state's entropy. However, we must note that the observed tendency between the two methods is not definite and may depend on the quantum state of interest and the choice of variational parameters in quantum circuit ansatz and classical optimizers.
## IV Discussion
We have proposed a novel quantum entropy estimator, QNEE, which combines variational quantum circuits with classical NNs. Our method allows us to estimate the von Neumann entropy and the Renyi entropy from finite samples of circuit outcomes without accessing full information of the density matrix. By leveraging the Donsker-Varadhan inequality to optimize the cost function, the classical NN in QNEE inherits valuable characteristics from recently developed classical entropy estimators, including NEEP [24]. We emphasize that classical NNs operate independently of the structure of variational quantum circuits, offering a universal application to any form of quantum circuit ansatz.
As a physical model, we have applied QNEE to estimate the entanglement entropy of the ground state. We have demonstrated that QNEE effectively classifies the massless and ferromagnetic phases of the 1D XXZ Heisenberg model. In particular, numerical evidence shows that QNEE can sensitively estimate the entanglement entropy near the critical point even before the estimated eigenvalues converge to the exact ones. We also highlight that such phase classification can still work even without accurately estimating entanglement entropy when one of the phases has low entanglement, as the estimator is designed to upper bound the exact entropy value.
On the other hand, there are some cautions when applying QNEE. Although the cost function provides an upper bound on the exact entropy value in an ideal scenario, the estimated value obtained from insufficient samples can be underestimated compared to the exact value. Such a case has already been reported when using classical stochastic entropy production estimators, including NEEP [24]. However, it has also been shown that the NN-based estimator still provides more efficient estimation than other methods [12]. Also, the performance of QNEE is sensitive to the learning rate of the gradient-descent method for the quantum circuit. A lower learning rate is recommended as the number of qubits increases or the number of shots decreases. This can be a potential issue when applying QNEE to a high-dimensional quantum state, as a high learning rate can induce fluctuations in the learning curve.
Our methodology establishes a pathway for investigating quantum state properties using fewer iterations of quantum circuits, which are considerably more resource-intensive than classical processing. We anticipate that QNEE will serve as a vital tool for addressing a broad spectrum of estimation and classification challenges within quantum systems.
###### Acknowledgements.
The authors acknowledge the Korea Institute for Advanced Study for providing computing resources (KIAS Center for Advanced Computation Linux Cluster System). This research was supported by the KIAS Individual Grant Nos. PG081801 (S.L.), CG085301 (H.K.), and PG064901 (J.S.L.) at the Korea Institute for Advanced Study. S. L. thanks Dong Kyum Kim for helpful discussions about machine learning.
## Appendix A Derivation of the upper bound of the von Neumann entropy
Here, we derive the upper bound of the von Neumann entropy, i.e., Eq. (2). In terms of the eigenstate \(|\lambda_{m}\rangle\) of a density matrix \(\hat{\rho}\), that is \(\hat{\rho}=\sum_{m}\lambda_{m}|\lambda_{m}\rangle\langle\lambda_{m}|\), one can write an inequality for an observable \(\hat{O}\) as follows:
\[\begin{split}\ln\mathrm{tr}\{e^{\hat{O}}\}&=\ln\sum_{m}\langle\lambda_{m}|e^{\hat{O}}|\lambda_{m}\rangle\\ &\geq\ln\sum_{m}e^{\langle\lambda_{m}|\hat{O}|\lambda_{m}\rangle}\\ &=\ln\sum_{m}\lambda_{m}e^{\langle\lambda_{m}|\hat{O}|\lambda_{m}\rangle-\ln\lambda_{m}}\\ &\geq\ln e^{\sum_{m}\lambda_{m}\left(\langle\lambda_{m}|\hat{O}|\lambda_{m}\rangle-\ln\lambda_{m}\right)}\\ &=\langle\hat{O}\rangle_{\hat{\rho}}+S(\hat{\rho})\.\end{split} \tag{10}\]
Convexity of the exponential function is used for deriving the inequalities in the second and the fourth lines of Eq. (10). We note that the second-line inequality is saturated when the observable operator is diagonalized by the basis set \(\{|\lambda_{m}\rangle\}\). Equation (10) can be rearranged as
\[S(\hat{\rho})\leq-\langle\hat{O}\rangle_{\hat{\rho}}+\ln\mathrm{tr}\{e^{\hat{O }}\}, \tag{18}\]
which represents the Gibbs variational principle. In general, the Gibbs variational principle focuses on finding an optimal density matrix, enabling Eq. (18) to be saturated, which is given by the Gibbs state \(\hat{\rho}^{*}=e^{\hat{O}}/\mathrm{tr}\{e^{\hat{O}}\}\). In contrast, QNEE focuses on finding an optimal observable achieving the equality of Eq. (18), which is given by \(\hat{O}^{*}=\ln\hat{\rho}\).
For implementing the variational method to obtain the optimal entropy value via a NN, one might be able to use the right-hand side (rhs) of Eq. (18) as a cost function. However, when employing the stochastic (mini-batch) gradient descent method for optimizing the parameters of the NN, a cost function including a logarithmic function, such as the rhs of Eq. (18), can malfunction and yield an incorrect result without additional data manipulation [30]. To avoid this complication, we linearize the logarithmic function by applying the additional inequality \(\mathrm{tr}\{e^{\hat{O}}\}-1\geq\ln\mathrm{tr}\{e^{\hat{O}}\}\), which is also saturated when \(\hat{O}^{*}=\ln\hat{\rho}\). Application of this additional inequality to Eq. (18) leads to Eq. (2) as follows:
\[S(\hat{\rho})\leq-\langle\hat{O}\rangle_{\hat{\rho}}+\mathrm{tr}\{e^{\hat{O}} \}-1. \tag{19}\]
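To make the bound concrete, Eq. (19) can be checked numerically for a random density matrix. The following sketch (our own illustration using NumPy/SciPy; the dimension and random seed are arbitrary) verifies that a generic Hermitian observable gives a valid upper bound, while \(\hat{O}^{*}=\ln\hat{\rho}\) saturates it:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
d = 8  # Hilbert-space dimension

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real          # random full-rank density matrix

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def rhs(O, rho):
    # Right-hand side of Eq. (19): -<O>_rho + tr{e^O} - 1
    return (-np.trace(rho @ O) + np.trace(expm(O))).real - 1.0

S = vn_entropy(rho)

B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
O_rand = 0.5 * (B + B.conj().T)    # a generic Hermitian observable
assert rhs(O_rand, rho) >= S       # strict upper bound in general

O_star = logm(rho)                 # the optimal observable O* = ln(rho)
assert abs(rhs(O_star, rho) - S) < 1e-8
print(S, rhs(O_rand, rho), rhs(O_star, rho))
```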
## Appendix B Derivation of the upper bound of the Renyi entropy
Here, we derive the upper bound of the Renyi entropy in Eq. (6). For a convex and twice-differentiable real-valued function \(f(x)\) with arbitrary real numbers \(p\) and \(q\) (\(\neq 0\)), the following inequality
\[-qf(p/q)\leq-pf^{\prime}(x)+q\left\{xf^{\prime}(x)-f(x)\right\} \tag{20}\]
holds [33]. We can demonstrate this by differentiating the rhs of Eq. (20) with respect to \(x\), which yields \(f^{\prime\prime}(x)(-p+qx)\). As \(f^{\prime\prime}(x)>0\) by convexity, the rhs has the unique minimum \(-qf(p/q)\) at \(x=p/q\). As Eq. (20) is valid for arbitrary \(p\) and \(q\), we can replace \(p\) and \(q\) with some distributions \(P(s_{i})\) and \(Q(s_{i})\), where \(\sum_{i}P(s_{i})=\sum_{i}Q(s_{i})=1\). By considering that \(x=x(s_{i};\theta)\) is a function of \(s_{i}\) and parameterized by \(\theta\), and summing over all \(i\), we can write Eq. (20) as the following variational form:
\[-D_{f}(P\|Q)\leq\sum_{i}\left[-P(s_{i})f^{\prime}\left(x(s_{i};\theta)\right) +Q(s_{i})\left\{x(s_{i};\theta)f^{\prime}(x(s_{i};\theta))-f(x(s_{i};\theta) )\right\}\right], \tag{21}\]
where \(D_{f}(P\|Q)\equiv\sum_{i}Q(s_{i})f\left(P(s_{i})/Q(s_{i})\right)\) is the \(f\)-divergence. To connect this inequality to the Renyi entropy, we choose the convex function \(f(x)\) as
\[f(x)=\frac{x^{\alpha}-\alpha x-(1-\alpha)}{\alpha(\alpha-1)} \tag{22}\]
for \(\alpha\neq 0,1\). Then, the \(f\)-divergence can be rewritten as
\[D_{f}(P\|Q)=\frac{1}{\alpha(\alpha-1)}\left[\sum_{i}\frac{P^{\alpha}(s_{i})} {Q^{\alpha-1}(s_{i})}-1\right]. \tag{23}\]
We can make a relation between the \(f\)-divergence and the Renyi-divergence \(D_{\alpha}(P\|Q)\) as
\[\begin{split}D_{\alpha}(P\|Q)&\equiv\frac{1}{\alpha-1}\ln\left[\sum_{i}\frac{P^{\alpha}(s_{i})}{Q^{\alpha-1}(s_{i})}\right]\\ &=\frac{1}{\alpha-1}\ln\left[\alpha(\alpha-1)D_{f}(P\|Q)+1\right]\.\end{split} \tag{24}\]
By choosing \(Q\) as the uniform distribution, i.e., \(Q(s_{i})=1/d\) for all \(i\) with \(d=\sum_{i}1\), the Renyi divergence is reduced to the Renyi entropy as
\[D_{\alpha}(P\|Q)=-H_{\alpha}(P)+\ln d, \tag{25}\]
where the Renyi entropy is defined as \(H_{\alpha}(P)\equiv\frac{1}{1-\alpha}\ln\sum_{i}P^{\alpha}(s_{i})\). Plugging Eqs. (22), (23), and (25) into Eq. (21), and substituting \(x(s_{i};\theta)\) with \(Ne^{h(s_{i};\theta)}\), Eq. (21) can be arranged as
\[\frac{e^{(1-\alpha)H_{\alpha}(P)}}{\alpha(1-\alpha)}\leq\sum_{i}P(s_{i})\frac{ e^{(\alpha-1)h(s_{i};\theta)}}{1-\alpha}+\sum_{i}\frac{e^{\alpha h(s_{i};\theta)}} {\alpha}. \tag{6}\]
The quantum generalization of the Renyi entropy for a given density matrix \(\hat{\rho}\) is defined as \(S_{\alpha}(\hat{\rho})\equiv\frac{1}{1-\alpha}\ln\mathrm{tr}\{\hat{\rho}^{\alpha}\}=\frac{1}{1-\alpha}\ln\left(\sum_{i}\lambda_{i}^{\alpha}\right)\). By noting that the probability distribution \(P_{i}=\mathrm{tr}\{\hat{\rho}\Pi_{i}\}\) for a set of measurement operators \(\{\Pi_{i}\}\) is always majorized by the distribution of eigenvalues \(\lambda_{i}\) and that the Renyi entropy is a Schur-concave function, we obtain
\[H_{\alpha}(P)\geq S_{\alpha}(\hat{\rho}).\]
In particular, the inequality is saturated when the measurement basis is the eigenbasis of the density matrix \(\hat{\rho}\). By taking \(P_{\hat{V}}(s_{i};\Theta_{Q})=\langle s_{i}|\hat{V}(\Theta_{Q})\hat{\rho}\hat {V}^{\dagger}(\Theta_{Q})|s_{i}\rangle=\mathrm{tr}\{\hat{\rho}_{\hat{V}}\Pi_{i}\}\) and subtracting \(1/[\alpha(1-\alpha)]\) from both sides of Eq. (6), we can reach the final result as
\[\frac{e^{(1-\alpha)S_{\alpha}(\hat{\rho})}-1}{\alpha(1-\alpha)}\leq\frac{e^{( 1-\alpha)H_{\alpha}(P_{\hat{V}}(s_{i};\Theta_{Q}))}-1}{\alpha(1-\alpha)}\leq \sum_{i}P_{\hat{V}}(s_{i};\Theta_{Q})\frac{e^{(\alpha-1)h(s_{i};\Theta_{N})}- 1}{(1-\alpha)}+\sum_{i}\frac{e^{\alpha h(s_{i};\Theta_{N})}-1}{\alpha}, \tag{7}\]
which is Eq. (6). Note that Eq. (7) is reduced to the inequality for the von Neumann entropy (4) in the \(\alpha\to 1\) limit.
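The majorization step above can also be checked numerically: the Renyi entropy of the outcome distribution in any measurement basis upper-bounds the quantum Renyi entropy, with equality in the eigenbasis. A minimal sketch (NumPy; the random basis and \(\alpha=2\) are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, alpha = 8, 2.0

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

def renyi(p, alpha):
    p = p[p > 1e-14]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

S_alpha = renyi(np.linalg.eigvalsh(rho), alpha)   # quantum Renyi entropy S_alpha(rho)

# A random measurement basis (unitary from the QR decomposition)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
P = np.real(np.einsum('ij,jk,ki->i', Q.conj().T, rho, Q))  # outcome probabilities
assert renyi(P, alpha) >= S_alpha - 1e-10          # H_alpha(P) >= S_alpha(rho)

# Measuring in the eigenbasis of rho saturates the inequality
V = np.linalg.eigh(rho)[1]
P_opt = np.real(np.einsum('ij,jk,ki->i', V.conj().T, rho, V))
assert abs(renyi(P_opt, alpha) - S_alpha) < 1e-8
print(S_alpha, renyi(P, alpha))
```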
## Appendix C details of VQSE
In this section, we illustrate the details of VQSE [23]. VQSE exploits the relationship between majorization and diagonalization. The cost function is given as the expectation value of an artificial Hamiltonian
\[\hat{H}\equiv (1-t)\hat{H}_{L}+t\hat{H}_{G}, \tag{8}\]
where \(\hat{H}_{L}\equiv\hat{I}-\frac{1}{2}\sum_{j=1}^{\ell}r_{j}\hat{\sigma}_{j}^{z}\) is a local Hamiltonian designed to mitigate barren plateaus, and \(\hat{H}_{G}\equiv\hat{I}-\sum_{s_{j}\in S}q_{j}|s_{j}\rangle\langle s_{j}|\) is a global Hamiltonian. Here \(t\), \(r_{j}\), and \(q_{j}\) are real parameters, \(\hat{I}\) is the identity operator, and \(\ell\) is the number of qubits. \(S\) is the set of the \(m\) most frequently observed bitstrings from the quantum circuit, where \(m\) is a positive integer less than \(2^{\ell}\). At the start of the optimization process \(t=0\), and \(t\) is increased every fixed number of iterations up to \(t=1\); the set \(S\) is updated at the same interval. There exist many choices for \(q_{j}\) and \(r_{j}\). The choice used in reference [23] is \(r_{j}=r_{1}+(j-1)\delta\), with the eigenvalues of \(\hat{H}_{G}\) on the strings in \(S\) set equal to the \(\ell+1\) lowest eigenvalues of \(\hat{H}_{L}\), and all other eigenvalues of \(\hat{H}_{G}\) set to 1. This choice of \(r_{j}\) ensures that the \(\ell+1\) lowest eigenvalues of \(\hat{H}_{L}\) are non-degenerate, so that \(\ell+1\) eigenvalues can be estimated. In our research, we set \(r_{1}=0.2\), \(\delta=0.01\), and \(\ell=n\). We set the learning rate for training VQSE to 0.05 and update \(t\) and \(S\) every 25 iterations.
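As a concrete illustration of this construction, the sketch below (our own NumPy rendering for a small \(\ell\); in an actual VQSE run the set \(S\) would be refreshed from measurement statistics rather than fixed) builds \(\hat{H}_{L}\), \(\hat{H}_{G}\), and the time-dependent cost Hamiltonian of Eq. (8):

```python
import numpy as np
from functools import reduce

ell = 3                            # number of qubits
r = 0.2 + 0.01 * np.arange(ell)    # r_j = r_1 + (j-1)*delta with r_1 = 0.2, delta = 0.01

I2, Z = np.eye(2), np.diag([1.0, -1.0])

def sigma_z(j):
    # sigma^z acting on qubit j, identity on the others
    return reduce(np.kron, [Z if k == j else I2 for k in range(ell)])

# H_L = I - (1/2) sum_j r_j sigma_j^z, diagonal in the computational basis
H_L = np.eye(2 ** ell) - 0.5 * sum(r[j] * sigma_z(j) for j in range(ell))
lowest = np.sort(np.diag(H_L))     # its (non-degenerate) lowest eigenvalues

def H_G(S):
    # S: indices of the m (<= ell + 1) most frequent bitstrings; eigenvalues of
    # H_G on S are set to the lowest eigenvalues of H_L, all others to 1
    diag = np.ones(2 ** ell)
    for j, s in enumerate(S):
        diag[s] = lowest[j]
    return np.diag(diag)

def H_total(t, S):                 # Eq. (8)
    return (1.0 - t) * H_L + t * H_G(S)

def cost(rho_V, t, S):             # C = tr{ rho_V H(t) }
    return np.real(np.trace(rho_V @ H_total(t, S)))
```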
|
2304.01335 | Charting the Topography of the Neural Network Landscape with
Thermal-Like Noise | The training of neural networks is a complex, high-dimensional, non-convex
and noisy optimization problem whose theoretical understanding is interesting
both from an applicative perspective and for fundamental reasons. A core
challenge is to understand the geometry and topography of the landscape that
guides the optimization. In this work, we employ standard Statistical Mechanics
methods, namely, phase-space exploration using Langevin dynamics, to study this
landscape for an over-parameterized fully connected network performing a
classification task on random data. Analyzing the fluctuation statistics, in
analogy to thermal dynamics at a constant temperature, we infer a clear
geometric description of the low-loss region. We find that it is a
low-dimensional manifold whose dimension can be readily obtained from the
fluctuations. Furthermore, this dimension is controlled by the number of data
points that reside near the classification decision boundary. Importantly, we
find that a quadratic approximation of the loss near the minimum is
fundamentally inadequate due to the exponential nature of the decision boundary
and the flatness of the low-loss region. This causes the dynamics to sample
regions with higher curvature at higher temperatures, while producing
quadratic-like statistics at any given temperature. We explain this behavior by
a simplified loss model which is analytically tractable and reproduces the
observed fluctuation statistics. | Theo Jules, Gal Brener, Tal Kachman, Noam Levi, Yohai Bar-Sinai | 2023-04-03T20:01:52Z | http://arxiv.org/abs/2304.01335v2 | # Charting the Topography of the Neural Network Landscape with Thermal-Like Noise
###### Abstract
The training of neural networks is a complex, high-dimensional, non-convex and noisy optimization problem whose theoretical understanding is interesting both from an applicative perspective and for fundamental reasons. A core challenge is to understand the geometry and topography of the landscape that guides the optimization. In this work, we employ standard Statistical Mechanics methods, namely, phase-space exploration using Langevin dynamics, to study this landscape for an over-parameterized fully connected network performing a classification task on random data. Analyzing the fluctuation statistics, in analogy to thermal dynamics at a constant temperature, we infer a clear geometric description of the low-loss region. We find that it is a low-dimensional manifold whose dimension can be readily obtained from the fluctuations. Furthermore, this dimension is controlled by the number of data points that reside near the classification decision boundary. Importantly, we find that a quadratic approximation of the loss near the minimum is fundamentally inadequate due to the exponential nature of the decision boundary and the flatness of the low-loss region. This causes the dynamics to sample regions with higher curvature at higher temperatures, while producing quadratic-like statistics at any given temperature. We explain this behavior by a simplified loss model which is analytically tractable and reproduces the observed fluctuation statistics.
The optimization of neural networks lies at the core of modern learning methodology, with the goal of minimizing a loss function that quantifies model performance. Naturally, the landscape of the loss function plays a critical role in guiding the optimization process and its properties are closely linked to its performance and generalization capacities [1, 2]. However, the high dimensionality of the parameter space, the non-convexity of the loss function, and the presence of various sources of noise make it challenging to characterize its geometry [3, 4] and subsequently to analyze the optimization process over this complicated landscape.
Previous works have studied the topography of the loss landscape and found a number of interesting features. Firstly, it was established that there exists a wealth of global minima, all connected by low-loss paths, a phenomenon referred to as Linear Mode Connectivity [5, 6, 7, 8, 9, 10, 11]. In the final stages of training the network explores this low-loss region and gradient descent predominantly occurs within a small subspace of weight space [12, 13, 14]. In addition, it was seen that the curvature of the explored region sharpens progressively and depends on the learning rate through a feedback mechanism termed "Edge of Stability" [15, 16, 17].
In this work we study the low loss region by injecting noise in a controlled manner during training. Many previous works have studied the importance of noise in the optimization process, modeling it as a stochastic process. Noise sources might include sampling noise in the estimation of the gradient [18, 19, 20], the numerical discretization of gradient flow [21], noisy data [22, 23], stochastic regularization schemes [24] or other sources. Each such noise source gives rise to different noise properties, which qualitatively affect the optimization dynamics [25, 26].
We take a different approach than those described above: we do not use noise to mimic noisy training dynamics, but rather as a probe that allows inferring quantitative geometrical insights about the loss landscape [22, 23]. This is done using standard tools of statistical physics to analyze loss fluctuations, and by ensuring that the thermal noise is the only noise source in the system, so that the stochasticity is completely known.
To study the local landscape, we let the system evolve, starting at the minimum, under over-damped Langevin dynamics, defined by the stochastic differential equation
\[\mathrm{d}\theta_{t}=-\nabla_{\theta}\mathcal{L}(\theta_{t})\,\mathrm{d}t+ \sqrt{2T}\,\mathrm{d}W_{t}, \tag{1}\]
where \(\theta\in\mathbb{R}^{N}\) is the vector of the neural weights and biases, \(\mathcal{L}\) is the loss function (to be specified below), \(T\) the exploration temperature and \(W_{t}\) is a standard \(N\)-dimensional Wiener process. In terms of statistical physics, this is analogous to a system whose phase space coordinates are \(\theta\) and which is described by a Hamiltonian \(\mathcal{L}(\theta)\) in contact with a thermal bath at temperature \(T\). As is well known [27], the long time limit of the probability distribution of \(\theta\) is a Boltzmann distribution, \(p(\theta)\propto e^{-\mathcal{L}(\theta)/T}\), which balances between the gradient and the random noise terms in Eq. (1).
Specifically, we explore the topography of the loss function in the vicinity of a typical minimum, for a simple fully connected network performing a classification task of random data in the over-parameterized regime. Our analysis shows that, for the networks that we studied,
the minimum is constrained only in a small number of directions in weight-space, as was previously observed in various contexts and is generally expected in the over-parameterized regime [28, 29, 12, 13, 7, 14, 2]. Furthermore, and in line with previous studies, we find that at a given exploration temperature the fluctuations behave as if \(\mathcal{L}\) is effectively quadratic, with \(N_{c}\) independent degrees of freedom with non-vanishing stiffness. In other words, \(N_{c}\) is the co-dimension of the low-loss manifold in the vicinity of the minimum, which our method allows us to measure directly.
However, contrary to previous works and quite counter-intuitively, we show that this picture _does not_ arise from a simple quadratic approximation of \(\mathcal{L}\) around its minimum, as one might naively interpret these observations. Instead, we find that the stiffness associated with the \(N_{c}\) constrained eigendirections depends linearly on \(T\) over many orders of magnitude, which is a distinctly nonlinear feature. As we explain below, this dependence stems from the exponential nature of the "confining walls" surrounding the low-loss region, and the flatness of the landscape far from these walls. This exponential nature is also what gives rise to the seemingly quadratic properties of the loss fluctuations, but this happens through a delicate balance between the exponential walls and the noise, which cannot be captured with a model of a quadratic loss function.
## I Exact predictions for a quadratic loss
Before describing our results, it would be useful to remind the reader what they would expect to observe in the case of a positive-definite quadratic loss function, \(\mathcal{L}=\sum_{i=1}^{N_{c}}\frac{1}{2}k_{i}\Theta_{i}^{2}\), where \(\{\Theta_{i}\}\) are the coefficients of the Hessian's eigenvectors and \(\{k_{i}\}\) are their associated stiffnesses. \(N_{c}\) is the number of dimensions with non-vanishing stiffness. Plugging this into Eq. (1) yields a multivariate Ornstein-Uhlenbeck process which is fully tractable analytically [31]. We briefly summarize here the main results, whose derivations can be found in the supplementary information.
First, the fluctuations of \(\mathcal{L}\) follow a \(\Gamma\)-distribution
\[P(\mathcal{L};\alpha,\beta)=\frac{\beta^{\alpha}\mathcal{L}^{\alpha-1}}{ \Gamma(\alpha)}\exp(-\beta\mathcal{L}), \tag{2}\]
where \(\alpha=N_{c}/2\), \(\beta=1/T\) and \(\Gamma\) is the Gamma function.
Second, a direct corollary of Eq. (2) is that the mean and standard deviation of \(\mathcal{L}\) are both proportional to \(T\):
\[\begin{split}\mu_{\mathcal{L}}&=\left\langle \mathcal{L}\right\rangle=\tfrac{1}{2}N_{c}T,\\ \sigma_{\mathcal{L}}^{2}&=\left\langle\mathcal{L} ^{2}\right\rangle-\left\langle\mathcal{L}\right\rangle^{2}=\tfrac{1}{2}N_{c} T^{2}\.\end{split} \tag{3}\]
This result, a standard example of the equipartition theorem [32], means that each eigendirection contributes \(\tfrac{1}{2}T\) to the total loss, regardless of its associated stiffness. The "heat capacity" \(C_{h}=\partial\mu_{\mathcal{L}}/\partial T\) simply equals \(N_{c}/2\) and is \(T\)-independent.
Lastly, in terms of dynamics, the evolution of each eigendirection is uncorrelated from the other ones and shows an exponentially decaying correlation. This is quantified by the two-point correlation
\[\chi_{g}(t)=\sigma_{g}^{-2}\left[\left\langle g(t_{0})g(t_{0}+t)\right\rangle -\mu_{g}^{2}\right] \tag{4}\]
where \(g\) is any time-dependent quantity. For a quadratic loss we have \(\chi_{\Theta_{i}}=\exp\left(-|t|/\tau_{i}\right)\) and the correlation time \(\tau_{i}\) is simply the inverse of the stiffness \(\tau_{i}=1/k_{i}\). We note that in these terms, the stiffness of the "soft directions" does not need to strictly vanish - \(k_{i}\) should only be low enough so that the correlation time \(\tau_{i}\) would be so long that the dynamics in this eigendirection would not equilibrate during the simulation time. The auto-correlation of \(\mathcal{L}\) is a sum of such exponentials, \(\chi_{\mathcal{L}}=\sum_{i}e^{-k_{i}|t|}\).
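These predictions are straightforward to verify in simulation. The sketch below (NumPy; the stiffnesses, temperature, and step count are illustrative) integrates Eq. (1) for a quadratic loss with the Euler-Maruyama scheme and checks the equipartition results of Eq. (3):

```python
import numpy as np

rng = np.random.default_rng(0)
N_c, T, eta, steps = 20, 1e-3, 1e-2, 200_000
k = rng.uniform(0.5, 2.0, size=N_c)   # stiffnesses of the constrained directions

theta = np.zeros(N_c)
losses = []
for s in range(steps):
    grad = k * theta                              # gradient of L = (1/2) sum k_i Theta_i^2
    theta += -eta * grad + np.sqrt(2 * eta * T) * rng.normal(size=N_c)
    if s > steps // 2:                            # discard the equilibration transient
        losses.append(0.5 * np.sum(k * theta ** 2))

losses = np.array(losses)
print(losses.mean(), 0.5 * N_c * T)       # equipartition: mu_L = N_c T / 2
print(losses.var(), 0.5 * N_c * T ** 2)   # sigma_L^2 = N_c T^2 / 2
```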
## II Numerical experiment
We consider a classification problem with \(C=3\) classes using a multi-layer perceptron [33], represented by the function \(f(x;\theta)\!:\!\mathbb{R}^{d}\to\mathbb{R}^{C}\). The network is trained on a training dataset \(\{x^{i},y^{i}\}_{i=1}^{D}\) where \(x^{i}\in\mathbb{R}^{d}\) are the inputs and \(y^{i}\in\{0,1\}^{C}\) are one-hot vectors indicating a randomly assigned correct class. The \(\{x^{i}\}\) are drawn from a standard \(d\)-dimensional normal distribution. Full details regarding the architecture of the network and the dataset are given in the supplementary information. The network's output is transformed to a classification prediction via a softmax function. That is, the estimated probability that an input \(x^{i}\) belongs to class \(k\) is
\[p_{k}(x^{i};\theta)=\frac{\exp(f(x^{i};\theta)_{k})}{\sum_{m=1}^{C}\exp(f(x^{i };\theta)_{m})}\, \tag{5}\]
where \(f(\cdot)_{k}\) denotes the \(k\)-th entry in \(f\). Finally, the loss is taken to be the cross entropy between the predicted and true labels:
\[\mathcal{L}=\frac{1}{D}\sum_{i=1}^{D}\ell(x^{i},y^{i},\theta)\,\ \ \ \ell=-\sum_{k=1}^{C}y_{k}^{i}\log(p_{k}(x^{i};\theta)). \tag{6}\]
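For concreteness, the setup of Eqs. (5) and (6) can be written in a few lines of PyTorch; note that the layer widths below are placeholders, since the exact architecture is specified only in the supplementary information:

```python
import torch

d, C, D = 10, 3, 300           # input dim, classes, dataset size; d is a placeholder
X = torch.randn(D, d)           # inputs drawn from a standard normal distribution
y = torch.randint(0, C, (D,))   # randomly assigned labels

# A small fully connected network; the hidden width is illustrative only
model = torch.nn.Sequential(
    torch.nn.Linear(d, 30), torch.nn.ReLU(),
    torch.nn.Linear(30, C),
)
# CrossEntropyLoss applies the softmax of Eq. (5) internally, yielding Eq. (6)
loss_fn = torch.nn.CrossEntropyLoss()
print(loss_fn(model(X), y))
```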
Our main objective is to explore the topography of the loss function in the vicinity of a typical minimum. To find such a minimum, we train the network using the ADAM optimizer [34] for a predefined number of epochs. Since the problem is over-parameterized, after some training the data is perfectly fitted and the loss becomes essentially zero, up to numerical noise. This stage is denoted as "Adam" in Fig. 1a.
To explore the vicinity of this minimum, we then let the system evolve under Eq. (1) using the Euler-Maruyama discretization scheme [35],
\[\theta_{s+1}=\theta_{s}-\eta\nabla\mathcal{L}(\theta_{s})+\sqrt{2\eta T}\xi_{s }\, \tag{7}\]
where \(s\) is the step number, \(\eta=t_{s+1}-t_{s}\) is the discrete time step and \(\xi_{s}\) is a Gaussian random variable with zero mean and unit variance. This exploration is denoted as "Langevin" in Fig. 1. It is seen that the loss increases quickly before reaching a \(T\)-dependent steady state ("thermodynamic equilibrium").
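Continuing the PyTorch sketch above, the two-phase protocol (ADAM minimization followed by the Langevin exploration of Eq. (7)) may be implemented as follows; the hyperparameters are illustrative:

```python
# Phase 1: find a minimum with ADAM (the "Adam" stage of Fig. 1a)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2000):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Phase 2: Langevin exploration, Eq. (7); T and eta are illustrative
T, eta = 1e-4, 1e-2
trace = []
for step in range(100_000):
    model.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p += -eta * p.grad + (2 * eta * T) ** 0.5 * torch.randn_like(p)
    trace.append(loss.item())   # loss fluctuations to be analyzed in steady state
```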
We stress that while the parameter \(\eta\) is reminiscent of the "learning rate" in the machine learning literature, the two are not exactly equivalent. Importantly, in our formalism \(\eta\) serves only as the time discretization and appears explicitly in the noise term, whose \(\sqrt{\eta}\) scaling is necessary in order for the dynamics to converge to the Boltzmann distribution in the limit \(\eta\to 0\)[27]. We also note that the convergence of the probability distribution \(p(\theta)\) in the limit \(\eta\to 0\) is a different concept than the convergence of the gradient descent trajectory to that of gradient flow [21]. As such, \(\eta\) is not a parameter of our exploration protocol but rather of the numerical implementation of Eq. (1), and meaningful results should not depend on \(\eta\).
## III Results: Loss fluctuation statistics
We begin by inspecting the moments of the loss fluctuations, \(\mu_{\mathcal{L}}\) and \(\sigma_{\mathcal{L}}\), shown in Fig. 1c. It is seen that both of them scale linearly with \(T\). First, we note that our measurements of \(\mu_{\mathcal{L}}\) at a given temperature are independent of \(\eta\), as expected. Furthermore, a basic prediction of statistical mechanics relates the variance of \(\mathcal{L}\) in equilibrium with the heat capacity, namely \(\sigma_{\mathcal{L}}^{2}=T^{2}C_{h}(T)\)[32]. In our case of a \(T\)-independent heat capacity this relation reads \(\sigma_{\mathcal{L}}=\sqrt{C_{h}}\,T\), which is numerically verified in Fig. 1c. These results support our claim that the dynamics are thermally equilibrated and follow Boltzmann statistics.
Going beyond the moments, Fig. 1b shows the full distribution of the loss fluctuations, which are well described by a Gamma distribution. Fig. 1d shows the distribution parameters \(\alpha\) and \(\beta\), defined in Eq. (2), which are estimated from the empirical loss distributions using standard maximum likelihood estimators. It is seen that the distribution parameter \(\beta\) agrees with the exploration temperature \(T\), i.e. \(\beta T\approx 1\), over several orders of magnitude in \(T\) and independently of \(\eta\). The number of stiff dimensions, \(N_{c}=2\alpha\), seems to weakly depend on the temperature, decreasing as \(T\) grows. Lastly, we note that the linear dependence of \(\mu_{\mathcal{L}}\) and \(\sigma_{\mathcal{L}}\) on \(T\) is a property of the low-loss region explored by the dynamics at low \(T\), and it is not observed if the thermal dynamics are started immediately after initializing the network. This is shown explicitly in the supplementary information.
All these observations are _quantitatively_ consistent with a picture of a (locally) quadratic loss function. In other words, at each temperature we can interpret the loss statistics as if they were generated by an effective quadratic loss, which has a \(T\)-dependent number of stiff directions, \(N_{c}(T)=2\alpha(T)\). This number, \(N_{c}\approx 20-60\), is significantly lower than both the dimensionality of \(\theta\) (\(N=900\)) and the number of elements in the dataset, \(D=300\). It is also much _larger_ than the number of classes \(C=3\), which was suggested by Fort et al. [7] as the number of outlying large Hessian eigenvalues.
We find that the effective dimension of the low loss manifold is directly related to the number of points that lie close to the decision boundary. To demonstrate this, we examine the loss \(\mathcal{L}\) of Eq. (6) as a sum over the losses of individual sample points \(\mathcal{L}=D^{-1}\sum_{i}\ell_{i}\). We find numerically that most of the sample points are well classified, contributing negligibly to the total loss. A common way to quantify how many points contribute non-negligibly is the ratio of the \(L_{1}\) and \(L_{2}\) norms of the loss
Figure 1: (a) Observed loss dynamics during the exploration. First, the network is trained using the ADAM algorithm (black line). Then, the learning algorithm is changed to Eq. (1), where the noise amplitude is controlled by a temperature-like parameter \(T\) (colored lines). Each curve corresponds to a different temperature, all using \(\eta=10^{-2}\). (b) Distribution of the loss fluctuation in steady state normalized by the temperature. For each distribution, the dashed black line corresponds to a gamma distribution, cf. Eq. (2), whose parameters are found using maximum likelihood estimation. The inset shows the same data in log-linear axes. (c) Temperature dependence of \(\mu_{\mathcal{L}}\) (circles) and \(\sigma_{\mathcal{L}}\) (squares). Each point corresponds to an average over multiple runs. The solid line shows a fit to \(\mu_{\mathcal{L}}=C_{h}T\). The dashed line shows the equilibrium prediction \(\sigma_{\mathcal{L}}=\sqrt{C_{h}}T\) with the obtained value of \(C_{h}\). (d) Corresponding parameters \(\alpha\) and \(\beta\) for the gamma distribution. The symbols and error bars show the average and standard deviation, respectively, over multiple runs.
vector [36],
\[\phi\left(\{\ell_{i}\}\right)=\frac{\left(\sum_{i=1}^{D}\ell_{i}\right)^{2}}{\sum _{i=1}^{D}\ell_{i}^{2}}\, \tag{8}\]
where \(\ell_{i}\) is the contribution of the \(i\)-th example to the loss. \(\phi\) is a measure of sparsity, which counts how many entries in \(\{\ell_{i}\}\) contribute appreciably to the sum. For instance, if \(\ell_{1}=\ell_{2}=\cdots=\ell_{k}\) and all other \(\ell_{i}\) vanish then \(\phi=k\). We calculate \(\phi\) for random snapshots of the network during the dynamics, and plot the averaged results in Fig. 2a. It is seen that \(\phi\), the effective number of sample points pinning the decision boundary, quantitatively agrees with \(\alpha\), half the effective number of constrained dimensions in weight space.
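The participation ratio of Eq. (8) is simple to compute from per-sample losses; a short sketch:

```python
import numpy as np

def sparsity(per_sample_losses):
    """Participation ratio of Eq. (8): effective number of contributing samples."""
    l = np.asarray(per_sample_losses, dtype=float)
    return l.sum() ** 2 / (l ** 2).sum()

print(sparsity([1.0] + [1e-6] * 299))       # ~1: a single sample pins the boundary
print(sparsity([1.0] * 20 + [0.0] * 280))   # 20 equal contributors give phi = 20
```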
### Temperature dependence
However, while the time-independent statistics suggest an effective quadratic loss, the dynamic properties show that the picture is not as simple. Examining again Fig. 1a, one may notice that the temporal dynamics of \(\mathcal{L}\) seem to slow down at lower temperatures. This is readily verified by looking at the loss auto-correlation, cf. Eq. (4), which shows a distinct slowing down at low \(T\), as seen in Fig. 2c. To quantify this, we define the correlation half-time \(\tau_{\frac{1}{2}}\) as the lag time at which \(\chi_{\mathcal{L}}\) decays to \(\frac{1}{2}\). Plotting \(\tau_{\frac{1}{2}}\) as a function of temperature, cf. Fig. 2b, shows a clear dependence \(\tau_{\frac{1}{2}}\propto T^{-1}\).
The fact that \(\tau_{\frac{1}{2}}\) scales as \(T^{-1}\) raises three interesting insights. First, and most importantly, it is inconsistent with a picture of quadratic loss, which implies that the dynamic timescales are \(T\)-independent, \(\tau_{i}=k_{i}^{-1}\). In contrast, we observe that \(\tau_{\frac{1}{2}}\) changes over 4 orders of magnitude with \(T\).
Secondly, while the quadratic analogy might not hold, one may still relate the temporal timescale with the local stiffness, i.e. \(k\sim\tau^{-1}\). If this scaling relation holds, we should expect the eigenvalues of the loss Hessian to scale linearly with \(T\). To test this, we measured the Hessian of the loss at 1000 randomly selected points during the exploration at steady state and calculated their eigenvalues using standard numerical procedures [37; 38]. The distribution of these eigenvalues is plotted in Fig. 3a, clearly showing a linear scaling with \(T\).
These observations are manifestly inconsistent with a picture of an effectively quadratic loss: in the quadratic picture \(\mu_{\mathcal{L}}\) increases linearly with \(T\) because the system climbs slightly higher up the confining parabolic walls, whose stiffness is constant. Our observation suggests that the picture is quite different: \(\mu_{\mathcal{L}}\) increases in tandem with the stiffness of the confining walls, and due to a delicate balance the net result is indistinguishable from a quadratic picture, as far as static properties are considered. Below we explain this balance and show that it is related to the exponential nature of the confining walls.
Lastly, we remark that the relation \(\tau_{\frac{1}{2}}\sim T^{-1}\) gives rise to a distance scale \(L\), defined by \(L^{2}=T\tau_{\frac{1}{2}}\). \(L\) is the distance, in parameter space, that \(\theta\) would diffuse over the time \(\tau_{\frac{1}{2}}\), if subject only to isotropic Gaussian noise. Since the diffusion coefficient scales with \(T\), \(L\) is \(T\)-independent. Furthermore, since \(\tau_{\frac{1}{2}}\) is the correlation time, one can also interpret \(L\) as a correlation length, or the distance that two nearby networks need to diffuse away from each other in order for their loss to decorrelate, i.e. produce significantly different predictions. Since this distance scale does not depend on \(T\), we conclude that it is an intrinsic property of the loss landscape, i.e. a characteristic length scale in weight space.
To demonstrate the effect of this length scale, we performed another numerical experiment: starting from the minimum (the end of training phase I in Fig. 2), we let the system diffuse freely, i.e., evolve in time according to Eq. (1) but without the gradient term. This procedure samples points uniformly and isotropically around the starting point. Indeed, Fig. 3b shows that for distances smaller than \(L\) the loss does not deviate significantly from its minimum value. At larger distances, \(\mathcal{L}\) changes
Figure 2: (a) The sparsity \(\phi\), cf. Eq. (8), as a function of \(T\). In gray we overlay our estimations of \(\alpha\), plotted in Fig. 1. It is seen that \(\phi\) quantitatively agrees with \(\alpha\), half the effective number of constrained dimensions of the low loss manifold. (b) Temperature dependence of \(\tau_{\frac{1}{2}}\). The measurement was repeated over multiple runs, and the plot shows the average (points) and maximum and minimum values (color shading). The black line shows a power law dependence \(\tau_{\frac{1}{2}}=\frac{L^{2}}{T}\). (c) Autocorrelation of the loss (cf. Eq. (4)) in steady-state. The correlation half-time \(\tau_{\frac{1}{2}}\) is the time for which \(\chi_{\mathcal{L}}=0.5\). It is seen that the auto-correlation decays logarithmically at large \(\Delta t\). (d) The same data as in panel c, plotted as a function of the rescaled time lag \(2T\Delta t\). The curves for different \(T\) collapse to a single curve, except at high temperature and long times.
by orders of magnitude over a relatively small distance.
### Summary of the numerical observations
We summarize here the main properties of the loss fluctuations in the vicinity of the minimum, described above:
* Both \(\mu_{\mathcal{L}}\) and \(\sigma_{\mathcal{L}}\) scale linearly with the temperature \(T\), as one would expect from a quadratic loss, cf. Fig. 1c.
* Interpreting the fluctuations as if they were generated from a quadratic loss, the effective number of degrees of freedom is found to be small and weakly \(T\)-dependent, cf. Fig. 1d. In addition, it is closely related to the number of sample points that lie close to the decision boundary, cf. Fig. 2a.
* The correlation time \(\tau_{1/2}\) scales as \(1/T\) and the Hessian eigenvalues scale as \(T\), which is inconsistent with a quadratic loss and gives rise to an emergent \(T\)-independent length scale \(L\), cf. Fig. 2 and Fig. 3.
## IV An analytical toy model
In order to explain our numerical observations, one needs to inspect the cross entropy loss Eq. (6). For simplicity, consider a network performing binary classification on a single training example \(\{x,y\}\in\mathbb{R}\times\mathbb{R}\). Since the network is overparameterized, the networks in the low-loss region that we explore classify most of the training samples perfectly. These examples contribute negligibly to the total gradient. However, some samples lie close to the decision boundary. We focus on one such sample \(\{x,y\}\) and assume without loss of generality that the correct class is \(y=1\). Taking a linear approximation of \(f\), the contribution of this sample to the loss is (see supplementary material for derivation)
\[\ell(x;\theta)=\log\left(1+e^{f(x;\theta)}\right)\,\ \ \ \ f=\sum_{i}a_{i} \theta_{i}+b \tag{9}\]
In this description, the only property of \(\theta_{i}\) that affects the loss is its projection on \(a_{i}\), the direction in weight-space that moves the decision boundary towards the sample point. Since all other directions in weight space are irrelevant, we ignore them and examine a one-dimensional loss function
\[\ell_{1D}(\theta)=\log\left(1+e^{a\theta+b}\right)\approx Be^{a \theta}. \tag{10}\]
The approximation in Eq. (10) holds in the vicinity of the minimum because the point is well classified and the exponent is expected to be small. We define \(B\equiv e^{b}\), and assume for concreteness that \(a>0\).
The statistical mechanics of \(\ell_{1D}\) can be obtained in closed form by calculating the partition function \(Z(\beta)=\frac{1}{\theta_{0}}\int_{-\infty}^{\infty}e^{-\beta\ell_{1D}(x; \theta)}d\theta\), where \(\theta_{0}\) is a resolution scale required to ensure the partition function is dimensionless. Formally, Eq. (10) is minimized at \(\theta\to-\infty\), which effectively sets the decision boundary at infinity and prevents the integral which defines \(Z\) from converging. To avoid this unphysical behavior we impose a hard cut-off
Figure 3: (a) The cumulative distribution function of the Hessian eigenvalues sampled during dynamics with \(\eta=10^{-2}\), for various values of \(T\). Very small negative eigenvalues are excluded from this plot. It is seen that at higher temperatures the network explores regions with larger eigenvalues. Inset: the same data plotted as function of \(\lambda/T\) shows a collapse of the distributions, suggesting that the eigenvalues scale linearly with \(T\). (b) The loss as a function of distance in weight space during the exploration. The warm-colored curves show Langevin exploration (same color code as panel a). The black line shows the behavior in the case of pure diffusion (without gradient descent). The dashed line marks \(L\), the characteristic distance in weight space obtained from Fig. 2b.
Figure 4: The loss function \(\ell_{1D}\), cf. Eq. (10), is plotted in blue. The probability distribution of \(\theta\), \(p(\theta)\propto e^{-\ell_{1D}/T}\), is shown for three temperatures. It is seen that the probability distributions are qualitatively different from the probability distribution generated by a quadratic loss, \(p_{Q}\), which is Gaussian. For comparison, we plot \(p_{Q}\) obtained from a quadratic approximation at \(T=10^{-1}\). For this figure we chose \(B=1\) and \(\theta_{*}=20\).
at \(\theta=-\theta_{*}\), where \(\theta_{*}>0\), which would realistically arise when the decision boundary wanders far away and meets another sample point.
With this cutoff, the partition function \(Z(\beta)\) can be obtained analytically in closed form and consequently all other "thermodynamic" quantities can be calculated (see supplementary information for the derivations). The main finding is that this model reproduces the properties of the loss fluctuations described above. Namely, in the limit \(a\theta_{*}\gg 1\) and \(T\ll 1\), \(\mu_{\mathcal{L}}\), \(\sigma_{\mathcal{L}}\), and the average curvature all scale linearly with \(T\), up to logarithmic corrections:
\[\begin{split}&\mu_{\ell_{1D}}\simeq\frac{T}{a\theta_{*}-\gamma+ \log(T/B)}\,\\ &\sigma_{\ell_{1D}}^{2}\simeq\frac{T^{2}\left(a\theta_{*}-\gamma+ \log\left(T/B\right)-1\right)}{\left(a\theta_{*}-\gamma+\log\left(T/B\right) \right)^{2}}\,\\ & H_{\ell_{1D}}=\left\langle\nabla_{\theta}^{2}\ell_{1D}\right\rangle \simeq\frac{a^{2}T}{a\theta_{*}-\gamma+\log(T/B)}\.\end{split} \tag{11}\]
Here \(\gamma\simeq 0.577\) is the Euler-Mascheroni constant. Finally, because the loss is approximately exponential in \(\theta\), it features an intrinsic length scale \(L\simeq a^{-1}\). We note that this length scale depends on the gradient of the network and therefore in general might differ between two different sample points that reside near the decision boundary.
In Fig. 4, we show the full loss given in Eq. (10), and the resulting probability distribution \(p(\theta)\propto e^{-\ell_{1D}/T}\) for various temperatures. It is seen that, due to the flatness of the loss, \(p(\theta)\) is essentially constant at negative \(\theta\) and drops sharply at the decision boundary. As \(T\) grows, the probability explores regions with higher loss and, due to the exponential dependence on \(\theta\), higher curvature. We compare these results against a quadratic approximation for \(\ell_{1D}\), expanded around \(\theta_{0}\) defined by \(\ell_{1D}(\theta_{0})=\mu_{\ell_{1D}}(T)\). It is seen that a quadratic loss is an extremely poor approximation in the low temperature limit.
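The scaling of Eq. (11) can be verified directly by numerical quadrature of the Boltzmann average over the cut-off exponential loss; in the sketch below (parameter values are illustrative) the quadrature result closely tracks the asymptotic formula for \(\mu_{\ell_{1D}}\):

```python
import numpy as np
from scipy.integrate import quad

a, B, theta_star = 1.0, 1.0, 20.0
gamma = 0.5772156649  # Euler-Mascheroni constant

def mean_loss(T):
    ell = lambda th: B * np.exp(a * th)       # Eq. (10)
    w = lambda th: np.exp(-ell(th) / T)       # Boltzmann weight
    brk = [np.log(T / B) / a]                 # where the exponential wall kicks in
    Z, _ = quad(w, -theta_star, 10.0, points=brk, limit=200)
    num, _ = quad(lambda th: ell(th) * w(th), -theta_star, 10.0, points=brk, limit=200)
    return num / Z

for T in [1e-4, 1e-3, 1e-2]:
    asymptotic = T / (a * theta_star - gamma + np.log(T / B))   # Eq. (11)
    print(f"T={T:.0e}  numeric={mean_loss(T):.3e}  Eq.(11)={asymptotic:.3e}")
```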
## V Summary and Conclusions
To summarize our findings, we have used Langevin dynamics to investigate the geometry of the low-loss manifold of an overparameterized neural net. We find that the fluctuation statistics of the loss are a powerful probe that allows inferring geometrical insights about the loss topography. For the network studied here - an overparameterized fully connected neural net performing a classification task on randomly distributed data - the picture that emerges is that in the low loss region, which is explored at low temperatures, most of the sample points are well classified and do not contribute significantly to the loss. However, a small number of sample points "pin" the decision boundary, which fluctuates around them. At a given temperature, these fluctuations have the same statistics as fluctuations produced by a quadratic loss function, whose effective number of degrees of freedom is directly related to the number of data points constraining the decision boundary and can be immediately read off the fluctuation statistics.
However, we find that a quadratic description of the loss is fundamentally inadequate: the effective stiffness scales linearly with \(T\), and correspondingly the characteristic time scale of loss fluctuations grows at low temperatures as \(1/T\). These observations cannot be reconciled with a quadratic approximation of the loss. Rather, we suggest that this behavior is due to the exponential nature of the cross-entropy loss in the low \(T\) regime. As we demonstrate analytically, an exponential loss function in 1D reproduces the observed fluctuation statistics in the limit of low temperature. These conclusions, of course, pertain to the simplified case studied here - a fully connected network classifying random data. Understanding how they apply to structured data or more complicated network architectures is left for future studies.
## VI Acknowledgements
We thank Nadav Cohen, Boaz Barak, Zohar Ringel and Stefano Recanatesi for fruitful discussions. YBS was supported by research grant ISF 1907/22 and Google Gift grant. NL would like to thank the Milner Foundation for the award of a Milner Fellowship. TK would like to acknowledge Lineage logistics for their funding. TJ was partly supported by the Raymond and Beverly Sackler Post-Doctoral Scholarship.
|
2305.14405 | NeuralMatrix: Compute the Entire Neural Networks with Linear Matrix
Operations for Efficient Inference | The inherent diversity of computation types within the deep neural network
(DNN) models often requires a variety of specialized units in hardware
processors, which limits computational efficiency, increasing both inference
latency and power consumption, especially when the hardware processor needs to
support and execute different neural networks. In this study, we introduce
NeuralMatrix, which elastically transforms the computations of entire DNNs into
linear matrix operations. This transformation allows seamless execution of
various DNN models all with matrix operations and paves the way for running
versatile DNN models with a single General Matrix Multiplication (GEMM)
accelerator.Extensive experiments with both CNN and transformer-based models
demonstrate the potential of NeuralMatrix to accurately and efficiently execute
a wide range of DNN models, achieving 2.17-38.72 times computation efficiency
(i.e., throughput per power) compared to CPUs, GPUs, and SoC platforms. This
level of efficiency is usually only attainable with the accelerator designed
for a specific neural network. | Ruiqi Sun, Siwei Ye, Jie Zhao, Xin He, Jianzhe Lin, Yiran Li, An Zou | 2023-05-23T12:03:51Z | http://arxiv.org/abs/2305.14405v4 | NeuralMatrix: Moving Entire Neural Networks to General Matrix Multiplication for Efficient Inference
###### Abstract
In this study, we introduce NeuralMatrix, a novel framework that enables the computation of versatile deep neural networks (DNNs) on a single general matrix multiplication (GEMM) accelerator. The proposed approach overcomes the specificity limitations of ASIC-based accelerators while achieving application-specific acceleration levels compared to general-purpose processors such as CPUs and GPUs. We address the challenges of mapping both linear and nonlinear operations in DNN computation to general matrix multiplications and the impact of using a GEMM accelerator on DNN inference accuracy. Extensive experiments are conducted on various DNN models from three popular categories (i.e., CNN, Transformers, and GNN) as illustrative backbone models. Our results demonstrate that DNNs suffer only up to a 2.02% accuracy loss after being converted to general matrix multiplication, while achieving 113x to 19.44x improvements in throughput per power compared to CPUs and GPUs.
## 1 Introduction
In recent years, deep neural networks (DNNs) have found applications in a wide range of scenarios, leading to the development of various types of neural networks. As neural network architectures continue to expand in size and complexity, they pose substantial computational challenges for various platforms. Application-specific integrated circuits (ASICs) offer a potential solution for supporting DNNs on mobile and edge devices. For example, Bai et al. [1] introduced a CNN accelerator design that incorporates a multiplier array, add tree, normalization, ReLU, and pooling units. Similarly, Tambe et al. [2] proposed an edge transformer accelerator featuring processing units (with floating-point vector and accumulate) and dedicated function units for layer normalization, softmax, and other unique operators in each layer.
ASIC-based accelerators are known for their efficient execution of specific DNN applications. However, their inherent specificity, including the types and numbers of functional units, can restrict their adaptability. For example, transformer-based BERT uses 72.5% of its computation cycles for versatile nonlinear operations [2], necessitating the integration of specific types and amounts of nonlinear functional units in its accelerator. However, when the same accelerator is used for other networks, such as CNN and GNN, which have far fewer nonlinear operations, these functional units can become unnecessary burdens. Consequently, a significant gap exists between the generality and computation efficiency of the accelerator when it runs versatile DNN applications [3].
In this study, we introduce NeuralMatrix, a framework that combines the best of both worlds, as illustrated in Fig. 1. On one hand, it overcomes the specificity limitations of ASIC-based accelerators and enables the computation of versatile DNNs on a single general matrix multiplication (GEMM) accelerator. On the other hand, compared to other general-purpose processors such as CPUs and GPUs, NeuralMatrix achieves application-specific acceleration levels by converting DNNs to general matrix multiplication and executing them with GEMM accelerators. Supporting different DNN architectures on a single GEMM accelerator is not trivial. Several challenges need to be addressed, including how to map both linear and nonlinear operations in DNN computation to general matrix
multiplications so that a GEMM can fully support them, and the impact of using a GEMM accelerator on DNN inference accuracy. To address the first challenge, NeuralMatrix employs a fundamental GEMM mapping layer through which linear operations in the networks are reshaped and reorganized for matrix multiplications. Specifically, NeuralMatrix adopts a novel approach to circumvent the GEMM accelerator's limitation in dealing with nonlinear operations: they are approximated using piecewise functions if needed and further mapped to matrix multiplication through a segmented approach. To address the second challenge, we propose two NeuralMatrix variants that differentiate based on whether the approximation is conducted pre- or post-finetuning, and present extensive experiments to compare the two approaches. To the best of our knowledge, we are the first to transform entire DNNs into general matrix multiplication and execute them using a single GEMM accelerator. The proposed framework, NeuralMatrix, naturally handles the irregularity across versatile DNNs, achieving both generality and computational efficiency.
We conducted extensive experiments to verify the feasibility of NeuralMatrix by testing various DNN models from three popular categories (i.e., CNN, transformer, and GNN) as illustrative backbone models. Our results indicate that DNNs suffer only up to a 2.02% accuracy loss after being converted to general matrix multiplication. Both floating-point computation and integer quantization are included in each network model. Compared to CPUs and GPUs, NeuralMatrix can achieve 113x to 19.44x improvements in throughput per power. In comparison with classic ASIC designs, the accuracy loss is negligible while achieving similar throughput per power (i.e., 86.0% - 310.6%).
## 2 Background and Related Work
### General Matrix Multiplication Accelerator
General Matrix Multiplication (GEMM) accelerators are specialized hardware components or processors developed to expedite matrix multiplication operations [4]. They are widely employed in data centers for high-performance computing and in edge devices to boost the efficiency of various tasks, such as digital signal processing, artificial intelligence, and scientific simulations [5]. GEMM accelerators can be integrated with dedicated devices such as Tensor Processing Units (TPUs) [6], incorporated into System-on-Chip (SoC) configurations [7], or designed as standalone chips [8]. In this study, we specifically focus on GEMM accelerators as standalone hardware processors for mobile device applications, aiming to improve computational efficiency on low-resource hardware.
In a nutshell, a GEMM accelerator comprises numerous parallel processing elements (PEs) and hierarchical memory (i.e., L1, L2, and L3), as shown in Fig. 1. Within each PE, the multiply-and-accumulate (MAC) unit performs computations, while the local (L1) buffer stores the MAC unit's inputs and partial sums. Multiple PEs are arranged in an array to enable parallel execution of many MAC operations. The network-on-chip (NoC) facilitates data transmission between the local (L1) buffers inside each PE and the global (L2) buffer of the GEMM accelerator. Additionally, a high-bandwidth off-chip (L3) memory serves as a backup for providing input data and holding output data. Because data access energy and latency increase linearly from L1 to L2 and become orders of magnitude larger in L3 [4], GEMM accelerators are designed to maximize data reuse within the on-chip L1 and L2 buffers.
Compared to CPUs and GPUs, which accommodate a variety of instructions through a range of logic and arithmetic components, GEMM accelerators are explicitly designed for matrix multiplication using only multiply-and-accumulate units and buffers. This focused approach to matrix multiplication results in exceptional efficiency [9]. As a result, employing the entire neural network on a single GEMM accelerator can lead to considerable efficiency gains, which is the central focus of this work.
Figure 1: NeuralMatrix translates neural network computation tasks into general matrix multiplication, enabling them to be fully supported by general matrix multiplication (GEMM) accelerators.
### Related Work
The growing size and computational demands of DNNs pose significant challenges. To tackle these issues, various techniques have been proposed, including network pruning [10], compression [2], quantization [11], and early exit [12], all of which aim to minimize network size. On the hardware side, graphics processing units (GPUs) and tensor processing units (TPUs) [13] are employed as general-purpose processors to speed up neural networks. However, these processors incorporate numerous processing units to manage diverse computations, resulting in wasted chip area, power, and energy when only some processing units are utilized while others remain idle [13].
In pursuit of enhanced efficiency, researchers have developed application-specific integrated circuits (ASICs) to accelerate particular neural networks [14; 15; 16; 17; 18]. Data movement can often become the bottleneck, but ASICs can alleviate this by integrating memory and other processing units on the same chip, reducing the need for data transfer between components [19]. Moreover, ASICs are tailored to execute specific algorithms or sets of algorithms by incorporating only the required functional components for the tasks. Consequently, ASICs offer lower power consumption and improved performance compared to general-purpose processors. However, despite their success in accelerating DNNs, ASICs' specialized features restrict their versatility in boosting diverse applications [3].
The GEMM unit, a processing unit found in Google's TPUs and many ASIC accelerators, is designed to speed up matrix multiplication and accumulation. For instance, Zhou et al. [20] characterized the implicit convolution algorithm on commercial matrix-multiplication accelerators, while Vasudevan et al. [21] accelerated parallel multi-channel convolution using general matrix multiplication. Although the GEMM unit can accelerate matrix operations and general parallel linear operations, it cannot handle nonlinear operations.
To overcome this limitation, we propose NeuralMatrix, a solution that fully utilizes general matrix multiplication (GEMM) to accelerate classic DNNs. By leveraging standalone GEMM units in accelerators, NeuralMatrix enables the acceleration of various neural networks using general-purpose GEMM accelerators. Unlike existing approaches that categorize and accelerate different types of irregularities in neural networks, as surveyed in [3], NeuralMatrix simplifies all computations, including both linear and nonlinear operations, to a single GEMM operation. This approach eliminates the need for additional functional units and data transfers between units, allowing NeuralMatrix to achieve application-specific efficiency using general-purpose GEMM accelerators.
Figure 2: Overview of NeuralMatrix. Different types of DNN computation will go through different decision and process steps. Eventually, an entire neural network can be moved to matrix multiplication and become fully supported by a GEMM accelerator.
## 3 NeuralMatrix - Running Networks with Matrix Multiplication
Although GEMM accelerators exhibit remarkable performance and efficiency in carrying out matrix multiplication and addition operations, they are inadequate on their own for handling the computation of various DNNs. In this section, we describe how NeuralMatrix addresses this constraint. Given a computation need from a DNN, NeuralMatrix runs a series of decision and computation processes. Its high-level logic is depicted by the flow-chart in Fig. 2. First, the computation in neural networks can be classified into linear and nonlinear operations. Linear operations are directly mapped to GEMM accelerators through GEMM mapping (§ 3.1). Among nonlinear operations, NeuralMatrix will then decide if one operation already corresponds to a piecewise linear function (e.g., ReLU), which can be computed using the piecewise linear calculation method. If not, an offline piecewise approximation will be performed before it can be handled by piecewise linear calculations (§ 3.2). Considering the relation between the aforementioned approximation and finetuning, we introduce two variations of NeuralMatrix, specifically the _post-approximation_ and _pre-approximation_ methods, and discuss their potential impact on the final inference accuracy (§ 3.3).
### GEMM Mapping
Linear operations are pervasive in DNNs, for example in fully connected layers, convolution kernels, attention mechanisms, and more. These linear operations involve 2D, 3D, or higher-dimensional tensors. We first introduce our GEMM mapping module, which is the foundation of NeuralMatrix since it is the interface to the GEMM accelerators. By applying reshaping and re-blocking techniques, these linear operations can be represented as matrix addition and multiplication operations with various sizes. For instance, in convolutional neural network (CNN) models, the convolution and fully connected layers are the main linear operations that can be transformed into matrix multiplication by reshaping the input features and filters into two matrices. The dimensions of these matrices are determined by the width, height, and number of channels in the original convolution computation.
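As one concrete instance of this reshaping, a convolution can be lowered to a single matrix multiplication via the standard im2col transformation. The sketch below (NumPy; stride 1 and no padding for brevity) is one common way to realize such a mapping, though the paper's mapping layer may differ in details:

```python
import numpy as np

def conv2d_as_gemm(x, w):
    """x: (C_in, H, W) input; w: (C_out, C_in, K, K) filters; stride 1, no padding."""
    C_in, H, W = x.shape
    C_out, _, K, _ = w.shape
    H_out, W_out = H - K + 1, W - K + 1

    # im2col: every K x K x C_in receptive field becomes one column
    cols = np.empty((C_in * K * K, H_out * W_out))
    idx = 0
    for i in range(H_out):
        for j in range(W_out):
            cols[:, idx] = x[:, i:i + K, j:j + K].ravel()
            idx += 1

    W_mat = w.reshape(C_out, C_in * K * K)   # filters flattened into a matrix
    out = W_mat @ cols                        # the single GEMM call
    return out.reshape(C_out, H_out, W_out)

x = np.random.randn(3, 8, 8)
w = np.random.randn(4, 3, 3, 3)
print(conv2d_as_gemm(x, w).shape)  # (4, 6, 6)
```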
Given that each GEMM accelerator has its own computational and memory capabilities, matrices of different sizes--reshaped from linear operations in DNNs--are processed block-wise on the GEMM accelerator. In other words, the input and weight matrices are partitioned into smaller blocks to compute the output matrix, taking advantage of the GEMM accelerator's three-level memory hierarchy to minimize energy consumption and buffer access times [4]. The optimal block division is achieved by exploring data flows using a top-down approach: first addressing the stationary scheme, followed by spatial/temporal accesses, and finally determining the tile size to find the optimized data flow of linear operations. The term "stationary" refers to storing matrix data in global and local buffers for extended periods to maximize its reuse. Data reuse can be classified into temporal and spatial reuse. Temporal reuse involves reading data from off-chip DRAM in chronological order, sending it to multiple local buffers, and performing multiplication or addition operations on the partial sums in processing elements (PEs). Conversely, spatial reuse entails moving and processing data in parallel. Lastly, the tile size defines the data size for each movement and computation.
The above division uses a method similar to grid search to find the optimal blocking. For example, given a matrix multiplication with dimension \((M\times K)\times(K\times N)\), we vary the block size in the three dimensions (stationary, spatial/temporal accesses, and tile sizes) from 2 to 128 with stride 2, and use an early-stage model to calculate the latency and energy consumption of the GEMM accelerator. We then choose the optimal block size in the three dimensions with the minimum latency under energy consumption constraints.
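A skeleton of such a search is sketched below; the cost model here is a stand-in for the early-stage latency/energy model used in the paper, whose details are not specified:

```python
import itertools

def toy_cost_model(M, K, N, Tm, Tk, Tn):
    # Placeholder: a real early-stage model would account for buffer sizes,
    # NoC bandwidth, and DRAM traffic of the target GEMM accelerator
    tiles = -(-M // Tm) * -(-K // Tk) * -(-N // Tn)   # ceiling divisions
    latency = tiles * (Tm * Tk * Tn) / 256 + 10 * tiles
    energy = tiles * (Tm * Tk + Tk * Tn + Tm * Tn)    # data moved per tile
    return latency, energy

def best_blocking(M, K, N, cost_model, energy_budget):
    """Grid-search tile sizes from 2 to 128 with stride 2, as described above."""
    best, best_lat = None, float('inf')
    for Tm, Tk, Tn in itertools.product(range(2, 129, 2), repeat=3):
        lat, energy = cost_model(M, K, N, Tm, Tk, Tn)
        if energy <= energy_budget and lat < best_lat:
            best, best_lat = (Tm, Tk, Tn), lat
    return best, best_lat

print(best_blocking(512, 512, 512, toy_cost_model, energy_budget=5e7))
```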
### Moving Nonlinear Operations to Matrix Multiplication
Addressing the nonlinear operations inherent in DNNs poses a significant challenge, as they cannot be easily mapped to the standard matrix multiplications performed by GEMM accelerators, as described in the previous subsection. To overcome this issue, ASICs are typically custom-designed: such designs create a predetermined number of functional units of various types, meticulously tailored to fulfill the nonlinear computational requirements of a specific deep neural network (DNN) model. While this approach yields highly efficient accelerators adept at handling particular networks, it falls short in terms of flexibility. In contrast, our proposed method emphasizes adaptability, allowing a wide range of DNNs to be efficiently executed on a single GEMM accelerator.
Offline Piecewise Linearization. In NeuralMatrix, continuous nonlinear operations are approximated using piecewise functions. This method involves dividing a nonlinear function of interest into smaller regions within a chosen interval and approximating each region with a simpler function, such
as a line or polynomial. Continuous piecewise linear (CPWL) approximations specifically utilize lines for approximation, ensuring that the endpoint of one region is the same as the starting point of the next region.
There are two primary advantages of employing CPWL approximation in NeuralMatrix. Firstly, classic GEMM accelerators can support CPWL without any hardware modifications, unlike some other approximation methods, such as look-up table (LUT) based nonlinear approximation, which require extra hardware resources. Secondly, alternative approximation methods like Taylor expansion or Chebyshev approximation necessitate considerable additional computations, which do not align with our ultimate goal of achieving ASIC-level computational efficiency.
Piecewise Linear Calculation. Technically speaking, the piecewise linear operations produced by the above approximation are still nonlinear and cannot be directly mapped to GEMM. Therefore, we develop a three-step approach that handles piecewise linear calculations. Here we use piecewise-linearized GELU as an illustrative example (in Fig. 2), but the same process can also handle operations that are originally piecewise linear, such as ReLU and Leaky ReLU. The input data is \(X\) and the goal is to calculate \(Y=\text{GELU}_{approx.}(X)\), \(X,Y\in\mathbb{R}^{M\times N}\). The pre-calculated parameters \(k\) and \(b\) of each segment are pre-stored in off-chip DRAM and indexed by segment number.
Piecewise linear calculation follows three steps: (1) we use a linear operator to calculate the segment matrix \(S\) for the input matrix \(X\), where each element \(S_{i,j}\) indicates which segment the corresponding input value \(X_{i,j}\) falls into; the calculation of \(S\) is handled by the GEMM accelerator, and the output is passed to the buffer/DRAM\({}^{1}\). (2) The parameters \(k\) and \(b\) are gathered and sent to the GEMM accelerator in the form of a slope matrix \(K\) and an intercept matrix \(B\). (3) Finally, the GEMM accelerator performs the element-wise calculation \(Y=X\cdot K+B\) to obtain the output matrix \(Y\). Other continuous nonlinear functions, such as softmax and layer normalization, can be computed by approximating the inverse-proportional, root, and exponential functions they contain.
Footnote 1: When this piecewise linear approximation is calculated, the corresponding parameters are already prefetched from DRAM to the global buffers.
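The three steps can be expressed in a few lines of NumPy. The sketch below is illustrative only: the \([-8, 8]\) approximation interval and the tanh-form GELU reference are our assumptions for demonstration, not values specified above.

```python
import numpy as np

def gelu(x):
    # tanh-form GELU, used only to tabulate reference values offline.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def build_segments(fn, lo=-8.0, hi=8.0, granularity=0.25):
    # Offline step: pre-compute slope k and intercept b for every segment.
    xs = np.arange(lo, hi + granularity, granularity)
    ys = fn(xs)
    k = (ys[1:] - ys[:-1]) / granularity
    b = ys[:-1] - k * xs[:-1]
    return k, b

def pwl_apply(X, k, b, lo=-8.0, granularity=0.25):
    # Step 1: segment matrix S, i.e., which segment each X[i, j] falls into.
    S = np.clip(((X - lo) / granularity).astype(int), 0, len(k) - 1)
    # Step 2: gather the slope matrix K and the intercept matrix B.
    K, B = k[S], b[S]
    # Step 3: element-wise Y = X * K + B, executable on the GEMM datapath.
    return X * K + B

k, b = build_segments(gelu)
Y = pwl_apply(np.random.randn(4, 8), k, b)
```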
Parameter Overhead. NeuralMatrix introduces additional overhead to the original model, resulting from the extra parameters used to represent the piecewise linear functions (e.g., the \(k\), \(b\) tables). Here we provide a quantitative analysis of this overhead. Specifically, we focus on the scenario where the granularity is set to 0.25, since smaller granularity values lead to larger overhead in NeuralMatrix. Because we employ a fixed granularity for the entire network, utilizing larger granularity values proportionally decreases the parameter overhead; for instance, if the granularity is doubled to 0.5, the overhead is halved. Even with the smallest granularity in our experiments, the parameter overhead remains minimal (less than 0.7%). Therefore, in the following sections, we utilize 0.25 as the default segment granularity.
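As a back-of-the-envelope check, the per-function table size follows directly from the granularity; the \([-8, 8]\) approximation interval below is our assumption, carried over from the sketch above.

```python
granularity = 0.25
interval = 16.0                          # assumed [-8, 8] approximation range
segments = int(interval / granularity)   # 64 linear segments
params = 2 * segments                    # one (k, b) pair per segment
print(params * 2, "bytes per nonlinear function in FP16")   # 256 bytes
```

This works out to a few hundred bytes per approximated function; the kilobyte-scale totals in Table 1 then follow from accumulating such tables across the network's layers and operations.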
### Approximation with Fine-Tuning
NeuralMatrix primarily focuses on improving the inference-time efficiency of fine-tuned DNNs on resource-limited platforms. One approach, which we refer to as the _post-finetuning_ method, involves standard fine-tuning of the original DNN; the fine-tuned DNN is then mapped to GEMM with the necessary piecewise linearization, as detailed in the previous subsection.
In addition, NeuralMatrix can be seamlessly integrated with training, offering an alternative _pre-finetuning_ approximation approach. This technique maps a pre-trained DNN to its approximated form before finetuning it on specific downstream tasks. The loss function used during finetuning remains unchanged from conventional finetuning, and standard automatic differentiation techniques can be employed for back-propagation. Both the post- and pre-finetuning methods yield final approximated DNNs with identical architectures and, consequently, the same inference-time cost on GEMM.
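As a sketch of how the pre-finetuning variant can be realized in practice, one can swap each continuous nonlinear module of a pre-trained network for a piecewise-linear counterpart before running the usual finetuning loop; gradients flow through the element-wise \(X\cdot K+B\) as normal. The `PWLGELU` module below is hypothetical and reuses the assumed \([-8, 8]\) interval from above.

```python
import torch

class PWLGELU(torch.nn.Module):
    # Hypothetical drop-in replacement for GELU built from linear segments.
    def __init__(self, lo=-8.0, hi=8.0, granularity=0.25):
        super().__init__()
        xs = torch.arange(lo, hi + granularity, granularity)
        ys = torch.nn.functional.gelu(xs)
        k = (ys[1:] - ys[:-1]) / granularity
        self.lo, self.g = lo, granularity
        self.register_buffer("k", k)
        self.register_buffer("b", ys[:-1] - k * xs[:-1])

    def forward(self, x):
        # Segment lookup is index arithmetic; gradients flow through x*k + b.
        s = ((x - self.lo) / self.g).long().clamp(0, self.k.numel() - 1)
        return x * self.k[s] + self.b[s]
```

Replacing every GELU module of a pre-trained model with such a `PWLGELU` and then running the unchanged finetuning loop realizes the pre-finetuning method.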
\begin{table}
\begin{tabular}{l r r r} \hline \hline DNN Model & ResNet-50 & BERT-base & GCN \\ \hline Extra Parameter Size (FP16) & 114KB & 49.2KB & 0.24KB \\ Extra Parameter Size (INT8) & 57KB & 24.6KB & 0.12KB \\ Normalized Parameter Size & 0.46\% & 0.01\% & 0.10-0.74\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameter overhead in NeuralMatrix (granularity=0.25).
Any approximation method that maps DNNs, including nonlinear functions, to the GEMM accelerator will inevitably lead to a loss in computational accuracy, which in turn results in end-to-end inference accuracy loss. In the piecewise approximation of nonlinear operations, the accuracy of the approximation is dependent on the granularity of linear segments. Finer granularity contributes to higher accuracy but increases the number of parameters stored in the DRAM. In Section 4.2, we will demonstrate how to find a suitable tradeoff and select the appropriate granularity of linear segments to achieve both low memory cost and high inference accuracy for various downstream tasks.
## 4 Evaluation
We first verify the inference accuracy of moving different DNNs to GEMM via the proposed NeuralMatrix. Next, we compare against existing computation platforms to showcase the efficiency of NeuralMatrix. Finally, we test the scalability of NeuralMatrix across different network sizes, including depth and width, as well as variant network models.
### Implementation Details
In this study, we utilized the state-of-the-art GEMM accelerator modeling framework (with 90-95% modeling accuracy) presented in [4] to estimate the GEMM accelerator performance and energy cost\({}^{2}\) with a 45 nm chip fabrication technology node [22]. The architectural-level parameters, including the number of processing elements (PEs) and the memory bandwidths of the GEMM accelerator, are determined by maximizing throughput per power on general matrix multiplication, under the constraint that the thermal design power cannot exceed 15 W for mobile applications. The parameters are summarized in Table 2. We choose the output-stationary data movement strategy as it avoids frequent data movements to and from memories and yields the lowest energy consumption for large matrix multiplications [23].
Footnote 2: The energy cost is calculated through hardware module utilization multiplied by synthesized energy consumption sampled on different inputs.
To assess the computation latency and power consumption of CPUs and GPUs, we conducted tests on the Intel i7-11700 CPU, NVIDIA 3090Ti GPU, and Jetson Orin SoC, utilizing an IT9121 power analyzer. For the existing FPGA and ASIC solutions, we gathered relevant data from published papers. Our analysis focuses on network inference latency (including throughput and relative speedup) as well as power consumption (energy usage). It is worth noting that we do not use floating-point operations per second (FLOPS) as an evaluation metric, since FLOPS only indicates the peak computational capability of the processor, and there is often a significant disparity between the computation supported by the processor and the computation actually utilized by DNNs.
NeuralMatrix moves versatile DNN models to GEMM, so the entire network is computed with general matrix multiplications and additions. We experiment with three main DNN categories: BERT, ResNet, and GCN. We evaluate the performance of our models using standard benchmarks for each network type, considering both floating-point (FP16) and integer (INT8) quantization of each network model. For the CNN-based ResNet-50, we use the CIFAR-10 [24], CIFAR-100 [24], Fashion-MNIST [25], and QMNIST [26] datasets. For the transformer-based BERT, we use the General Language Understanding Evaluation (GLUE) benchmark [27], which consists of nine language understanding tasks ranging in dataset size, text genre, and difficulty level. For the GNN-based GCN, we use the Cora [28], Citeseer [29], Pubmed [30], and Reddit [31] datasets.
### Inference Accuracy
Before demonstrating the computational efficiency advantage of NeuralMatrix, we first empirically verify its inference accuracy on three popular DNN architecture categories and seventeen tasks of different natures. We show that, with an appropriate setup, applying NeuralMatrix incurs negligible accuracy loss compared with the original models' final performance. Fig. 3 displays the final inference accuracy of various DNN architectures on multiple benchmark datasets. In this experiment, we select some of the most well-known pre-trained DNNs in each category,
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Parameters & **Total PE Numbers** & **Cluster Numbers** & **Stationary Scheme** & **Frequency** \\ Value & 1024 & 8 & Output stationary & 1 GHz \\ \hline Parameters & **L1 buffer Size** & **L2 buffer Size** & **NoC Bandwidth** & **DRAM Bandwidth** \\ Value & 4 KB & 1 MB & 128 Gbps & 32 Gbps \\ \hline \hline \end{tabular}
\end{table}
Table 2: The parameters of our GEMM accelerator.
such as ResNet-50 [32], BERT-base [33], and GCN [34], to represent CNNs, transformers, and GNNs, respectively. Although we experimented with 17 benchmark datasets, due to space limitations we only showcase four benchmarks for each DNN category and include the rest in the supplementary materials. The dashed lines in each figure represent the inference accuracy of the original floating-point and quantized DNN models, which can be considered the performance upper bound given the pre-trained network. The vertical bars in the sub-figures illustrate different NeuralMatrix variants. Specifically, the leftmost bars represent the post-finetuning performance, while the middle and right bars indicate the pre-finetuning performance using floating-point and INT8 quantization, respectively.
From the figures, we observe that the best-performing NeuralMatrix attains performance comparable to the original DNNs in every benchmark across the different DNN categories. We find that pre-finetuning consistently achieves better performance than the post-finetuning approach, possibly because the network during finetuning is identical to the one used at inference. Thus, we recommend the pre-finetuning method when transitioning DNNs to GEMMs. As the granularity increases from 0.25 to 1.0, inference accuracy declines across all NeuralMatrix variants, which is expected since larger granularity incurs higher approximation error. Generally, accuracy only begins to drop gradually once the granularity exceeds 0.5 with floating-point and 0.25 with INT8 quantization. This demonstrates that a reasonably small granularity can achieve the desired performance.
Regarding specific DNN models, we observe that for both BERT and ResNet, the performance gap between different NeuralMatrix variants increases as the baseline performance decreases. This suggests that one can choose a larger granularity for easier tasks but a smaller one for more difficult tasks. In contrast, the GCN models do not exhibit significant differences among the baseline and various NeuralMatrix variants, possibly because GCNs are typically shallower. We provide further analysis regarding DNN depth and performance in Section 4.4 below.
### Benefits of Computation and Energy Efficiency
This section presents the potential computation and energy cost savings that can be reaped by moving DNNs to matrix multiplication and executing them on a GEMM accelerator. We compare the inference latency and cost of running neural networks on various types of processing units, including general-purpose CPUs, GPUs, SoCs, application-specific FPGA and ASIC designs, and NeuralMatrix on a GEMM accelerator. For the GNN, we only list runs on general-purpose processors, as we were unable to find standard ASIC designs for GNNs, which have numerous variants.

Figure 3: End-to-end inference accuracy of different tasks using different DNN architectures.
Table 3 compares the inference latency and energy cost of moving ResNet-50 and BERT to general matrix multiplication across various types of processors. Moving ResNet-50 to general matrix multiplication achieves 9.64\(\times\) and 1.42\(\times\) latency improvements with 11.7\(\times\) and 13.71\(\times\) reduced power consumption compared with the general-purpose CPU and GPU, respectively. Moving BERT to general matrix multiplication achieves 8.53\(\times\) and 1.48\(\times\) latency improvements with 9.48\(\times\) and 11.1\(\times\) reduced power consumption compared with the general-purpose CPU and GPU, respectively. If throughput per power is taken as the metric of computation efficiency, NeuralMatrix achieves up to 113\(\times\) and 19.44\(\times\) improvements over the CPU and GPU. Furthermore, compared with the FPGA-based ResNet-specific and BERT-specific designs, moving ResNet-50 and BERT to matrix multiplication improves computation efficiency by 2.32-3.50\(\times\). We note that part of this benefit over FPGA-based BERT-specific designs comes from the inherent superiority of ASIC-based operation over LUT-based operation in clock frequency and power consumption. Finally, compared with the ResNet-specific and BERT-specific ASIC designs, the same levels (i.e., 86.0% - 310.6%) of computation efficiency are achieved, with the added generality of running versatile networks on the same GEMM accelerator.
### Scalability Across Network Sizes
Further, we experimentally test and compare the inference accuracy of NeuralMatrix across different network sizes, including depth and width, as well as variant network models. The inference accuracy of NeuralMatrix is summarized in Table 4 for different sizes of ResNet, BERT, and GCN. From the accuracy results, it is evident that NeuralMatrix is relatively robust to various network sizes and to variant models of the classic networks.
Figure 4 illustrates the computation efficiency (throughput per power) of NeuralMatrix for different network sizes and variants on the GEMM accelerator. The plots also indicate the related ASIC designs for these networks (ResNet and BERT) [35, 36, 37, 38, 39, 40, 41, 42].
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Models} & \multirow{2}{*}{Size} & \multicolumn{2}{c}{FP16} & \multicolumn{2}{c}{INT8} \\ \cline{4-7} & & & Original & NeuralMatrix & Original & NeuralMatrix \\ \hline \multirow{5}{*}{CNN/CIFAR-10} & ResNet-18 & 44.8 MB & 95.02 & 92.87 (-2.13) & 94.66 & 92.12 (-2.54) \\ & ResNet-34 & 85.3 MB & 96.12 & 94.85 (-1.27) & 96.10 & 94.43 (-1.67) \\ & ResNet-50 & 90.1 MB & 96.18 & 95.67 (-1.7) & 96.13 & 94.43 (-2.05) \\ & ResNet-101 & 163 MB & 97.13 & 95.67 (-1.51) & 96.44 & 94.82 (-1.62) \\ & ResNet-152 & 223 MB & 97.29 & 95.12 (-2.17) & 96.87 & 94.38 (-2.49) \\ \hline \multirow{4}{*}{Transformers/SST-2} & ALBERT & 47.4 MB & 92.50 & 88.42 (-4.13) & 89.76 & 85.28 (-4.48) \\ & DistilBERT & 263 MB & 90.41 & 88.65 (-1.35) & 88.19 & 86.24 (-1.95) \\ & BERT-Base & 436 MB & 92.32 & 92.32 (-0.00) & 92.24 & 92.07 (-0.25) \\ & BERT-Large & 1340 MB & 93.46 & 93.02 (-0.34) & 93.12 & 92.20 (-0.92) \\ \hline \multirow{5}{*}{GNN/CORA} & GCN (L=1) & 94.0KB & 72.90 & 72.34 (-0.56) & 72.37 & 71.84 (-0.53) \\ & GCN (L=2) & 95.6KB & 84.28 & 84.31 (+0.03) & 83.95 & 84.02 (+0.07) \\ & GCN (L=3) & 99.4KB & 83.44 & 84.38 (+0.04) & 84.21 & 84.11 (-0.10) \\ & GCN (L=6) & 113.9KB & 81.18 & 81.11 (+0.07) & 80.86 & 80.92 (+0.06) \\ & GCN (L=9) & 128.3KB & 80.53 & 80.58 (+0.05) & 79.82 & 79.91 (+0.09) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Inference accuracy (%) of NeuralMatrix compared with original DNN models of different categories and sizes. We use the CIFAR-10, SST-2, and CORA datasets for CNN, Transformer, and GNN, respectively.
Compared to ASIC designs for networks of different sizes, NeuralMatrix achieves the same level of computation efficiency. For networks of smaller size, the computation efficiency of NeuralMatrix increases as the networks become wider and deeper, with network width having a greater impact on computation efficiency than depth. The computation efficiency then reaches a peak and stabilizes for large networks. According to this design space exploration, NeuralMatrix is most effective for large neural networks, especially wider ones, because a wider neural network maps to a larger matrix and allows more computation to be performed simultaneously by the parallel processing elements in the GEMM accelerator. We conclude that by transforming the computation of an entire neural network into general matrix multiplication, NeuralMatrix enables versatile neural networks to be executed efficiently by one GEMM accelerator, and it demonstrates superior efficiency for large neural networks, which can fully utilize the PEs after being transformed to matrix multiplication by NeuralMatrix.
## 5 Discussion and Limitations
In this study, we concentrated on three widely used DNN backbone models and employed fixed approximation granularities for the entire DNN as a demonstrative example to highlight the benefits of transitioning entire neural networks to general matrix multiplication. Nevertheless, we believe that NeuralMatrix can be applied to broader categories of network architectures as well. Moreover, by examining different nonlinear functions and their positions, it may be feasible to further minimize the accuracy loss of NeuralMatrix by employing varying approximation granularities [43]. Due to time and space constraints, we only implemented and tested the piecewise linear approximation method; alternative approximation methods might enhance network inference accuracy at the expense of increased computation cost. Furthermore, our evaluation of scalability across network sizes revealed that larger (i.e., wider) networks demonstrate greater computation efficiency gains. Moving forward, we envision NeuralMatrix being effectively utilized not only for inference but also for training large models.
## 6 Conclusion
Current hardware accelerators based on application-specific integrated circuits (ASIC) are often limited to efficiently supporting specific deep neural network (DNN) applications. To overcome this limitation and enable versatile DNNs on a single hardware accelerator, we introduce NeuralMatrix. This approach transitions entire DNNs to matrix multiplication, which can then be executed by general matrix multiplication accelerators. NeuralMatrix utilizes three mainstream DNN backbone models - CNN, transformer, and GNN - along with their variant models as illustrative examples. Our pre-finetuning training reveals that the shift to matrix multiplication incurs negligible inference accuracy loss. The evaluation demonstrates that NeuralMatrix can attain ASIC-level computation and energy efficiency on general-purpose matrix multiplication accelerators. Consequently, this enables the efficient support of a broad range of DNNs on a single hardware accelerator.
Figure 4: The computation efficiency (throughput per power) of NeuralMatrix with different network categories under different widths and depths. |
2306.06551 | An Efficient and Accurate Memristive Memory for Array-based Spiking
Neural Networks | Memristors provide a tempting solution for weighted synapse connections in
neuromorphic computing due to their size and non-volatile nature. However,
memristors are unreliable in the commonly used voltage-pulse-based programming
approaches and require precisely shaped pulses to avoid programming failure. In
this paper, we demonstrate a current-limiting-based solution that provides a
more predictable analog memory behavior when reading and writing memristive
synapses. With our proposed design, READ current can be optimized by about 19x
compared to the 1T1R design. Moreover, our proposed design saves about 9x
energy compared to the 1T1R design. Our 3T1R design also shows promising write
operation which is less affected by the process variation in MOSFETs and the
inherent stochastic behavior of memristors. Memristors used for testing are
hafnium oxide based and were fabricated in a 65nm hybrid CMOS-memristor
process. The proposed design also shows linear characteristics between the
voltage applied and the resulting resistance for the writing operation. The
simulation and measured data show similar patterns with respect to voltage
pulse-based programming and current compliance-based programming. We further
observed the impact of this behavior on neuromorphic-specific applications such
as a spiking neural network. | Hritom Das, Rocco D. Febbo, SNB Tushar, Nishith N. Chakraborty, Maximilian Liehr, Nathaniel Cady, Garrett S. Rose | 2023-06-11T00:47:09Z | http://arxiv.org/abs/2306.06551v3 | # An Efficient and Accurate Memristive Memory for Array-based Spiking Neural Networks
###### Abstract
Memristors provide a tempting solution for weighted synapse connections in neuromorphic computing due to their size and non-volatile nature. However, memristors are unreliable in the commonly used voltage-pulse-based programming approaches and require precisely shaped pulses to avoid programming failure. In this paper, we demonstrate a current-limiting-based solution that provides a more predictable analog memory behavior when reading and writing memristive synapses. With our proposed design, READ current can be optimized by \(\sim\)19x compared to the 1T1R design. Moreover, our proposed design saves \(\sim\)9x energy compared to the 1T1R design. Our 3T1R design also shows a promising write operation that is less affected by process variation in MOSFETs and the inherent stochastic behavior of memristors. The memristors used for testing are hafnium-oxide-based and were fabricated in a \(65\,\mathrm{nm}\) hybrid CMOS-memristor process. The proposed design also shows a linear relationship between the applied voltage and the resulting resistance for the write operation. The simulated and measured data show similar patterns with respect to voltage-pulse-based programming and current-compliance-based programming. We further observe the impact of this behavior on neuromorphic-specific applications such as spiking neural networks.
Memristor, LRS, HRS, neuromorphic computing, DPE, voltage-controlled, current compliance, memory reliability, accuracy, low-power memory.
## I Introduction
Over the last decade, artificial intelligence has added a useful suite of new computational tools. However, utilizing these tools effectively with limited access to the internet and power has proven to be a challenge. Meeting such requirements using state-of-the-art computing systems is hindered in part by the separation of the memory and computation units [1], which causes large energy consumption during the constant, rapid transfer of data between memory and processor. This problem can be mitigated by designing a circuit that uses in-memory computing [2, 3, 4, 5]. A classical example of in-memory computing is the dot product engine (DPE) [6, 7, 8], which can be the backbone of many machine learning algorithms, including neuromorphic computing [9, 10] and reservoir computing [11, 12]. The most basic implementation of a DPE performs a dot product between a vector and a matrix. The energy consumed during this operation can be reduced further if a memory device such as a memristor is used to store the matrix values as resistances. Due to its lower _READ_ power and multi-level storage capacity, the memristor shows higher energy efficiency compared to conventional digital memory such as SRAM or DRAM [13, 14, 1, 15].
A memristor is a two-terminal device that can act as an analog memory [16]. By applying a specific voltage to the top electrode for some time, the memristor's resistance can be lowered through the formation of a filament through the material. This is known as the low resistance state (LRS). By applying a negative voltage to the top electrode, this filament can be broken, which RESETs the memristor into a high resistance state (HRS). For example, the memristive devices used in this paper (resistive random-access memory, or RRAM, in a one-transistor one-RRAM, or 1T1R, configuration) utilize hafnium oxide as the switching material and can be _SET_ into an LRS by applying a positive voltage of \(0.7\,\mathrm{V}\). The _SET_ operation is analogous to writing data to memory; during this process, a specific final resistance value is targeted, which will later be sensed during _READ_ operations. During the _SET_ process, the memristor's resistance rapidly decreases as long as the voltage is above the threshold, and the decrease stops once the voltage falls below the threshold. On the other hand, if a constant current is maintained, this stopping point occurs at a value set by the magnitude of the applied current, due to Ohm's law. Therefore, by limiting the maximum current, a minimum memristor resistance value can be targeted.
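A first-order sketch of this current-compliance targeting, with illustrative values (the 0.7 V figure is the SET threshold mentioned above; the 70 uA compliance level is our example):

```python
def lrs_floor(v_across, i_compliance):
    # Ohm's law: filament growth stalls once R drops to V / I_compliance.
    return v_across / i_compliance

# 0.7 V across the device with a 70 uA compliance limit targets ~10 kOhm,
# inside the 5-20 kOhm LRS window used in this work.
print(lrs_floor(0.7, 70e-6))
```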
When designing memristive based memory systems, specific challenges need to be considered. Managing variation across devices and during read/write cycles is one of these challenges for analog memories. The behavior is required to be predictable across all structures in order to have a reliable memory. This variation can cause large discrepancies between operation during simulation and post-fabrication. In
memristive memory, this is especially apparent due to the inherent stochastic behavior of the memristors themselves, which stems largely from the unpredictable filament-formation process that takes place during a SET operation. To minimize these effects, an LRS range of 5 k\(\Omega\) to 20 k\(\Omega\) is utilized due to its more predictable behavior compared to HRS. Another consideration is the voltage pulse shape during programming [17, 18, 19]: if the pulse shaping is not precise, the written resistance values can deviate from their targets. To overcome these issues, the 3T1R memristive circuit can be utilized. This circuit is programmed by enforcing a current compliance through the memristor, as opposed to applying a voltage pulse across it. Also, by targeting the current through the memristor, a low-power array-based spiking neural network can be designed. In a DPE design, the READ power is more critical than the write power, because the memory element is only written during the initial training operation; after training, the system is READ repeatedly during testing and after deployment. Therefore, the overall power consumption of a design such as a DPE can be optimized if the READ current is reduced.
Analog implementations of a DPE using memristors as the analog storage elements show promising features for several application domains [7, 8, 20]. However, due to the inherent variability in memristor operation discussed earlier, computation will be inaccurate compared to traditional digital implementations [1]. This variability is due to the stochastic manner in which the memristor forms and breaks its filament. Careful design of the DPE can help mitigate some of this variability. DPE circuits are commonly implemented in a 2D crossbar array configuration where the top of each memristor is connected across the rows, and the bottom of each memristor is connected to an n-channel MOSFET which is in turn connected across the columns, as seen in Figure 1. This is known as an in-line configuration.
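Functionally, a READ of such a crossbar computes a vector-matrix product: each column current is the sum of the per-cell currents \(V_{in,i}G_{ij}\) by Kirchhoff's current law. A minimal NumPy sketch of this idealized behavior (device and wire non-idealities are ignored; the input and resistance values are illustrative):

```python
import numpy as np

def dpe_read(v_in, R):
    # v_in: row input voltages, shape (rows,); R: memristor resistances in ohms.
    G = 1.0 / R          # conductance matrix, shape (rows, cols)
    return v_in @ G      # column currents, summed per Kirchhoff's current law

v_in = np.array([0.6, 0.0, 0.6, 0.6])            # spike-coded row inputs
R = np.random.uniform(5e3, 20e3, size=(4, 4))    # LRS range 5-20 kOhm
print(dpe_read(v_in, R))                         # column output currents (A)
```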
A memristive DPE can be used in neuromorphic applications such as a spiking neural network (SNN). SNNs perform computations using spikes. These spikes can be applied to a DPE in order to perform a multiplication [11, 12, 20, 21]. The result of this multiplication can be tailored to a specific application such as a classification problem or a control problem [22]. In order to get the best performance in the application, the result of the multiplication needs to be predictable within a certain range.
In this work, toward resolving the above challenges, we make the following contributions:
1. Proposed 3T1R memristive array utilizing current compliance for better reliability. 3T1R configuration provides better controllability during the _SET_ and _READ_ operations.
2. Lower _READ_ current for each resistance level is demonstrated. Due to the optimized _READ_ operation, 3T1R shows lower power consumption compared to the 1T1R configuration.
3. Experimental measurements and simulated results show a similar trend for variation in LRS.
4. The effect of pulse shaping during programming is observed in simulation and measured data.
5. Analog DPE is modeled and simulated to evaluate the reliability and performance of the proposed design.
6. The effects of our proposed accurate and efficient approach on neuromorphic applications are analyzed.
The remainder of the paper is organized as follows. Section II focuses on the in-line style memristive DPE design; here, the READ and write operations of the design are discussed. Section III presents the proposed design with the 3T1R structure. Section IV analyzes the non-linear regions of the simulation results for SET current and READ current and their impact on reliability. In addition, the impact of process variation on both designs is examined using Monte Carlo analysis, and simulated results from Cadence using the memristor model from [23] and our mathematical model are evaluated. In Section V, physical data from a fabricated chip is analyzed to observe the reliability. Applications are evaluated in Section VI, where four different classification tasks are considered for an SNN application. A brief overview of the peripheral circuitry is provided in Section VII. Next, a comparison with prior work is discussed in Section VIII. Finally, the paper concludes with future work in Section IX.
## II In-line Style Memristive DPE
In our proposed design, a TiN (TE)-HfO2-TiN (BE) material stack is utilized to construct the memristor device. After fabrication in the 65 nm CMOS 10LPe process [24], 26 devices were measured with our probe station to account for process variation and cycle-to-cycle variation. A Verilog-A model is derived from our measured data, including the I-V curve, the threshold voltage for switching, the switching time, and the process variation in the different memristive regions. In addition, curve-fitting parameters are used to better match the model's performance with measured results. A brief illustration of our model is presented in prior work [23]. This model is utilized to characterize the memristor devices during system-level simulation in Cadence Spectre.
DPEs are very popular for machine learning hardware. Different architectures are available for DPEs, such as 1T1R (1T: one transistor, 1R: one memristor) [7, 8, 25], 3T1R [26], and so on. Due to its simple structure and operation, most existing designs use the 1T1R configuration [7, 8]. In this 2D crossbar array configuration, the top of each memristor is connected across the rows, and the bottom of each memristor is connected to an n-channel MOSFET which is connected across the columns, as seen in Figure 1 (a). This describes an in-line configuration.
### _SET/Write Operation_
Fig. 1 (a) illustrates a 4x4 DPE configuration where each cell represents a 1T1R synaptic memory element. There are two input signals, \(V_{in}\) and \(V_{g}\), for each row and column respectively. Each \(V_{in}\) provides a row-wise input pulse across the DPE unit. Moreover, \(V_{g}\) provides a column-wise READ or write access across the DPE unit. Fig. 1 (b) shows the internal circuit connections of a single 1T1R cell in this DPE. The top node of the memristor is connected to \(V_{in}\) with the bottom connected to the drain of the \(NMOS\) device. Additionally,
\(V_{g}\) is used as a selector for both READ and write operations in a column. During a write, a voltage pulse is driven across the selected memristor using \(V_{in}\) (top of the memristor). This voltage pulse must be carefully shaped in order to drive the appropriate analog voltage for a determined amount of time to properly write the resistance of the device [9]. Shaping this voltage pulse to produce a reliable write is a significant challenge for the 1T1R configuration. The amplitude of \(V_{in}\) for a write operation is in the range of \(0.9\,\mathrm{V}\) to \(1.1\,\mathrm{V}\), depending on the value to be written. The \(V_{g}\) signals act as digitally controlled signals to access the various cells. Analog values are considered in the LRS range of 5 k\(\Omega\) to 20 k\(\Omega\) due to its more predictable resistance compared to HRS [27].
### _READ Operation_
During a READ operation, currents flowing through multiple 1T1R cells accumulate in each column of the DPE array to form the output denoted \(I_{out}\). The applied \(V_{in}\) can be an analog voltage corresponding to an element of an input vector. Similarly, the resistance of the memristor is expected to have been written with a value corresponding to a matrix element. Thus, the current through a particular 1T1R cell corresponds to the product of a vector element (\(V_{in}\)) and the inverse of the memristor's resistance. The source of each 1T1R \(NMOS\) is shorted with the columnar \(I_{out}\) such that its current flows into the column and is summed with other such currents, courtesy of Kirchhoff's current law. In this case, the \(I_{out}\) values of the columns correspond to the output vector elements. \(V_{g}\) is the gate signal of the \(NMOS\), which also acts as an access transistor for the READ operation. Due to the high-voltage forming operation, thick-oxide transistors are often required for the \(NMOS\) and \(PMOS\); a thick-oxide transistor is also useful for reducing flicker noise in the system [28]. For the READ operation, \(V_{in}\) and \(V_{g}\) will be \(0.6\,\mathrm{V}\) and \(1.2\,\mathrm{V}\), respectively, to achieve a low-power READ. All the \(V_{g}\)s can be treated as common access signals for DPE operation; as a result, all column currents can be READ by activating a single \(V_{g}\). Alternatively, for more controllability, the \(V_{g}\)s can be operated separately to READ as many column currents as needed.
## III Proposed 3T1R Style DPE
Fig. 2 shows the proposed 3T1R design for memristive DPEs. Our proposed DPE utilizes a current-controlled memristive synapse as the base memory cell. Fig. 2 (a) illustrates a 4x4 DPE arrangement with our current-controlled synapse. According to Fig. 2 (b), \(V_{readB}\) and \(V_{in}\) are the input signals for each 3T1R cell. Similar to the in-line configuration, \(I_{out}\) are the column currents representing the output vector elements in the DPE circuitry. Fig. 2 (b) shows the internal configuration of the 3T1R cell. In this circuit, two \(NMOS\) transistors, one \(PMOS\) transistor, and a memristor (R) are used. In the first stage, the drain of \(MP3\) is connected to the top of the memristor and the drain of \(MN1\) is connected to the bottom of the memristor. The presence of the \(PMOS\) helps reduce leakage current through the column. For low-power design, the memristor can also be isolated from \(VDD\) when the circuit is not in operation (hold operation); in the 1T1R combination, by contrast, the top node of the memristor is always shorted with \(V_{in}\), so additional control circuitry is required. \(MN2\) plays an important role in obtaining a reduced _READ_ current for our proposed synapse. The gate of \(MN2\) is controlled precisely with the help of a gate voltage generated by the product of the memristor's resistance and the \(1^{st}\)-stage current (\(I_{in}\)).
Fig. 1: (a) 4x4 in-line configuration with 1T1R structure. (b) \(V_{in}\) (red) is the common connection for all memristors in a row. \(V_{g}\) (black) are the controlling signals (READ or write) for all the memristors in a column. \(I_{out}\) (green) is the cumulative current output from each column. The width and length of \(MN1\) are \(5\,\mathrm{\SIUnitSymbolMicro m}\) and \(0.5\,\mathrm{\SIUnitSymbolMicro m}\) respectively. Here, large thick oxide transistors are required due to the high voltage during forming.
Fig. 2: (a) 4x4 3T1R style DPE configuration is illustrated. (b) \(V_{readB}\) is the common connection for all the gate of \(PMOS\) in a column. \(MP3\) will be ”OFF” during a hold operation otherwise, it will be “ON” during a write or READ operation. \(V_{in}\) is the controlling signal to access the memristor in a row to do READ and write operations. The \(V_{in}\) is from \(0.8\,\mathrm{V}\) to \(1.1\,\mathrm{V}\), during a write operation. On the other hand, the \(V_{in}\) is \(0.6\,\mathrm{V}\), during a READ operation. \(I_{out}\) is the cumulative current output from a specific column. Here, the drain of each \(MN2\) will be shorted together for a DPE configuration. Each cell contains its own ground connection for the source of MN1 and MN2.
### _SET/Write Operation_
In Fig. 2 (b), the input current \(I_{in}\) is controlled using \(V_{in}\) at the gate of \(MN1\). This current depends only on \(V_{in}\), with \(MN1\) acting as a current sink for the circuit. For a SET or write operation, \(V_{readB}\) is used to digitally turn "ON" \(MP3\). The input \(V_{in}\) is driven in the range from \(0.8\,\mathrm{V}\) to \(1.1\,\mathrm{V}\) to SET the resistance level in the range from 5 k\(\Omega\) to 20 k\(\Omega\).
### _READ Operation_
After a SET operation, the analog value of the 3T1R cell is also READ using a current \(I_{in}\) controlled by \(MN1\). For this, \(V_{readB}\) will set \(MP3\) "ON" with \(0\,\mathrm{V}\) and \(V_{in}\) will set \(MN1\) into the linear region of operation with \(0.6\,\mathrm{V}\). To control the current and reduce power consumption, \(MN1\) is operated at a lower voltage. The voltage at the bottom of the memristor will turn on the \(MN2\) with a minimal gate voltage. Finally, the READ current \(I_{2}\) is measured from the drain of the \(MN2\). This cell-level output current \(I_{2}\) is connected to the common column to be summed with currents from other cells, again leveraging KCL. In the next section, an empirical comparison between 1T1R and 3T1R is provided.
## IV Cadence Simulation Results
### _SET and READ Simulation Results_
In our simulation, a \(65\,\mathrm{nm}\) CMOS 10LPe process from IBM is utilized [24]. A Verilog-A model is used to characterize the memristive device [23]; a statistical block in this model captures the process variation and stochastic behavior of the memristor. Monte Carlo analysis is used to observe the effects of process variation and the non-linear behavior of the 1T1R and 3T1R circuits. Fig. 3 (a) and (b) show the simulation results for the SET operation with an initial resistance of 100 k\(\Omega\) at different \(V_{in}\), varied from \(0.8\,\mathrm{V}\) to \(1.2\,\mathrm{V}\). The SET current for the 1T1R cell varies from about \(7\,\mathrm{\SIUnitSymbolMicro A}\) to \(132\,\mathrm{\SIUnitSymbolMicro A}\). The SET current of the 3T1R is slightly higher, varying from about \(35\,\mathrm{\SIUnitSymbolMicro A}\) to \(292\,\mathrm{\SIUnitSymbolMicro A}\); this higher SET current is a trade-off for better accessibility during write operations. The 3T1R has more desirable write functionality than the 1T1R since its SET current is more linear with respect to the voltage. By Ohm's law, the final resistance achieved is determined by the current applied during a SET operation (a higher compliance current yields a lower final resistance), so a linear SET current offers better predictability and reliability for programming. Fig. 3 (c) and (d) show the power consumption of the 1T1R and 3T1R cells, respectively. Due to its higher SET current, the 3T1R draws more power than the 1T1R: the SET operation of the 1T1R spans a power range from about \(5\,\mathrm{\SIUnitSymbolMicro W}\) to \(160\,\mathrm{\SIUnitSymbolMicro W}\), while the 3T1R ranges from about \(115\,\mathrm{\SIUnitSymbolMicro W}\) to \(964\,\mathrm{\SIUnitSymbolMicro W}\). SET current and power matter only when the memory array is being written during training; READ power has a greater impact than write power on the total power consumption of a system such as a DPE. Nevertheless, writing a precise resistance value to the memristor is critical for repeatable results in machine learning.
In the READ operation, it is important to precisely represent the weight value for any machine learning application. Thus, the usability of the DPE design hinges on whether the current values for specific resistance levels are distinguishable. First, we consider the 1T1R combination and READ the output current \(I_{2}\) while the memristor resistance is varied in the range from 5 k\(\Omega\) to 20 k\(\Omega\). According to Fig. 4 (a), the READ current of the 1T1R is non-linear, ranging from about \(110\,\mathrm{\SIUnitSymbolMicro A}\) to \(35\,\mathrm{\SIUnitSymbolMicro A}\). Due to this non-linearity, the current values are not easily distinguishable at specific resistance levels, so weight precision is limited in this design. On the other hand, our proposed design can differentiate resistance levels from 5 k\(\Omega\) to 20 k\(\Omega\) due to its improved linearity. As seen
Fig. 4: Simulation of the current during the READ operation for both devices are shown here. (a) shows the READ current for 1T1R devices from 5 k\(\Omega\) to 20 k\(\Omega\). (b) illustrated the READ current simulation for 3T1R devices for the same resistance range. 3T1R shows a more linear and lower current level according to the resistance level. (c) and (d) show the power consumption of the 1T1R and 3T1R circuits, respectively.
Fig. 3: Cadence simulations are run with \(65\,\mathrm{nm}\) CMOS technology at room temperature. (a) shows the simulated result for 1T1R device during SET operation. SET voltages are varied from \(0.8\,\mathrm{V}\) to \(1.2\,\mathrm{V}\). (b) illustrates the SET current simulation for 3T1R configuration. (c) shows the power simulation for 1T1R device. (d) plots the power range for the 3T1R combination.
in Fig. 4 (b), the 3T1R design is also able to achieve a lower current and thus lower power. This linear output stage makes our design suitable for higher-resolution weights compared to the 1T1R. The READ current for the 3T1R starts at about \(3.005\,\mathrm{\SIUnitSymbolMicro A}\) at 5 k\(\Omega\) and decreases with increasing resistance, roughly 19x lower than the corresponding 1T1R READ current, which translates into approximately 9x energy savings for READ operations. The 3T1R array layout occupies \(\sim 973\,\mathrm{\SIUnitSymbolMicro m}^{2}\); due to the extra \(NMOS\) and \(PMOS\), this is \(\sim\)2.75x larger than the 1T1R array. This area overhead is a trade-off for improved reliability and power consumption.
## V Measured Data
1T1R and 3T1R memristive cells were fabricated as test structures using a \(65\,\mathrm{nm}\) CMOS technology. Hafnium oxide is used for the memristive layer. After fabrication, the raw wafer was tested in our probe station. We have set up different test cases for the devices such as forming, RESET, SET, READ, and endurance. In total, 38 devices were selected for the measurement test. Fig. 8 shows our fabricated chip. There are different test structures constructed with 1T1R and 3T1R combinations. All the signal names are labeled accordingly for 1T1R and 3T1R devices.
Fig. 8: Die photo of our fabricated chip (top). There are five different test structures. We have explored one test structure here. The bottom part is the one 12x2 test structure, where we have 1T1R and 3T1R devices. All the pads are labeled accordingly.
Fig. 10: Measured DC test data of the 3T1R configuration, where MTop is fixed at a DC voltage and the gate of the \(NMOS\) is varied with different SET voltages. From \(0.7\,\mathrm{V}\) to \(0.9\,\mathrm{V}\), the LRS shows high variability and less stability. On the other hand, at higher SET voltages, the std. dev. of the resistance is lower in the LRS. By controlling the compliance current with the gate of \(M_{N1}\), a resistance range from 5 k\(\Omega\) to 20 k\(\Omega\) can be targeted during programming in a DC environment; therefore, no pulse shaping is required.
Fig. 7: Layout of 1T1R and 3T1R memristive memory array on \(65\,\mathrm{nm}\) CMOS process. (a) shows a 4x4 memory array of 1T1R cells. (b) illustrates a 4x4 memory array of 3T1R cells. Here, the memristive layer is HfO\({}_{2}\).
Fig. 9: Measured DC test data of the 1T1R configuration, where the gate voltage of the NMOS is fixed and \(V_{in}\) is varied from \(0.8\,\mathrm{V}\) to \(1.2\,\mathrm{V}\). From \(0.8\,\mathrm{V}\) to \(0.9\,\mathrm{V}\), the LRS shows high variability and less stability. This is due to the SET threshold not being met. On the other hand, at higher SET voltages, the variation in resistance is lower in the LRS. Most importantly, the SET operation range is fixed at \(\sim 4.2\,\mathrm{k\Omega}\), indicating that without a proper pulse shape, the 1T1R is not functional for the SET operation.
### _Resistance Variation at LRS_
Fig. 9 exhibits the impact of DC operation. For this test, ten 1T1R cells were considered at different SET voltages. A Keithley 2636A System SourceMeter is utilized to provide the DC voltage for the SET operation at \(V_{in}\) and \(V_{g}\). Here, \(V_{g}\) is fixed at \(1.2\,\mathrm{V}\) and \(V_{in}\) is varied from \(0.8\,\mathrm{V}\) to \(1.2\,\mathrm{V}\). According to Fig. 9, the std. dev. of the resistance is very high from \(0.8\,\mathrm{V}\) to \(0.9\,\mathrm{V}\) and comparatively lower above \(0.9\,\mathrm{V}\). However, the SET resistance window is confined to a narrow region around \(\sim 4.2\,\mathrm{k\Omega}\), which lies outside our targeted programming region (\(5\,\mathrm{k\Omega}\) to \(20\,\mathrm{k\Omega}\)). In the absence of a properly shaped pulse, the 1T1R device is therefore not functional for the SET operation.
Fig. 10 shows the LRS variation and distribution at a low \(V_{in}\) across 38 experimental samples, with the SET voltage varied from \(0.7\,\mathrm{V}\) to \(1.2\,\mathrm{V}\). The SET variation at LRS increases drastically from \(0.7\,\mathrm{V}\) to \(0.9\,\mathrm{V}\). Fig. 10 also shows the std. dev. at the different \(V_{in}\) voltages during the SET operation; the std. dev. drops significantly at a gate voltage of around \(0.8\,\mathrm{V}\). In addition, at LRS the resistance shows less variability, offering a more reliable programming opportunity. From Fig. 10, it can be said that the LRS has negligible variation for SET operations in the range of \(5\,\mathrm{k\Omega}\) to \(20\,\mathrm{k\Omega}\).
### _Pulse Shape Effect Based on Measured Data_
A Keithley B1500A semiconductor parameter analyzer was utilized to apply specific pulse widths to the devices. Twenty devices were measured at different SET voltages and various pulse widths. Fig. 11 (a) shows the pulse-width effect at \(0.8\,\mathrm{V}\): a shorter pulse width results in a higher resistance after a SET operation, while a longer pulse width results in a lower final resistance. Moreover, the SET resistance does not fall within our targeted LRS region for programming, so \(0.8\,\mathrm{V}\) is not a reliable SET voltage across pulse widths. Fig. 11 (b) exhibits the pulse-shaping effect on the 1T1R devices at \(1.2\,\mathrm{V}\) for \(V_{in}\); here, the pulse width has less effect on the SET resistance. According to Fig. 6, pulse width has minimal effect on the 3T1R design; thus, the pulse-width analysis in Fig. 11 is not repeated for the 3T1R design. Comparing with Fig. 11 (a), some similarities can be observed in the relationship between resistance and pulse width at different \(V_{set}\) voltages. For instance, as the pulse width increases, the resistance decreases. This effect is less apparent at \(0.8\,\mathrm{V}\), since only a partial filament is formed due to the low _SET_ voltage. It can also be seen that a lower \(V_{set}\) results in a higher overall resistance.
### _Aging and Failure Effects on the Device_
Device failure was investigated in a previous paper on 1T1R devices with respect to voltage and endurance [30]. That paper showed that devices fail due to current, voltage, or aging effects (endurance/cycling). If these elements are limited to certain ranges of resistance and a smaller difference between LRS and HRS is maintained, the ReRAM device can survive for a longer time. Thus, the main failure inducer is long-term switching/endurance aging. It has been shown previously that devices can switch for over 100 million cycles, and 40% of devices can manage 1 billion cycles at room temperature [30]. Another test showed that over 10 years of retention is possible at 125\({}^{\circ}\)C [31]. Device failure mitigation can be addressed during testing through different schemes, such as a co-optimization framework [32]. In this work, however, device aging is handled by adjusting the weight mapping from software to hardware based on the device's current performance.
## VI Software Model & Applications
### _Software Model_
A single-layer machine learning classification model can be explained as \(\sigma(WX+b)\) where \(\sigma\) is the activation function.
Fig. 11: Measured data of 20 1T1R configurations programmed with different pulse widths. (a) shows the pulse-width variation for the 1T1R device configuration at \(0.8\,\mathrm{V}\). Here, \(V_{g}\) is fixed at \(1.2\,\mathrm{V}\) and the \(V_{in}\) pulse width is varied from shorter to longer, spanning \(100\,\mathrm{ns}\) to \(1\,\mathrm{ms}\). (b) illustrates the pulse-width effect on the 1T1R configuration at \(1.2\,\mathrm{V}\); here, the pulse-width variation shows a smaller effect compared to that at \(0.8\,\mathrm{V}\).
The linear product \(WX+b\) acquires the non-linearity required by the machine learning model through the activation function. The dot product of the weight matrix \(W\) and the input vector \(X\) can be calculated using the memristive crossbar dot product engine. The size of \(W\) is the number of input features by the number of output classes. Since the bias term is independent of the input, it can be mapped as fixed column currents [33]. The output class is the index of the column with the largest current, \(class=\mathrm{argmax}_{col}\,\sigma(column\_currents)\). Because the argmax function is non-differentiable, the softmax function is used as a differentiable stand-in during training. The loss function is the mean squared error between the actual class and the model-predicted class. In this paper, a software model is first trained to determine the parameters of an abstract model whose weights are floating-point values. Those values are then mapped to memristive conductances obtained from a Cadence Spectre simulation. A linear mapping strategy converts all weights to memristive conductance values and all bias terms to bias currents. For mapping, the weights are first shifted to be positive [33]; then weights in the range \((W_{L},W_{H})\) are mapped to the conductance range \((G_{L},G_{H})\). All transformations are offset by adding additional terms to the original bias current. If the bias currents across all columns are equal, the bias term can be ignored and the linear product is simply the output current across the columns.
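A minimal end-to-end sketch of this procedure follows: a single-layer softmax model trained with MSE loss, then a linear map of the trained weights onto the conductance range. The hyperparameters (learning rate, epochs) and function names are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(X, Y, lr=0.1, epochs=500):
    """Single-layer model sigma(WX + b) with MSE loss; softmax stands
    in for the non-differentiable argmax during training."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    b = np.zeros(Y.shape[1])
    for _ in range(epochs):
        P = softmax(X @ W + b)
        G = 2.0 * (P - Y) / len(X)                           # dL/dP for MSE
        dZ = P * (G - (G * P).sum(axis=1, keepdims=True))    # softmax Jacobian
        W -= lr * (X.T @ dZ)
        b -= lr * dZ.sum(axis=0)
    return W, b

def map_to_conductance(W, G_L, G_H):
    """Shift weights positive, then map (W_L, W_H) -> (G_L, G_H) linearly."""
    W_pos = W - W.min()
    return G_L + W_pos * (G_H - G_L) / W_pos.max()
```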
Four datasets from the UCI repository [34], namely Iris, Wine, Breast Cancer Wisconsin (Diagnostic), and Banknote, are employed for classification in this study. To create input spikes, the data is one-hot encoded with 4 bins per feature. For example, the Iris dataset has 4 features and 4 bins per feature, so the total number of input features is 16. The dataset is split into 70% training data and 30% testing data. One advantage of one-hot encoding is that it produces a fixed number of spikes (one per feature) for every data instance, so a fixed number of memristors is READ for any data instance. This property simplifies both the model and its mapping to hardware.
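The spike encoding described above amounts to equal-width binning followed by one-hot expansion. The following sketch (function name assumed) shows one way to implement it; in practice, bin edges derived from the training split would be reused for the test split.

```python
import numpy as np

def spike_encode(X, n_bins=4):
    """One-hot encode each feature into n_bins equal-width bins so that
    every sample emits exactly one spike per feature."""
    n, d = X.shape
    spikes = np.zeros((n, d * n_bins))
    for j in range(d):
        # Interior bin edges; np.digitize yields an index in [0, n_bins-1].
        edges = np.linspace(X[:, j].min(), X[:, j].max(), n_bins + 1)[1:-1]
        idx = np.digitize(X[:, j], edges)
        spikes[np.arange(n), j * n_bins + idx] = 1.0
    return spikes

# Iris: 4 features x 4 bins -> 16 spike lines, one active per feature.
```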
### _Network Architecture_
The network architecture consists of an input layer and an output layer, without any hidden layers. The trained network initially includes bias values; during mapping to hardware, the biases are adjusted to be equal, which eliminates the need to represent them in hardware, as detailed in [33]. TABLE II presents the models for the four classification tasks. As an example, the Iris dataset comprises 4 features, each encoded with 4 bins, giving 16 units in the input layer; including the bias term, the input layer consists of 17 units. The number of units in the output layer equals the number of classes. The trainable parameters column gives the number of model parameters that underwent training; see the sanity check below.
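As a quick consistency check (a sketch, not part of the paper's pipeline), the trainable-parameter counts in TABLE II equal the input-layer width (including the bias unit) times the number of output classes:

```python
configs = {  # dataset: (input units incl. bias, output classes)
    "Iris": (17, 3),
    "Wine": (53, 3),
    "Breast Cancer Diagnostic": (121, 2),
    "Banknote": (17, 2),
}
for name, (n_in, n_cls) in configs.items():
    print(f"{name}: {n_in * n_cls} trainable parameters")
# -> 51, 159, 242, 34, matching TABLE II.
```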
### _Results from the Software Model_
To map an abstract software model onto the existing hardware design, a look-up table is required that provides the corresponding READ current for every memristive conductance. In this work, two look-up tables are created, for 1T1R and 3T1R respectively, using the results from the Cadence Spectre simulation, and the column current of the DPE is calculated using these tables. At the output, \(n\) current levels (\(I_{1},I_{2},\ldots,I_{n}\)) need to be distinguishable in order to determine the column with the maximum current; otherwise, distinct currents appear identical due to the limits of current sensing. In this work, the ADC is assumed to have a \(50\,\mathrm{nA}\) current resolution. One limitation of analog DPE-based classification systems is that multiple columns may share the maximum sensed current, a limitation imposed by the ADC. In those cases, the classifier picks the class randomly from among those columns.
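The evaluation loop can be summarized as: LUT lookup per cell, column-current summation, ADC quantization at \(50\,\mathrm{nA}\), and a random tie-break when several columns share the quantized maximum. The sketch below illustrates this flow; the array shapes and names are assumptions.

```python
import numpy as np

ADC_LSB = 50e-9  # assumed ADC current resolution: 50 nA

def column_currents(spikes, lut, level_idx):
    """spikes: (n_samples, n_rows) 0/1 inputs; lut: READ current (A) per
    discrete conductance level (from the Spectre simulation); level_idx:
    (n_rows, n_cols) programmed level of each cell.
    Returns (n_samples, n_cols) column currents."""
    return spikes @ lut[level_idx]

def classify(currents, rng=None):
    """Quantize to the ADC grid, then pick randomly among tied maxima."""
    rng = rng or np.random.default_rng(0)
    q = np.floor(currents / ADC_LSB)
    return np.array([rng.choice(np.flatnonzero(row == row.max()))
                     for row in q])
```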
The software model behaves similarly. TABLE I compares DPE-based classification using 1T1R and 3T1R devices on the four datasets. The validation metrics are training accuracy, testing accuracy, and the mean energy required to classify a single sample. The mean energy is proportional to the size of the input vector and the memristive DPE size; Breast Cancer has the largest DPE and therefore requires the most energy among the four classification tasks. The testing accuracy degradation column shows that the 3T1R-based DPE performs slightly worse on the Wine dataset than the 1T1R-based DPE. However, the 3T1R operates over a low current range
| Dataset | Number of Features | Units in Input Layer | Units in Output Layer (Number of Classes) | Trainable Parameters |
|---|---|---|---|---|
| Iris | 4 | 17 | 3 | 51 |
| Wine | 13 | 53 | 3 | 159 |
| Breast Cancer Diagnostic | 30 | 121 | 2 | 242 |
| Banknote | 4 | 17 | 2 | 34 |

TABLE II: Architecture of the Network
| Dataset | Training Acc. (1T1R), % | Testing Acc. (1T1R), % | Energy (1T1R), pJ | Training Acc. (3T1R), % | Testing Acc. (3T1R), % | Energy (3T1R), pJ | Testing Acc. Degradation, % | Energy Improvement |
|---|---|---|---|---|---|---|---|---|
| Iris | 91.43 | 88.88 | 449.61 | 85.71 | 88.88 | 63.09 | 0 | \(\sim\)7.13x |
| Wine | 96.77 | 85.18 | 1446.56 | 96.77 | 81.48 | 205.47 | \(\sim\)3.7 | \(\sim\)7.04x |
| Breast Cancer | 98.24 | 93.56 | 2324 | 97.99 | 93.56 | 318.46 | 0 | \(\sim\)7.3x |
| Banknote | 93.43 | 91.99 | 296.78 | 92.81 | 91.26 | 42.39 | \(\sim\)0.73 | \(\sim\)7x |

TABLE I: PERFORMANCE EVALUATION BETWEEN 1T1R AND 3T1R
across the \(5\,\mathrm{k}\Omega\) to \(20\,\mathrm{k}\Omega\) window and therefore suffers from current sensing limitations. A \(1\,\mathrm{\SIUnitSymbolMicro s}\) pulse with \(0.6\,\mathrm{V}\) amplitude is applied at the gate of MN1, and the READ current is taken from the \(2^{nd}\) stage of the proposed 3T1R synapse. The percentage of samples in each dataset for which multiple columns share the maximum sensed current is as follows: for the Iris, Wine, Breast Cancer Wisconsin (Diagnostic), and Banknote datasets, the tie percentage for 3T1R was 24.44%, 11.11%, 0.58%, and 1.70%, respectively, whereas for 1T1R it was just 4.4%, 0%, 0%, and 0%. This limitation of the 3T1R can be overcome by using an ADC with better current-sensing capability. On the other hand, the energy is improved by around \(7\times\) when using 3T1R compared to 1T1R. The tie rate can be estimated directly from the quantized column currents, as sketched below.
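A minimal sketch of the tie-rate statistic, assuming the `currents` array produced by the evaluation snippet above:

```python
import numpy as np

def tie_rate(currents, lsb=50e-9):
    """Fraction of samples whose quantized maximum column current is
    shared by two or more columns."""
    q = np.floor(currents / lsb)
    ties = (q == q.max(axis=1, keepdims=True)).sum(axis=1) > 1
    return ties.mean()
```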
In addition, TABLE I shows that the testing accuracy on the Wine dataset is 85.18% when the memristor crossbar is built from 1T1R devices; the accuracy drops by approximately 3.7% when the 1T1R cells in the crossbar are replaced by 3T1R cells.
The 3T1R-based trained DPE model is shown in Fig. 12. The DPE model is simply the resistance of each memristor, shown via the color bar, where the range 5–20 represents \(5\,\mathrm{k}\Omega\)–\(20\,\mathrm{k}\Omega\). Lastly, Fig. 13 shows the confusion matrix on the testing dataset for all four classification tasks; it is clear from the confusion matrices that all models classify all classes well. Moreover, in this work the targeted programming region is \(5\,\mathrm{k}\Omega\) to \(20\,\mathrm{k}\Omega\). In the 3T1R, depending on the programming/SET voltage, sixteen different resistance levels are targeted; prior work has shown that up to 16 resistance levels can be achieved with our memristive device [30]. In the DPE configuration, if the array is small, the cumulative column current shows almost negligible degradation in precision. However, if a column contains more than 16 synaptic units, the cumulative column current saturates.
## VII Peripheral Circuitry
A CMOS digital-to-analog converter (DAC) is utilized for programming. The energy consumption of the DAC to program a synapse with \(1\,\mathrm{V}\) is \(0.311\,\mathrm{fJ}\); here, \(1\,\mathrm{V}\) is the DAC output from a \(1.2\,\mathrm{V}\) supply voltage. Moreover, a Winner-Take-All (WTA) circuit is utilized to rank the column currents from the crossbar array. For a particular sample from the Wine dataset, the column currents are about \(11.2\,\mathrm{\SIUnitSymbolMicro A}\), \(11.5\,\mathrm{\SIUnitSymbolMicro A}\), and \(11.3\,\mathrm{\SIUnitSymbolMicro A}\) for \(Iout1\), \(Iout2\), and \(Iout3\), respectively. The designed WTA circuit consumes \(0.607\,\mathrm{pJ}\) with a \(1\,\mathrm{\SIUnitSymbolMicro s}\) READ pulse to detect the maximum current among the three columns; three columns require only nine NMOS transistors to identify the maximum. Finally, a sense amplifier converts the analog value into a digital value with one bit per column.
## VIII Comparison with prior work
An analog DPE implementation can consume less power per READ cycle than traditional memory such as SRAM and DRAM [14, 15, 35]. According to TABLE III, the authors of [1] implemented a DPE with 4-bit-precision SRAM in a \(45\,\mathrm{nm}\) CMOS process, consuming \(120\,\mathrm{\SIUnitSymbolMicro A}\) for a READ at \(0.4\,\mathrm{V}\), whereas a memristive analog-memory DPE consumed only \(27.58\,\mathrm{\SIUnitSymbolMicro A}\) at \(2\,\mathrm{V}\) [36]. In this paper, two DPE implementations are examined: the 1T1R approach draws \(83.52\,\mathrm{\SIUnitSymbolMicro A}\) at \(0.6\,\mathrm{V}\), while the 3T1R approach draws \(4.459\,\mathrm{\SIUnitSymbolMicro A}\) of READ current at \(0.6\,\mathrm{V}\). The proposed 3T1R device thus shows about \(\sim\)26.91x and \(\sim\)6.19x READ current improvement compared to [1] and [36], respectively, and reduces READ current by \(\sim\)18.73x compared to the 1T1R design. Another research group presents a mixed-signal synapse circuit
Fig. 12: Models of the four classification tasks. Each panel shows the distribution of the memristors’ weights for a task: (a) Iris, (b) Wine, (c) Breast Cancer, and (d) Banknote. The model favors lower weight values to avoid the sensitivity effects of the HRS.
Fig. 13: Confusion matrices of the four datasets: (a) Iris, (b) Wine, (c) Breast Cancer, and (d) Banknote, illustrating the prediction quality of the models. The more samples fall on the diagonal, the better the prediction; all four matrices show excellent prediction capability.
with 5-bit data precision [37]. The READ power consumption of that design is \(27.43\,\mathrm{\SIUnitSymbolMicro W}\), which is \(\sim\)5.15x higher than our 3T1R approach. Another memristive-based design draws \(14\,\mathrm{\SIUnitSymbolMicro W}\) of READ power, \(\sim\)2.62x higher than our proposed 3T1R design [35]. Moreover, the 3T1R design consumes \(\sim\)9.37x lower READ power than the 1T1R design. A memristor-based design in a \(28\,\mathrm{nm}\) CMOS process shows \(17.8\,\mathrm{pJ}\) energy consumption [38], a \(\sim\)3.33x energy overhead compared to our 3T1R design; likewise, the 3T1R design saves \(\sim\)9.37x energy compared with the 1T1R design. However, according to Fig. 3 (c) and (d), the 3T1R shows about 2x and 6x _SET_ current and power overhead compared to the 1T1R structure. This higher _SET_ current can be reduced by lowering the memristive device's overall _SET_ threshold and using a thin-oxide transistor instead of a thick-oxide transistor, which is targeted for future work. The overall area of a 3T1R synapse is almost 2.76x larger than that of a 1T1R synapse; nevertheless, the 3T1R design shows a significant area improvement over prior work [37]. The area overhead of a 3T1R-based DPE can be optimized further by sharing \(MP3\) across a whole column; in Fig. 7, each synapse uses its own \(MP3\) transistor.
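The READ-current ratios quoted in this section follow directly from the measured currents; the short check below reproduces them (the dictionary labels are ours, for illustration):

```python
i_read = {                       # READ current in amperes
    "SRAM DPE [1]":        120e-6,
    "memristive DPE [36]":  27.58e-6,
    "1T1R (this work)":     83.52e-6,
    "3T1R (this work)":      4.459e-6,
}
base = i_read["3T1R (this work)"]
for name, i in i_read.items():
    print(f"{name}: {i / base:.2f}x the 3T1R READ current")
# -> ~26.91x ([1]), ~6.19x ([36]), ~18.73x (1T1R)
```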
## IX Conclusion and future work
Array-based memristive memory for SNNs is power-hungry during READ cycles and shows high sensitivity to process variation in MOSFET fabrication and to the inherent stochastic behavior of memristors. In this work, we leveraged a 3T1R design to reduce both the READ current and the sensitivity to process variation. According to simulation results, READ current reductions of \(\sim\)26.91x and \(\sim\)18.73x are achieved compared to prior work and the 1T1R approach, respectively. In addition, both READ power and energy are reduced by about 9.37x with the 3T1R device compared with the 1T1R device. At the same time, write variation is reduced by targeting a programmable LRS ranging from 5 k\(\Omega\) to 20 k\(\Omega\), the region with the least resistance variability during programming. Finally, hardware-realistic Cadence Spectre results were used to validate a software model on four dataset classification tasks. Due to the small change in READ current across the resistance range of the 3T1R DPE, its precision is impacted; as a result, the classification tasks show a slight decrease in testing accuracy compared to the 1T1R-based DPE. Overall, the proposed design is power efficient and reliable. For future work, the SET currents of the 3T1R will be optimized alongside further testing of our fabricated chip, and the output precision of the 3T1R cells will be enhanced. Optimized electrical procedures such as write-read-verify and ultra-fast pulsing (pulses shorter than \(10\,\mathrm{ns}\)) can be used to achieve more distinct resistance levels. Further analysis of scaling issues due to peripheral circuitry and interconnect distances will also be taken under consideration. In addition, a super-resolution crossbar is a potential candidate to explore and compare with our design [39, 40].
|